Coming from ROS2

If you have experience with ROS2, you already know most of the concepts in HORUS. This guide maps what you know to how HORUS does it, highlights the architectural differences, and shows code side-by-side.

Concept Mapping

ROS2                     HORUS                 Notes
Node                     Node trait            Same concept. Implement tick() instead of callbacks
Publisher / Subscriber   Topic (send/recv)     Named channels, zero-copy via SHM
Service                  Service               Request/response, same pattern
Action                   Action                Long-running tasks with feedback
tf2                      TransformFrame        tf / tf_static topics, tree lookups
Parameter Server         RuntimeParams         Per-node typed parameters
Launch file              Scheduler             Single process, all nodes in one scheduler
rqt / Foxglove           Monitor               Built-in web dashboard + TUI
rosbag                   Record / Replay       Topic recording and playback
QoS profiles             n/a                   Not yet available
Lifecycle node           Node trait            init() / shutdown() methods on every node
DDS middleware           SHM IPC               No middleware layer, sub-microsecond latency
colcon build             horus build           Single manifest (horus.toml), no CMake
ros2 topic echo          horus topic echo      Same idea, different CLI

Architecture Differences

ROS2: Multi-Process, Callback-Based

In ROS2, each node is typically its own OS process. Nodes communicate over DDS (a network middleware), and you write callbacks that fire when messages arrive. Launch files coordinate which processes to start.

ROS2: Each node is a separate process, communicating over DDS middleware

HORUS: Single-Process, Tick-Based

In HORUS, all nodes live in one process. The scheduler calls each node's tick() in a deterministic order every cycle. Nodes communicate through shared-memory topics with zero-copy reads.

HORUS: All nodes in one process, deterministic tick order, zero-copy SHM
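The tick model can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the HORUS API: the `Node` and `Scheduler` classes below exist only to show how a fixed per-cycle ordering replaces callbacks.

```python
class Node:
    """Stand-in for a tick-based node: no callbacks, just tick()."""
    def __init__(self, name, trace):
        self.name = name
        self.trace = trace

    def tick(self):
        self.trace.append(self.name)


class Scheduler:
    """Calls every node's tick() in a fixed, explicit order each cycle."""
    def __init__(self):
        self.nodes = []

    def add(self, node, order):
        self.nodes.append((order, node))
        self.nodes.sort(key=lambda entry: entry[0])

    def run_cycles(self, n):
        for _ in range(n):
            for _, node in self.nodes:
                node.tick()


trace = []
sched = Scheduler()
sched.add(Node("actuator", trace), order=2)
sched.add(Node("sensor", trace), order=0)
sched.add(Node("controller", trace), order=1)
sched.run_cycles(2)

# Every cycle runs sensor -> controller -> actuator, regardless of add() order.
assert trace == ["sensor", "controller", "actuator"] * 2
```

Note that execution order is a property of the scheduler configuration, not of message arrival timing — this is the core difference from a callback executor.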

Why Tick-Based Matters for Real-Time

Property               ROS2 Callbacks                       HORUS Ticks
Execution order        Non-deterministic                    Deterministic (.order())
Timing jitter          Depends on DDS, OS scheduling        Bounded by scheduler budget
Deadline enforcement   Manual (timers)                      Built-in (.deadline(), .on_miss())
Thread safety          You manage mutexes                   Single-threaded tick, no locks needed
Latency                Microseconds to milliseconds (DDS)   Sub-microsecond (SHM)

Cross-Process Communication

HORUS nodes can still talk across processes. SHM topics are visible to any process on the same machine. You simply run two schedulers that share the same topic names — no DDS required.

Code Comparison

Here is the same motor controller node in ROS2 C++ and in HORUS Rust. (The HORUS Python API follows the same tick structure; see the notes below.)

ROS2 C++

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/imu.hpp>
#include <geometry_msgs/msg/twist.hpp>

using namespace std::chrono_literals;  // for the 10ms timer literal

class MotorNode : public rclcpp::Node {
public:
  MotorNode() : Node("motor") {
    sub_ = create_subscription<sensor_msgs::msg::Imu>(
      "imu", 10, [this](sensor_msgs::msg::Imu::SharedPtr msg) {
        last_imu_ = *msg;
      });
    pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
    timer_ = create_wall_timer(10ms, [this]() { tick(); });
  }

private:
  void tick() {
    geometry_msgs::msg::Twist cmd;
    cmd.linear.x = compute_speed(last_imu_);
    pub_->publish(cmd);
  }

  double compute_speed(const sensor_msgs::msg::Imu&) {
    return 0.0;  // placeholder control law
  }

  rclcpp::Subscription<sensor_msgs::msg::Imu>::SharedPtr sub_;
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
  sensor_msgs::msg::Imu last_imu_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MotorNode>());
  rclcpp::shutdown();
  return 0;
}

HORUS

use horus::prelude::*;

struct MotorNode {
    imu_sub: Topic<Imu>,
    cmd_pub: Topic<Twist>,
}

impl MotorNode {
    fn new() -> Result<Self> {
        Ok(Self {
            imu_sub: Topic::new("imu")?,
            cmd_pub: Topic::new("cmd_vel")?,
        })
    }
}

impl Node for MotorNode {
    fn name(&self) -> &str { "motor" }

    fn tick(&mut self) {
        if let Some(imu) = self.imu_sub.recv() {
            let cmd = Twist::default(); // compute from IMU
            self.cmd_pub.send(cmd);
        }
    }
}

fn main() -> Result<()> {
    let mut scheduler = Scheduler::new();
    scheduler.add(MotorNode::new()?)
        .order(0)
        .rate(100_u64.hz())
        .build()?;
    scheduler.run()
}

Key differences from ROS2:

  • No callback boilerplate -- tick() reads and writes directly
  • Rate is set on the scheduler, not via a timer
  • No SharedPtr, no mutex -- the scheduler guarantees single-threaded access
  • Scheduler::run() / horus.run() replaces rclcpp::spin()
  • Python uses node.recv("topic") and node.send("topic", data) instead of typed subscribers

Message Type Mapping

ROS2 Message              HORUS Type    Module
sensor_msgs/Imu           Imu           horus::prelude
sensor_msgs/LaserScan     LaserScan     horus::prelude
sensor_msgs/Image         Image         horus::memory
sensor_msgs/JointState    JointState    horus::prelude
sensor_msgs/PointCloud2   PointCloud    horus::memory
geometry_msgs/Twist       Twist         horus::prelude
geometry_msgs/Pose        Pose3D        horus::prelude
geometry_msgs/Transform   TFMessage     horus::transform_frame
nav_msgs/Odometry         Odometry      horus::prelude
std_msgs/String           String        Rust stdlib
std_msgs/Bool             bool          Rust stdlib
std_msgs/Float64          f64           Rust stdlib

What Happens When a Node Misbehaves

In ROS2, if a node hangs, nothing detects it — the robot keeps running on its last commanded velocity. There is no built-in watchdog or deadline enforcement.

In HORUS, the scheduler monitors every node with a graduated watchdog. If a node stops completing ticks on time, the system automatically warns, skips the node, and eventually calls enter_safe_state() to stop motors and apply brakes — all without any other node being affected.
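A graduated watchdog can be sketched as a small state machine. This is an illustration of the escalation idea only — the class name, thresholds, and action labels below are invented for the example and are not the HORUS defaults:

```python
class Watchdog:
    """Escalates as a node keeps missing ticks: warn, then skip, then safe state."""
    def __init__(self, skip_after=3, safe_after=5):
        self.misses = 0
        self.skip_after = skip_after
        self.safe_after = safe_after

    def report(self, tick_completed_on_time):
        if tick_completed_on_time:
            self.misses = 0          # recovery resets the escalation
            return "ok"
        self.misses += 1
        if self.misses >= self.safe_after:
            return "safe_state"      # e.g. stop motors, apply brakes
        if self.misses >= self.skip_after:
            return "skip"            # skip this node, others keep running
        return "warn"


wd = Watchdog()
actions = [wd.report(False) for _ in range(5)]
assert actions == ["warn", "warn", "skip", "skip", "safe_state"]
assert wd.report(True) == "ok"
```

The key property is that the response is per-node and graduated: one hung node degrades into a skip, then a safe state, without taking the rest of the system down with it.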

See Safety Monitor for the full reference with configuration, timeout guidelines, and code examples.

What HORUS Adds Over ROS2

Zero-copy SHM. Topics use shared memory by default. Readers get a direct pointer to the data — no serialization, no copy. This gives sub-microsecond publish-to-read latency.
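The zero-copy idea is the same one `memoryview` exposes in Python: a reader holds a view into the very bytes the writer mutates, so nothing is serialized or duplicated. An in-process analogy (not the HORUS SHM implementation):

```python
buffer = bytearray(16)     # stands in for a shared-memory topic slot
view = memoryview(buffer)  # zero-copy "reader": a window onto the same bytes

buffer[0:4] = b"imu0"      # writer publishes in place
assert bytes(view[0:4]) == b"imu0"  # reader observes it with no copy or decode
```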

Deterministic mode. The scheduler can run in lockstep with a simulation clock. Every tick produces identical results given the same inputs. This is critical for sim-to-real transfer.
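Determinism here reduces to making each tick a pure function of its explicit inputs and state, so replaying the same input stream reproduces the same outputs. A sketch of that property (illustrative; the gain of 0.5 and the node are invented, and this is not HORUS's deterministic mode itself):

```python
def controller_tick(state, imu_reading):
    """Pure tick: output depends only on explicit inputs and prior state."""
    state = state + imu_reading
    command = state * 0.5   # toy control law
    return state, command

def run(inputs):
    state, outputs = 0.0, []
    for reading in inputs:
        state, cmd = controller_tick(state, reading)
        outputs.append(cmd)
    return outputs

inputs = [0.1, -0.2, 0.3]
assert run(inputs) == run(inputs)  # identical results given the same inputs
```

Callback systems break this property because output depends on arrival order and wall-clock timing, neither of which appears as an explicit input.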

Built-in safety monitor. Every node has a watchdog. If a node exceeds its deadline, the scheduler can warn, skip the node, reduce its rate, or trigger a safe-state shutdown — all configured per-node via .on_miss().

Auto-RT detection. Set .rate() or .budget() on a node and HORUS automatically classifies it as real-time. No need to manually configure thread priorities or scheduling policies.

Single-file configuration. One horus.toml replaces package.xml, CMakeLists.txt, setup.py, and launch files. Dependencies, scripts, and node configuration all live in one place.
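For flavor, a manifest along these lines replaces the four ROS2 files. This fragment is a hypothetical sketch — the section and field names are invented for illustration; consult the horus.toml reference for the real schema:

```toml
# Hypothetical sketch -- names are illustrative, not the actual schema.
[package]
name = "my_robot"
version = "0.1.0"

[dependencies]
# one line per dependency, instead of edits across package.xml,
# CMakeLists.txt, and setup.py
```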

What HORUS Doesn't Have Yet

Multi-machine networking. HORUS currently runs on a single machine. SHM topics do not cross network boundaries. For multi-machine setups, you would need a custom bridge.

Visualization (rviz equivalent). There is no 3D visualization tool like rviz. The Monitor provides metrics dashboards but not scene rendering.

Bag file format. Record/Replay works but uses an internal format. There is no equivalent to the rosbag2 format or interoperability with ROS2 bags.

QoS profiles. There is no quality-of-service configuration for topics (reliability, durability, history depth). Topics are currently best-effort with configurable buffer sizes.

Ecosystem breadth. ROS2 has thousands of community packages. HORUS is younger and has a smaller library of pre-built drivers and algorithms. Check the HORUS Registry for available packages.

Migration Checklist

If you are porting a ROS2 project to HORUS:

  1. Map your nodes. Each ROS2 node becomes a struct implementing the Node trait
  2. Replace callbacks with tick(). Read all inputs at the top of tick(), compute, then publish outputs
  3. Convert message types. Use the mapping table above. Custom messages become Rust structs
  4. Replace launch files. Build your scheduler in main() with .add() calls
  5. Replace package.xml + CMakeLists.txt. Write one horus.toml
  6. Replace tf2 with TransformFrame. Same tree semantics, publish to tf / tf_static topics
  7. Test with tick_once(). HORUS supports single-tick execution for deterministic unit tests
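The single-tick testing pattern in step 7 looks roughly like this in plain Python (the node and its fields are invented for the example; this mimics the style of a HORUS test rather than using the real tick_once() API):

```python
class SpeedLimiter:
    """Example node under test: clamps a commanded speed each tick."""
    def __init__(self, max_speed):
        self.max_speed = max_speed
        self.input_cmd = None
        self.output_cmd = None

    def tick(self):
        if self.input_cmd is not None:
            self.output_cmd = max(-self.max_speed,
                                  min(self.max_speed, self.input_cmd))


# Deterministic unit test: inject input, run exactly one tick, check output.
node = SpeedLimiter(max_speed=1.0)
node.input_cmd = 2.5
node.tick()
assert node.output_cmd == 1.0
```

Because a tick reads its inputs from plain fields rather than waiting on callbacks, the test needs no executor, no spinning, and no timing assumptions.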

Design Decisions

Why tick() instead of callbacks? ROS2 callbacks fire when messages arrive — the order depends on timing, executor implementation, and system load. In HORUS, the scheduler calls tick() on every node in a fixed order every cycle. This means you always know that the sensor node ran before the controller, and the controller ran before the actuator. For safety-critical systems, deterministic ordering eliminates race conditions that only appear under load.

Why single-process instead of multi-process? ROS2's multi-process model uses DDS for inter-process communication, adding serialization and kernel transitions (~50 µs per message). HORUS's single-process model uses in-process ring buffers (~3–36 ns). For robots where all software runs on one computer, the multi-process overhead is pure waste. When you do need process isolation (fault boundaries), HORUS supports cross-process topics transparently — same API, same code, just separate processes.
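The in-process ring buffer mentioned above can be sketched as a fixed-capacity buffer whose writer never blocks and simply overwrites the oldest slot. This is a simplification for illustration — the real buffers are preallocated and lock-free, and the class below is invented:

```python
class RingTopic:
    """Fixed-capacity ring: sends never block; oldest unread entries are dropped."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.write_idx = 0
        self.read_idx = 0

    def send(self, msg):
        self.slots[self.write_idx % len(self.slots)] = msg
        self.write_idx += 1
        # If the writer laps the reader, advance past the overwritten message.
        if self.write_idx - self.read_idx > len(self.slots):
            self.read_idx = self.write_idx - len(self.slots)

    def recv(self):
        if self.read_idx == self.write_idx:
            return None                       # nothing new this tick
        msg = self.slots[self.read_idx % len(self.slots)]
        self.read_idx += 1
        return msg


topic = RingTopic(capacity=2)
for i in range(3):        # three sends into two slots: message 0 is overwritten
    topic.send(i)
assert topic.recv() == 1
assert topic.recv() == 2
assert topic.recv() is None
```

A single-threaded tick cycle makes this safe without locks: the scheduler guarantees a slot is never read and written in the same instant.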

Why horus.toml instead of package.xml + CMakeLists.txt? ROS2 inherits its build system from catkin/ament, which requires separate files for package metadata (package.xml), build instructions (CMakeLists.txt), Python setup (setup.py), and node launch (launch.py). HORUS collapses all of this into one TOML file. Adding a dependency is one line, not three edits across three files.

See Also