Coming from ROS2
If you have experience with ROS2, you already know most of the concepts in HORUS. This guide maps what you know to how HORUS does it, highlights the architectural differences, and shows code side-by-side.
Concept Mapping
| ROS2 | HORUS | Notes |
|---|---|---|
| Node | Node trait | Same concept. Implement tick() instead of callbacks |
| Publisher / Subscriber | Topic (send/recv) | Named channels, zero-copy via SHM |
| Service | Service | Request/response, same pattern |
| Action | Action | Long-running tasks with feedback |
| tf2 | TransformFrame | tf / tf_static topics, tree lookups |
| Parameter Server | RuntimeParams | Per-node typed parameters |
| Launch file | Scheduler | Single process, all nodes in one scheduler |
| rqt / Foxglove | Monitor | Built-in web dashboard + TUI |
| rosbag | Record / Replay | Topic recording and playback |
| QoS profiles | — | Not yet available |
| Lifecycle node | Node trait | init() / shutdown() methods on every node |
| DDS middleware | SHM IPC | No middleware layer, sub-microsecond latency |
| colcon build | horus build | Single manifest (horus.toml), no CMake |
| ros2 topic echo | horus topic echo | Same idea, different CLI |
Architecture Differences
ROS2: Multi-Process, Callback-Based
In ROS2, each node is typically its own OS process. Nodes communicate over DDS (a network middleware), and you write callbacks that fire when messages arrive. Launch files coordinate which processes to start.
HORUS: Single-Process, Tick-Based
In HORUS, all nodes live in one process. The scheduler calls each node's tick() in a deterministic order every cycle. Nodes communicate through shared-memory topics with zero-copy reads.
Why Tick-Based Matters for Real-Time
| Property | ROS2 Callbacks | HORUS Ticks |
|---|---|---|
| Execution order | Non-deterministic | Deterministic (.order()) |
| Timing jitter | Depends on DDS, OS scheduling | Bounded by scheduler budget |
| Deadline enforcement | Manual (timers) | Built-in (.deadline(), .on_miss()) |
| Thread safety | You manage mutexes | Single-threaded tick, no locks needed |
| Latency | Microseconds to milliseconds (DDS) | Sub-microsecond (SHM) |
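The ordering guarantee in the first row can be made concrete with a minimal, self-contained sketch of the tick model in plain Rust (this is an illustration of the concept, not the HORUS API): a loop that calls tick() on every node in a fixed order each cycle.

```rust
// Minimal sketch of deterministic tick ordering (plain Rust, not the HORUS API).
trait Node {
    fn tick(&mut self, trace: &mut Vec<&'static str>);
}

struct Sensor;
impl Node for Sensor {
    fn tick(&mut self, trace: &mut Vec<&'static str>) { trace.push("sensor"); }
}

struct Controller;
impl Node for Controller {
    fn tick(&mut self, trace: &mut Vec<&'static str>) { trace.push("controller"); }
}

fn main() {
    // Node order is fixed once, up front; the loop never deviates from it.
    let mut nodes: Vec<Box<dyn Node>> = vec![Box::new(Sensor), Box::new(Controller)];
    let mut trace = Vec::new();
    for _cycle in 0..2 {
        for node in nodes.iter_mut() {
            node.tick(&mut trace);
        }
    }
    // The sensor runs before the controller on every cycle, under any load.
    assert_eq!(trace, vec!["sensor", "controller", "sensor", "controller"]);
    println!("ok: {:?}", trace);
}
```

In a callback model the equivalent trace depends on message arrival order and executor scheduling; here it is a property of the code itself.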
Cross-Process Communication
HORUS nodes can still talk across processes. SHM topics are visible to any process on the same machine. You simply run two schedulers that share the same topic names — no DDS required.
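As a sketch of what that looks like, two separate binaries only need to agree on the topic name. This uses the same Topic and Node API as the code comparison below; the cross-process details (permissions, buffer sizing) are assumptions to verify against the HORUS docs.

```rust
use horus::prelude::*;

// --- process A (its own binary): publishes on the SHM topic "imu" ---
struct ImuDriver { imu_pub: Topic<Imu> }
impl Node for ImuDriver {
    fn name(&self) -> &str { "imu_driver" }
    fn tick(&mut self) {
        // Visible to any process on the same machine.
        self.imu_pub.send(Imu::default());
    }
}

// --- process B (a separate binary): receives on the same name ---
struct Logger { imu_sub: Topic<Imu> }
impl Node for Logger {
    fn name(&self) -> &str { "logger" }
    fn tick(&mut self) {
        if let Some(_imu) = self.imu_sub.recv() {
            // Same topic name, different process -- no DDS involved.
        }
    }
}
```

Each binary builds its own Scheduler and adds its own nodes; the shared topic name is the only coupling between them.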
Code Comparison
Here is the same motor controller node in ROS2 C++ and in HORUS Rust; the Python API follows the same tick/send/recv shape.
ROS2 C++
```cpp
#include <chrono>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/imu.hpp>
#include <geometry_msgs/msg/twist.hpp>

using namespace std::chrono_literals; // needed for the 10ms literal

// Placeholder control law
static double compute_speed(const sensor_msgs::msg::Imu& imu) {
  return imu.linear_acceleration.x * 0.1;
}

class MotorNode : public rclcpp::Node {
public:
  MotorNode() : Node("motor") {
    sub_ = create_subscription<sensor_msgs::msg::Imu>(
        "imu", 10, [this](sensor_msgs::msg::Imu::SharedPtr msg) {
          last_imu_ = *msg;
        });
    pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
    timer_ = create_wall_timer(10ms, [this]() { tick(); });
  }

private:
  void tick() {
    geometry_msgs::msg::Twist cmd;
    cmd.linear.x = compute_speed(last_imu_);
    pub_->publish(cmd);
  }

  rclcpp::Subscription<sensor_msgs::msg::Imu>::SharedPtr sub_;
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
  sensor_msgs::msg::Imu last_imu_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MotorNode>());
  rclcpp::shutdown();
  return 0;
}
```
HORUS
```rust
use horus::prelude::*;

struct MotorNode {
    imu_sub: Topic<Imu>,
    cmd_pub: Topic<Twist>,
}

impl MotorNode {
    fn new() -> Result<Self> {
        Ok(Self {
            imu_sub: Topic::new("imu")?,
            cmd_pub: Topic::new("cmd_vel")?,
        })
    }
}

impl Node for MotorNode {
    fn name(&self) -> &str { "motor_node" }

    fn tick(&mut self) {
        if let Some(imu) = self.imu_sub.recv() {
            let cmd = Twist::default(); // compute from `imu`
            self.cmd_pub.send(cmd);
        }
    }
}

fn main() -> Result<()> {
    let mut scheduler = Scheduler::new();
    scheduler.add(MotorNode::new()?)
        .order(0)
        .rate(100_u64.hz())
        .build()?;
    scheduler.run()
}
```
Key differences from ROS2:
- No callback boilerplate -- tick() reads and writes directly
- Rate is set on the scheduler, not via a timer
- No SharedPtr, no mutex -- the scheduler guarantees single-threaded access
- Scheduler::run() / horus.run() replaces rclcpp::spin()
- Python uses node.recv("topic") and node.send("topic", data) instead of typed subscribers
Message Type Mapping
| ROS2 Message | HORUS Type | Module |
|---|---|---|
| sensor_msgs/Imu | Imu | horus::prelude |
| sensor_msgs/LaserScan | LaserScan | horus::prelude |
| sensor_msgs/Image | Image | horus::memory |
| sensor_msgs/JointState | JointState | horus::prelude |
| sensor_msgs/PointCloud2 | PointCloud | horus::memory |
| geometry_msgs/Twist | Twist | horus::prelude |
| geometry_msgs/Pose | Pose3D | horus::prelude |
| geometry_msgs/Transform | TFMessage | horus::transform_frame |
| nav_msgs/Odometry | Odometry | horus::prelude |
| std_msgs/String | String | Rust stdlib |
| std_msgs/Bool | bool | Rust stdlib |
| std_msgs/Float64 | f64 | Rust stdlib |
What Happens When a Node Misbehaves
In ROS2, if a node hangs, nothing detects it — the robot keeps running on its last commanded velocity. There is no built-in watchdog or deadline enforcement.
In HORUS, the scheduler monitors every node with a graduated watchdog. If a node stops completing ticks on time, the system automatically warns, skips the node, and eventually calls enter_safe_state() to stop motors and apply brakes — all without any other node being affected.
See Safety Monitor for the full reference with configuration, timeout guidelines, and code examples.
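The graduated escalation described above can be sketched as a small state machine. This is illustrative plain Rust, not the HORUS API, and the thresholds (warn after 1 miss, skip after 3, safe state after 10) are invented for the example, not HORUS defaults.

```rust
// Illustrative graduated-watchdog sketch (plain Rust, not the HORUS API).
// Escalation thresholds are invented for the example.
#[derive(Debug, PartialEq)]
enum Action { Ok, Warn, Skip, SafeState }

struct Watchdog { consecutive_misses: u32 }

impl Watchdog {
    fn new() -> Self { Self { consecutive_misses: 0 } }

    // Called once per cycle with whether the node completed its tick on time.
    fn report(&mut self, deadline_met: bool) -> Action {
        if deadline_met {
            self.consecutive_misses = 0; // recovery resets the escalation
            return Action::Ok;
        }
        self.consecutive_misses += 1;
        match self.consecutive_misses {
            1..=2 => Action::Warn,  // log it, keep running
            3..=9 => Action::Skip,  // skip this node; other nodes are unaffected
            _ => Action::SafeState, // stop motors, apply brakes
        }
    }
}

fn main() {
    let mut wd = Watchdog::new();
    assert_eq!(wd.report(true), Action::Ok);
    assert_eq!(wd.report(false), Action::Warn);
    assert_eq!(wd.report(false), Action::Warn);
    assert_eq!(wd.report(false), Action::Skip);
    for _ in 0..6 { wd.report(false); } // misses 4..=9 keep skipping
    assert_eq!(wd.report(false), Action::SafeState);
    println!("watchdog escalation ok");
}
```

The key property is that escalation is per-node and resets on recovery, so one slow node degrades gracefully instead of taking the robot down.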
What HORUS Adds Over ROS2
Zero-copy SHM. Topics use shared memory by default. Readers get a direct pointer to the data — no serialization, no copy. This gives sub-microsecond publish-to-read latency.
Deterministic mode. The scheduler can run in lockstep with a simulation clock. Every tick produces identical results given the same inputs. This is critical for sim-to-real transfer.
Built-in safety monitor. Every node has a watchdog. If a node exceeds its deadline, the scheduler can warn, skip the node, reduce its rate, or trigger a safe-state shutdown — all configured per-node via .on_miss().
Auto-RT detection. Set .rate() or .budget() on a node and HORUS automatically classifies it as real-time. No need to manually configure thread priorities or scheduling policies.
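Combining the builder methods this page names (.order(), .rate(), .budget(), .deadline(), .on_miss()), per-node real-time configuration might look like the sketch below. The method names come from this page, but the argument types (the time-unit helpers and the MissPolicy enum) are assumptions, not confirmed API.

```rust
// Sketch: per-node RT configuration. Method names are from this page;
// `us()` and `MissPolicy` are assumptions for illustration.
scheduler.add(MotorNode::new()?)
    .order(1)                       // runs after order-0 nodes each cycle
    .rate(1000_u64.hz())            // setting a rate auto-classifies the node as RT
    .budget(200_u64.us())           // per-tick CPU budget
    .deadline(500_u64.us())         // completion deadline for the watchdog
    .on_miss(MissPolicy::SafeState) // escalate on repeated deadline misses
    .build()?;
```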
Single-file configuration. One horus.toml replaces package.xml, CMakeLists.txt, setup.py, and launch files. Dependencies, scripts, and node configuration all live in one place.
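As a sketch of what a manifest might contain (every field name here is an assumption; consult the HORUS manifest reference for the real schema):

```toml
# Illustrative horus.toml sketch. Field names are assumptions,
# not the authoritative manifest schema.
[package]
name = "motor_demo"
version = "0.1.0"

[dependencies]
horus = "*"

[nodes.motor_node]
order = 0
rate_hz = 100
```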
What HORUS Doesn't Have Yet
Multi-machine networking. HORUS currently runs on a single machine. SHM topics do not cross network boundaries. For multi-machine setups, you would need a custom bridge.
Visualization (rviz equivalent). There is no 3D visualization tool like rviz. The Monitor provides metrics dashboards but not scene rendering.
Bag file format. Record/Replay works but uses an internal format. There is no equivalent to the rosbag2 format or interoperability with ROS2 bags.
QoS profiles. There is no quality-of-service configuration for topics (reliability, durability, history depth). Topics are currently best-effort with configurable buffer sizes.
Ecosystem breadth. ROS2 has thousands of community packages. HORUS is younger and has a smaller library of pre-built drivers and algorithms. Check the HORUS Registry for available packages.
Migration Checklist
If you are porting a ROS2 project to HORUS:
- Map your nodes. Each ROS2 node becomes a struct implementing the Node trait
- Replace callbacks with tick(). Read all inputs at the top of tick(), compute, then publish outputs
- Convert message types. Use the mapping table above. Custom messages become Rust structs
- Replace launch files. Build your scheduler in main() with .add() calls
- Replace package.xml + CMakeLists.txt. Write one horus.toml
- Replace tf2 with TransformFrame. Same tree semantics, publish to tf / tf_static topics
- Test with tick_once(). HORUS supports single-tick execution for deterministic unit tests
Design Decisions
Why tick() instead of callbacks?
ROS2 callbacks fire when messages arrive — the order depends on timing, executor implementation, and system load. In HORUS, the scheduler calls tick() on every node in a fixed order every cycle. This means you always know that the sensor node ran before the controller, and the controller ran before the actuator. For safety-critical systems, deterministic ordering eliminates race conditions that only appear under load.
Why single-process instead of multi-process?
ROS2's multi-process model uses DDS for inter-process communication, adding serialization and kernel transitions (~50 µs per message). HORUS's single-process model uses in-process ring buffers (~3–36 ns). For robots where all software runs on one computer, the multi-process overhead is pure waste. When you do need process isolation (fault boundaries), HORUS supports cross-process topics transparently — same API, same code, just separate processes.
Why horus.toml instead of package.xml + CMakeLists.txt?
ROS2 inherits its build system from catkin/ament, which requires separate files for package metadata (package.xml), build instructions (CMakeLists.txt), Python setup (setup.py), and node launch (launch.py). HORUS collapses all of this into one TOML file. Adding a dependency is one line, not three edits across three files.
See Also
- HORUS vs ROS2 — Detailed feature comparison with benchmarks
- Why HORUS? — Motivation and design philosophy
- Quick Start — Build your first HORUS application
- Architecture — System design overview
- CLI Reference — HORUS CLI commands (mapped from ros2 CLI)