Why HORUS?

You're building a robot. Maybe it's a warehouse AGV, a surgical arm, a drone, or a research platform. You need software that reads sensors, computes control signals, and drives actuators — all on a single computer, all in real time.

The conventional answer is ROS2. But within a week, you're debugging DDS discovery, tuning QoS profiles, writing three config files per package, and wondering why your 1 kHz motor controller has 100 µs jitter. You're spending more time fighting the framework than building the robot.

HORUS exists because most robots don't need a distributed middleware stack. They need fast, deterministic communication between components on the same machine — and a framework that gets out of the way.

What HORUS Does Differently

Shared Memory IPC — 575x Faster

Traditional robotics middleware (ROS2/DDS) serializes messages, pushes them through a network stack, and deserializes on the other end — even when sender and receiver are on the same machine. HORUS skips all of that. Topics use shared memory: the publisher writes data once, the subscriber reads from the same address. No copies, no serialization, no kernel transitions.

| Message | HORUS | ROS2 (DDS) | Speedup |
|---|---|---|---|
| Motor command (16 B) | ~85 ns | ~50 µs | 588x |
| IMU reading (304 B) | ~400 ns | ~55 µs | 138x |
| LiDAR scan (1.5 KB) | ~900 ns | ~70 µs | 78x |
| Point cloud (12 KB) | ~12 µs | ~150 µs | 13x |

Measured on Intel i9-14900K. See Benchmarks for full methodology.

The speedup matters most for small, frequent messages — exactly the CmdVel and IMU messages that drive tight control loops. At 1 kHz, a 50 µs DDS message eats 5% of every cycle. An 85 ns HORUS message is negligible.

Deterministic Execution — No Race Conditions

HORUS runs nodes in a guaranteed order every tick:

// simplified
scheduler.add(SafetyMonitor::new()?).order(0).build()?;  // Always first
scheduler.add(SensorReader::new()?).order(1).build()?;    // Always second
scheduler.add(Controller::new()?).order(2).build()?;      // Always third
scheduler.add(Actuator::new()?).order(3).build()?;        // Always last

No callback scheduling surprises. No mutex deadlocks. The safety monitor always runs before the actuator — every tick, guaranteed. Two runs of the same code produce the same execution order. For safety-critical systems, this is not optional.

In ROS2, callbacks fire when events arrive. Execution order depends on timing, message arrival, and executor implementation. Under load, callbacks can be delayed or reordered. Two runs of the same code may execute callbacks in different orders.

Auto-Detected Real-Time

Set a rate or budget, and HORUS automatically enables RT features — dedicated thread, budget enforcement, deadline monitoring:

// simplified
scheduler.add(MotorNode::new()?)
    .order(0)
    .rate(1000_u64.hz())     // 1 kHz → auto-enables RT, derives budget + deadline
    .on_miss(Miss::SafeMode) // Enter safe state if tick takes too long
    .build()?;

No DDS QoS tuning. No rmw configuration files. No manual thread priority management. Declare your timing requirements and HORUS handles the rest.

Single-File Config

One horus.toml replaces CMakeLists.txt, package.xml, and launch files:

[package]
name = "warehouse-robot"
version = "1.0.0"

[dependencies]
nalgebra = "0.32"

[scripts]
start = "horus run --release"
test = "horus test --parallel"

Built-in Safety

Safety features are part of the scheduler — not bolted on after the fact:

  • Watchdog: Detects frozen nodes with graduated degradation (warn → skip → isolate)
  • Deadline enforcement: .budget() and .deadline() are first-class scheduler features
  • Safe state: Every node can implement enter_safe_state() — stop motors, close valves
  • Emergency stop: Event-driven nodes react in microseconds via .on("emergency.stop")
  • BlackBox: Flight recorder for post-mortem crash analysis
  • Fault tolerance: Per-node failure policies — restart, skip, or fatal
  • Record & Replay: Tick-perfect replay for reproducing field bugs

Rust + Python — Your Choice

// simplified
// Rust: Maximum performance, compile-time safety
struct Controller { cmd: Topic<CmdVel> }
impl Node for Controller {
    fn name(&self) -> &str { "Controller" }
    fn tick(&mut self) { self.cmd.send(compute_velocity()); }
}

Same framework, same topics, same scheduler. Mix Rust and Python nodes in the same application. Use Python for prototyping and ML, Rust for production control — or both simultaneously.

Who Uses HORUS

  • Research labs prototyping new robot behaviors (Python for quick iteration)
  • Startups building production robots (Rust safety + performance)
  • Control engineers who need deterministic timing (auto-RT, deadline enforcement)
  • Teams migrating from ROS2 who want simpler tooling without sacrificing capability
  • Solo developers who want to build a robot without weeks of framework setup

What HORUS Is NOT (Yet)

Being honest about current limitations:

| Limitation | Impact | Workaround |
|---|---|---|
| Single-machine only | No distributed multi-robot communication | Use ROS2 for fleet management, HORUS for on-robot control |
| No RViz equivalent | No 3D visualization of robot state | horus monitor shows nodes/topics/metrics; use Foxglove for 3D |
| Smaller ecosystem | Fewer ready-made packages than ROS2's 15-year library | Growing registry; HORUS packages + Rust crate ecosystem |
| No ROS2 bag compatibility | Can't replay existing rosbag2 files | HORUS has its own recording/replay system |

These are active development areas. The core framework is production-ready; the ecosystem is growing.

Get Started

curl -fsSL https://gitlab.com/softmata/horus/-/raw/release/install.sh | bash
horus new my-robot
cd my-robot && horus run

See Also