# Why HORUS?
You're building a robot. Maybe it's a warehouse AGV, a surgical arm, a drone, or a research platform. You need software that reads sensors, computes control signals, and drives actuators — all on a single computer, all in real time.
The conventional answer is ROS2. But within a week, you're debugging DDS discovery, tuning QoS profiles, writing three config files per package, and wondering why your 1 kHz motor controller has 100 µs jitter. You're spending more time fighting the framework than building the robot.
HORUS exists because most robots don't need a distributed middleware stack. They need fast, deterministic communication between components on the same machine — and a framework that gets out of the way.
## What HORUS Does Differently
### Shared Memory IPC — Up to 588x Faster
Traditional robotics middleware (ROS2/DDS) serializes messages, pushes them through a network stack, and deserializes on the other end — even when sender and receiver are on the same machine. HORUS skips all of that. Topics use shared memory: the publisher writes data once, the subscriber reads from the same address. No copies, no serialization, no kernel transitions.
| Message | HORUS | ROS2 (DDS) | Speedup |
|---|---|---|---|
| Motor command (16 B) | ~85 ns | ~50 µs | 588x |
| IMU reading (304 B) | ~400 ns | ~55 µs | 138x |
| LiDAR scan (1.5 KB) | ~900 ns | ~70 µs | 78x |
| Point cloud (12 KB) | ~12 µs | ~150 µs | 13x |
Measured on Intel i9-14900K. See Benchmarks for full methodology.
The speedup matters most for small, frequent messages — exactly the CmdVel and IMU messages that drive tight control loops. At 1 kHz, a 50 µs DDS message eats 5% of every cycle. An 85 ns HORUS message is negligible.
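The mechanism can be illustrated without HORUS at all. The sketch below uses Python's standard `multiprocessing.shared_memory` module (not the HORUS API) to show the core idea: the publisher packs a 16-byte motor command directly into a shared block, and the subscriber attaches to the same block and reads the same bytes — no serialization, no network stack in between.

```python
# Not HORUS code — a stdlib sketch of the zero-copy idea behind its topics:
# the publisher writes into a shared-memory block once, and the subscriber
# reads the same bytes in place, with no serialize/deserialize step.
import struct
from multiprocessing import shared_memory

# "Publisher" side: create a block and write a 16-byte motor command in place.
shm = shared_memory.SharedMemory(create=True, size=16)
struct.pack_into("<dd", shm.buf, 0, 1.5, 0.2)  # linear = 1.5 m/s, angular = 0.2 rad/s

# "Subscriber" side: attach to the same block by name and read directly.
reader = shared_memory.SharedMemory(name=shm.name)
linear, angular = struct.unpack_from("<dd", reader.buf, 0)
print(linear, angular)  # 1.5 0.2

reader.close()
shm.close()
shm.unlink()
```

In a real system the two sides live in different processes; HORUS additionally layers topic discovery and synchronization on top, which this sketch omits.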
### Deterministic Execution — No Race Conditions
HORUS runs nodes in a guaranteed order every tick:
```rust
// simplified
scheduler.add(SafetyMonitor::new()?).order(0).build()?; // Always first
scheduler.add(SensorReader::new()?).order(1).build()?;  // Always second
scheduler.add(Controller::new()?).order(2).build()?;    // Always third
scheduler.add(Actuator::new()?).order(3).build()?;      // Always last
```
No callback scheduling surprises. No mutex deadlocks. The safety monitor always runs before the actuator — every tick, guaranteed. Two runs of the same code produce the same execution order. For safety-critical systems, this is not optional.
In ROS2, callbacks fire when events arrive. Execution order depends on timing, message arrival, and executor implementation. Under load, callbacks can be delayed or reordered. Two runs of the same code may execute callbacks in different orders.
### Auto-Detected Real-Time
Set a rate or budget, and HORUS automatically enables RT features — dedicated thread, budget enforcement, deadline monitoring:
```rust
// simplified
scheduler.add(MotorNode::new()?)
    .order(0)
    .rate(1000_u64.hz())     // 1 kHz → auto-enables RT, derives budget + deadline
    .on_miss(Miss::SafeMode) // Enter safe state if a tick takes too long
    .build()?;
```
No DDS QoS tuning. No rmw configuration files. No manual thread priority management. Declare your timing requirements and HORUS handles the rest.
### Single-File Config
One `horus.toml` replaces `CMakeLists.txt`, `package.xml`, and launch files:
```toml
[package]
name = "warehouse-robot"
version = "1.0.0"

[dependencies]
nalgebra = "0.32"

[scripts]
start = "horus run --release"
test = "horus test --parallel"
```
### Built-in Safety
Safety features are part of the scheduler — not bolted on after the fact:
- Watchdog: Detects frozen nodes with graduated degradation (warn → skip → isolate)
- Deadline enforcement: `.budget()` and `.deadline()` are first-class scheduler features
- Safe state: Every node can implement `enter_safe_state()` — stop motors, close valves
- Emergency stop: Event-driven nodes react in microseconds via `.on("emergency.stop")`
- BlackBox: Flight recorder for post-mortem crash analysis
- Fault tolerance: Per-node failure policies — restart, skip, or fatal
- Record & Replay: Tick-perfect replay for reproducing field bugs
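The safe-state hook is the piece worth seeing in code. Since the HORUS Python node API is not shown in this document, the sketch below uses a hypothetical stand-in `Node` base class purely to illustrate the pattern: the scheduler calls `enter_safe_state()` on a deadline miss or emergency stop, and the node's job is to park its actuator.

```python
# Sketch only: `Node` is a hypothetical stand-in, not the real HORUS base
# class. The point is the enter_safe_state() pattern from the list above.
class Node:
    """Minimal stand-in base class (assumption, not the HORUS API)."""
    def tick(self): ...
    def enter_safe_state(self): ...

class MotorNode(Node):
    def __init__(self):
        self.duty_cycle = 0.0  # commanded PWM duty, 0.0 = stopped

    def tick(self):
        self.duty_cycle = 0.5  # normal control logic would run here

    def enter_safe_state(self):
        # Invoked by the scheduler on a deadline miss or emergency stop:
        # command the motor to a known-safe output.
        self.duty_cycle = 0.0

node = MotorNode()
node.tick()
node.enter_safe_state()
print(node.duty_cycle)  # 0.0
```

The contract is deliberately simple: safe-state code should touch only local state and the actuator, so it can run even when the rest of the tick pipeline is misbehaving.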
### Rust + Python — Your Choice
```rust
// simplified
// Rust: maximum performance, compile-time safety
struct Controller { cmd: Topic<CmdVel> }

impl Node for Controller {
    fn name(&self) -> &str { "Controller" }
    fn tick(&mut self) { self.cmd.send(compute_velocity()); }
}
```
Same framework, same topics, same scheduler. Mix Rust and Python nodes in the same application. Use Python for prototyping and ML, Rust for production control — or both simultaneously.
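A Python counterpart of the Rust `Controller` could look like the sketch below. The HORUS Python API is not documented here, so `Topic` is a minimal stand-in stub and `compute_velocity` is a made-up helper — the shape (a node with `name()` and `tick()`, publishing on a topic) mirrors the Rust example, but every name is an assumption.

```python
# Hypothetical Python counterpart of the Rust Controller above.
# Topic is a stand-in stub, since the real HORUS Python API isn't shown here.
class Topic:
    """Stand-in for a HORUS shared-memory topic (assumption)."""
    def __init__(self, name):
        self.name = name
        self.last = None

    def send(self, msg):
        self.last = msg  # the real framework would write into shared memory

def compute_velocity():
    return (0.5, 0.0)  # made-up controller output: (linear m/s, angular rad/s)

class Controller:
    def __init__(self):
        self.cmd = Topic("cmd_vel")

    def name(self):
        return "Controller"

    def tick(self):
        self.cmd.send(compute_velocity())

ctrl = Controller()
ctrl.tick()
print(ctrl.cmd.last)  # (0.5, 0.0)
```

Because both languages drive the same topics and scheduler, a node like this could sit in the same tick order as the Rust nodes shown earlier.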
## Who Uses HORUS
- Research labs prototyping new robot behaviors (Python for quick iteration)
- Startups building production robots (Rust safety + performance)
- Control engineers who need deterministic timing (auto-RT, deadline enforcement)
- Teams migrating from ROS2 who want simpler tooling without sacrificing capability
- Solo developers who want to build a robot without weeks of framework setup
## What HORUS Is NOT (Yet)
Being honest about current limitations:
| Limitation | Impact | Workaround |
|---|---|---|
| Single-machine only | No distributed multi-robot communication | Use ROS2 for fleet management, HORUS for on-robot control |
| No RViz equivalent | No 3D visualization of robot state | `horus monitor` shows nodes/topics/metrics; use Foxglove for 3D |
| Smaller ecosystem | Fewer ready-made packages than ROS2's 15-year-old ecosystem | Growing registry; HORUS packages + Rust crate ecosystem |
| No ROS2 bag compatibility | Can't replay existing rosbag2 files | HORUS has its own recording/replay system |
These are active development areas. The core framework is production-ready; the ecosystem is growing.
## Get Started
```bash
curl -fsSL https://gitlab.com/softmata/horus/-/raw/release/install.sh | bash
horus new my-robot
cd my-robot && horus run
```
## See Also
- HORUS vs ROS2 — Detailed technical comparison
- Coming from ROS2 — Migration guide with concept mapping
- Installation — Install in 5 minutes
- Quick Start — Build your first robot
- Architecture — System design overview
- Benchmarks — Full performance data