# Multi-Process Architecture
HORUS topics work transparently across process boundaries. Two nodes in separate processes use exactly the same API as two nodes in the same process; under the hood, messages cross the boundary through shared memory. No broker, no serialization layer, no configuration.
```rust
// Process 1: sensor.rs
let topic: Topic<Imu> = Topic::new("imu")?;
topic.send(imu_reading);
```

```rust
// Process 2: controller.rs
let topic: Topic<Imu> = Topic::new("imu")?; // same name = same topic
if let Some(reading) = topic.recv() {
    // Got it: zero-config, sub-microsecond
}
```
## How It Works
When you call Topic::new("imu"), HORUS creates (or opens) a shared memory region. Any process on the same machine that calls Topic::new("imu") with the same type connects to the same underlying ring buffer. The shared memory backend is managed by horus_sys — you never configure paths manually.
HORUS auto-detects whether a topic is same-process or cross-process and picks the fastest path:
| Scenario | Latency | How It Works |
|---|---|---|
| Same thread | ~3ns | Direct pointer handoff |
| Same process, 1:1 | ~18ns | Lock-free single-producer/single-consumer ring buffer |
| Same process, 1:N | ~24ns | Broadcast to multiple in-process subscribers |
| Same process, N:1 | ~26ns | Multiple in-process publishers, one subscriber |
| Same process, N:N | ~36ns | Full many-to-many in-process |
| Cross-process, POD type | ~50ns | Zero-copy shared memory (no serialization) |
| Cross-process, N:1 | ~65ns | Shared memory, multiple publishers |
| Cross-process, 1:N | ~70ns | Shared memory, multiple subscribers |
| Cross-process, 1:1 | ~85ns | Shared memory, serialized type |
| Cross-process, N:N | ~167ns | Shared memory, full many-to-many |
Cross-process adds roughly 30-130ns over the equivalent in-process topology, which is still comfortably sub-microsecond. You don't configure any of this: the backend is selected automatically based on topology and upgrades transparently as participants join or leave.
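The same-process 1:1 row in the table is a classic bounded single-producer/single-consumer ring. Here is a minimal single-threaded sketch of that data structure, for illustration only (this is not HORUS's implementation; in a real lock-free version, `head` and `tail` become atomics with release/acquire ordering so the two threads can advance them without locks):

```rust
/// Illustrative bounded SPSC ring. `head` is producer-owned, `tail` is
/// consumer-owned; both count total operations and are reduced to a slot
/// index with a power-of-two mask.
struct Ring<T> {
    slots: Vec<Option<T>>,
    head: usize, // total items pushed
    tail: usize, // total items popped
}

impl<T> Ring<T> {
    fn with_capacity(cap: usize) -> Self {
        assert!(cap.is_power_of_two(), "mask-based wrapping needs a power of two");
        Ring { slots: (0..cap).map(|_| None).collect(), head: 0, tail: 0 }
    }

    /// Returns false when the ring is full (consumer hasn't caught up).
    fn push(&mut self, item: T) -> bool {
        if self.head - self.tail == self.slots.len() {
            return false;
        }
        let idx = self.head & (self.slots.len() - 1);
        self.slots[idx] = Some(item);
        self.head += 1; // in the atomic version, this is the Release "publish"
        true
    }

    /// Returns None when the ring is empty.
    fn pop(&mut self) -> Option<T> {
        if self.tail == self.head {
            return None;
        }
        let idx = self.tail & (self.slots.len() - 1);
        let item = self.slots[idx].take();
        self.tail += 1;
        item
    }
}
```

Because exactly one thread writes `head` and exactly one writes `tail`, no compare-and-swap is needed, which is what keeps the 1:1 path in the ~18ns range.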
## Running Multiple Processes

### Option 1: `horus run` with Multiple Files
```bash
# Builds and runs both files as separate processes
horus run sensor.rs controller.rs

# Mixed languages work too
horus run sensor.py controller.rs

# With release optimizations
horus run -r sensor.rs controller.rs
```
`horus run` compiles each file, then launches all the processes and manages their lifecycle (sending SIGTERM on Ctrl+C, and so on).
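The "launches all processes" half of that can be modeled in a few lines of std Rust. This is a toy sketch, not `horus run`'s code: the real tool also compiles the sources and forwards signals, which plain `std::process` does not cover.

```rust
use std::process::{Child, Command};

/// Spawn one node as a child process. A toy model of a launcher's core loop;
/// a real launcher would also compile sources and forward SIGTERM.
fn spawn_node(program: &str, args: &[&str]) -> std::io::Result<Child> {
    Command::new(program).args(args).spawn()
}

/// Wait for every child, reporting whether all of them exited successfully.
fn wait_all(children: &mut [Child]) -> bool {
    children
        .iter_mut()
        .all(|c| c.wait().map(|status| status.success()).unwrap_or(false))
}
```

The key property, which `horus run` also provides, is that one parent owns every child's lifecycle, so a single Ctrl+C can tear the whole pipeline down cleanly.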
### Option 2: Separate Terminals
Run each node in its own terminal:
```bash
# Terminal 1
horus run sensor.rs

# Terminal 2
horus run controller.rs
```
Topics auto-discover via shared memory. No coordination needed.
### Option 3: `horus launch` (YAML)
For production, declare your multi-process layout in a launch file:
```yaml
# launch.yaml
nodes:
  - name: sensor
    cmd: horus run sensor.rs
  - name: controller
    cmd: horus run controller.rs
  - name: monitor
    cmd: horus run monitor.py
```

Then launch everything at once:

```bash
horus launch launch.yaml
```
## Example: Two-Process Sensor Pipeline
Process 1 — `sensor.rs`:

```rust
use horus::prelude::*;

message! {
    WheelEncoder {
        left_ticks: i64,
        right_ticks: i64,
        timestamp_ns: u64,
    }
}

struct EncoderNode {
    publisher: Topic<WheelEncoder>,
    ticks: i64,
}

impl EncoderNode {
    fn new() -> Result<Self> {
        Ok(Self {
            publisher: Topic::new("wheel.encoders")?,
            ticks: 0,
        })
    }
}

impl Node for EncoderNode {
    fn name(&self) -> &str { "Encoder" }

    fn tick(&mut self) {
        self.ticks += 10;
        self.publisher.send(WheelEncoder {
            left_ticks: self.ticks,
            right_ticks: self.ticks + 2,
            timestamp_ns: horus::now_ns(),
        });
    }
}

fn main() -> Result<()> {
    let mut sched = Scheduler::new().tick_rate(100_u64.hz());
    sched.add(EncoderNode::new()?).order(0).build()?;
    sched.run()?;
    Ok(())
}
```
Process 2 — `odometry.rs`:

```rust
use horus::prelude::*;

// The message definition must match the one in sensor.rs:
// same name, same fields, same types.
message! {
    WheelEncoder {
        left_ticks: i64,
        right_ticks: i64,
        timestamp_ns: u64,
    }
}

struct OdometryNode {
    encoder_sub: Topic<WheelEncoder>,
    last_left: i64,
    last_right: i64,
}

impl OdometryNode {
    fn new() -> Result<Self> {
        Ok(Self {
            encoder_sub: Topic::new("wheel.encoders")?,
            last_left: 0,
            last_right: 0,
        })
    }
}

impl Node for OdometryNode {
    fn name(&self) -> &str { "Odometry" }

    fn tick(&mut self) {
        if let Some(enc) = self.encoder_sub.recv() {
            let dl = enc.left_ticks - self.last_left;
            let dr = enc.right_ticks - self.last_right;
            self.last_left = enc.left_ticks;
            self.last_right = enc.right_ticks;
            println!("[Odom] delta L={} R={}", dl, dr);
        }
    }
}

fn main() -> Result<()> {
    let mut sched = Scheduler::new().tick_rate(100_u64.hz());
    sched.add(OdometryNode::new()?).order(0).build()?;
    sched.run()?;
    Ok(())
}
```
Run them:
```bash
# Terminal 1
horus run sensor.rs

# Terminal 2
horus run odometry.rs
```
The `WheelEncoder` messages flow through shared memory at ~50ns latency, with zero configuration.
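The example stops at printing tick deltas; turning those deltas into motion is a few lines of differential-drive math. Here is a plain-Rust sketch, where `WHEEL_RADIUS_M`, `TICKS_PER_REV`, and `TRACK_WIDTH_M` are hypothetical example values, not part of HORUS:

```rust
// Differential-drive odometry from encoder tick deltas.
// All three constants are hypothetical example values for illustration.
const WHEEL_RADIUS_M: f64 = 0.05;  // 5 cm wheels
const TICKS_PER_REV: f64 = 1024.0; // encoder resolution
const TRACK_WIDTH_M: f64 = 0.30;   // distance between the wheels

/// Convert left/right tick deltas into (forward distance in meters,
/// heading change in radians).
fn odom_update(dl_ticks: i64, dr_ticks: i64) -> (f64, f64) {
    let m_per_tick = 2.0 * std::f64::consts::PI * WHEEL_RADIUS_M / TICKS_PER_REV;
    let dl = dl_ticks as f64 * m_per_tick; // left wheel travel (m)
    let dr = dr_ticks as f64 * m_per_tick; // right wheel travel (m)
    let distance = (dl + dr) / 2.0;        // body-frame forward motion
    let dtheta = (dr - dl) / TRACK_WIDTH_M; // positive = turning left
    (distance, dtheta)
}
```

In the example above this would slot into `OdometryNode::tick`, replacing the `println!` with a publish of the computed pose update.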
## When to Use Multi-Process
| Factor | Single Process | Multi-Process |
|---|---|---|
| Latency | ~3-36ns (intra-process) | ~50-167ns (cross-process) |
| Determinism | Full control via scheduler ordering | Each process has its own scheduler |
| Isolation | A crash takes down everything | A crash is contained to one process |
| Languages | Single language per binary | Mix Rust + Python freely |
| Restart | Must restart everything | Restart one process independently |
| Debugging | Single debugger session | Attach debugger to one process |
| Deployment | One binary to deploy | Multiple binaries |
| Complexity | Simpler | More moving parts |
Use single-process when:
- All nodes are the same language
- You need deterministic ordering between nodes (e.g., sensor → controller → actuator)
- Latency matters at the nanosecond level
- Simpler deployment is preferred
Use multi-process when:
- Mixing Rust and Python (e.g., Rust motor control + Python ML inference)
- Process isolation is needed (safety-critical separation)
- Independent restart required (update one node without stopping others)
- Different update rates or lifecycle requirements
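The determinism row deserves a concrete picture: in a single process, a tick loop runs nodes in a fixed order, so a downstream node always consumes the value produced upstream in the same tick. A toy sketch of that guarantee (hypothetical; HORUS expresses it with `.order(n)` on the scheduler, and this is not HORUS's scheduler code):

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Toy fixed-order tick loop. "Nodes" are closures here; their position in
/// the Vec is their execution order within every tick.
fn run_ticks(n_ticks: i64) -> Vec<i64> {
    let reading = Rc::new(RefCell::new(0i64));
    let log = Rc::new(RefCell::new(Vec::new()));

    // Order 0: the sensor writes a fresh reading each tick.
    let r = Rc::clone(&reading);
    let sensor = move |tick: i64| *r.borrow_mut() = tick * 10;

    // Order 1: the controller always sees the reading from the SAME tick,
    // because the loop below runs nodes strictly in Vec order.
    let (r, l) = (Rc::clone(&reading), Rc::clone(&log));
    let controller = move |_tick: i64| l.borrow_mut().push(*r.borrow() + 1);

    let mut nodes: Vec<Box<dyn FnMut(i64)>> = vec![Box::new(sensor), Box::new(controller)];
    for tick in 0..n_ticks {
        for node in nodes.iter_mut() {
            node(tick);
        }
    }
    let out = log.borrow().clone();
    out
}
```

Split those two closures into separate processes and each one runs under its own scheduler: the controller may tick before the sensor's latest message arrives, which is the trade the table's determinism row describes.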
## Introspection
HORUS CLI tools work across processes automatically:
```bash
# See all topics (from any process)
horus topic list

# Monitor a topic published by another process
horus topic echo wheel.encoders

# See all running nodes across processes
horus node list

# Check bandwidth across processes
horus topic bw wheel.encoders
```
## Cleaning Up
Shared memory files persist after processes exit. Clean them with:
```bash
horus clean --shm  # Remove stale shared memory regions
```
Alternatively, stale regions are cleaned up automatically on the next `horus run`.
## See Also
- Topics (Full Reference) — topic API and backend details
- Message Performance — POD types and zero-copy transport
- Multi-Language — Rust + Python interop patterns