Nodes: The Building Blocks

A robot arm picks parts off a conveyor belt. One piece of software reads the camera, another detects parts, another plans the arm's trajectory, and another sends motor commands. If any of these components shares memory or a call stack with the others, a bug in the camera driver can crash the motor controller — and the arm drops whatever it's holding.

HORUS solves this with nodes: isolated components that each do one job. A camera node reads frames. A detection node finds parts. A planner node computes trajectories. A motor node sends commands. They communicate through shared-memory channels, but they don't share state. If one crashes, the others keep running — and the safety monitor stops the arm cleanly.

For the complete Node trait reference with all methods, see Nodes — Full Reference.

How It Works

What is a Node?

A node is one component doing one job:

  • A SensorNode reads the camera or IMU
  • A ControlNode moves the motors
  • A SafetyNode prevents collisions
  • A PlannerNode decides where to go

Every node implements the Node trait. The only required method is tick() — your main logic that runs every cycle:

// simplified
use horus::prelude::*;

struct Heartbeat;

impl Node for Heartbeat {
    fn name(&self) -> &str { "Heartbeat" }

    fn tick(&mut self) {
        println!("Robot is alive!");
    }
}

The scheduler calls tick() repeatedly — you don't manage loops, threads, or timing.
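To make the inversion of control concrete, here is a minimal, self-contained sketch (not the real HORUS API — the trait and `Heartbeat` fields are illustrative): the node implements one step of work, and an outer loop standing in for the scheduler decides when that step runs.

```rust
// Hypothetical sketch — not the real HORUS API. The node exposes a single
// step of work; the loop below stands in for the scheduler.
trait Node {
    fn name(&self) -> &str;
    fn tick(&mut self);
}

struct Heartbeat {
    ticks: u32, // illustrative state, not part of the real example
}

impl Node for Heartbeat {
    fn name(&self) -> &str {
        "Heartbeat"
    }

    fn tick(&mut self) {
        self.ticks += 1;
        println!("{}: robot is alive ({} ticks)", self.name(), self.ticks);
    }
}

fn main() {
    let mut node = Heartbeat { ticks: 0 };
    // The scheduler owns the loop; the node only implements one step.
    for _ in 0..3 {
        node.tick();
    }
    assert_eq!(node.ticks, 3);
}
```

Because the node never owns a loop, the scheduler can interleave it with other nodes, time it, or stop it at any cycle boundary.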

How Nodes Communicate

Nodes don't call each other directly. They send data through Topics — named channels:

[Figure: Nodes communicate through Topics, not direct calls]

The sensor doesn't know the monitor exists. It just publishes data. Any number of subscribers can listen — zero coupling between components.
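The zero-coupling property can be sketched with a toy topic (this is an illustration, not the HORUS shared-memory implementation): the publisher pushes into one queue per subscriber, so adding a second subscriber requires no change to the publishing side.

```rust
use std::collections::VecDeque;

// Hypothetical sketch — not the real HORUS Topic. One queue per subscriber;
// publishing clones the message into every queue, so the publisher never
// knows who (or how many) is listening.
struct Topic<T: Clone> {
    queues: Vec<VecDeque<T>>,
}

impl<T: Clone> Topic<T> {
    fn new() -> Self {
        Topic { queues: Vec::new() }
    }

    // Each subscriber gets its own queue, identified by an index.
    fn subscribe(&mut self) -> usize {
        self.queues.push(VecDeque::new());
        self.queues.len() - 1
    }

    // The publisher just pushes — zero coupling to subscribers.
    fn publish(&mut self, msg: T) {
        for q in &mut self.queues {
            q.push_back(msg.clone());
        }
    }

    fn recv(&mut self, sub: usize) -> Option<T> {
        self.queues[sub].pop_front()
    }
}

fn main() {
    let mut temperature = Topic::new();
    let monitor = temperature.subscribe();
    let logger = temperature.subscribe(); // added later: publisher unchanged

    temperature.publish(21.5_f64);

    assert_eq!(temperature.recv(monitor), Some(21.5));
    assert_eq!(temperature.recv(logger), Some(21.5));
    assert_eq!(temperature.recv(monitor), None); // queue drained
}
```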

In Python, topics are declared via constructor kwargs:

# Python: topics declared via constructor kwargs
node = horus.Node(
    pubs=[horus.CmdVel, "status"],    # typed + generic
    subs=[horus.LaserScan],           # typed
    tick=my_tick,
    rate=50
)

Node Lifecycle

Every node has three phases:

[Figure: Node lifecycle — init once, tick repeatedly, shutdown once]

Phase     Method       When                      Use for
Startup   init()       Once, before first tick   Open files, connect to hardware
Running   tick()       Every scheduler cycle     Read sensors, compute, send commands
Cleanup   shutdown()   Once, on exit             Stop motors, close connections

// simplified
use horus::prelude::*;

impl Node for MotorController {
    fn name(&self) -> &str { "Motor" }

    fn init(&mut self) -> Result<()> {
        self.motor.connect()?;
        Ok(())
    }

    fn tick(&mut self) {
        if let Some(cmd) = self.commands.recv() {
            self.motor.set_velocity(cmd);
        }
    }

    // SAFETY: always stop motors in shutdown
    fn shutdown(&mut self) -> Result<()> {
        self.motor.set_velocity(0.0);
        self.motor.disconnect()?;
        Ok(())
    }
}

Running Nodes

Nodes run inside a Scheduler. Add nodes, set their execution order, and run:

// simplified
use horus::prelude::*;

fn main() -> Result<()> {
    let mut scheduler = Scheduler::new();

    scheduler.add(SensorNode::new()?)
        .order(0)      // runs first
        .build()?;

    scheduler.add(ControlNode::new()?)
        .order(1)      // runs second
        .build()?;

    scheduler.run()    // runs until Ctrl+C
}

.order() controls execution sequence — lower numbers run first. This ensures the sensor publishes data before the controller consumes it.
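The ordering rule itself is simple enough to sketch in a few lines (hypothetical — not the real scheduler internals): entries are sorted by their order value once, then ticked in that sequence every cycle.

```rust
// Hypothetical sketch of the ordering rule — not the real HORUS scheduler.
// Entries are sorted ascending by order, so order 0 ticks before order 1
// in every cycle.
struct Entry {
    order: u32,
    name: &'static str,
}

fn main() {
    let mut nodes = vec![
        Entry { order: 1, name: "Control" },
        Entry { order: 0, name: "Sensor" },
    ];

    // Lower order runs first, regardless of registration order.
    nodes.sort_by_key(|e| e.order);

    let sequence: Vec<&str> = nodes.iter().map(|e| e.name).collect();
    assert_eq!(sequence, ["Sensor", "Control"]);
}
```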

Design Decisions

Why isolated nodes instead of a single program? A monolithic program shares one call stack. A panic in the camera driver kills everything — including the motor controller, which may leave the robot arm in a dangerous position. Nodes provide fault boundaries: the scheduler can isolate a crashing node while the rest of the system continues and the safety monitor stops actuators cleanly.
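One way a scheduler can build such a fault boundary in-process is Rust's `std::panic::catch_unwind` — shown here as a standalone sketch, not as HORUS's actual isolation mechanism: a panicking tick is caught and the node marked failed, while the remaining nodes keep running.

```rust
use std::panic;

// Hypothetical sketch of a fault boundary — not the real HORUS mechanism.
// Each tick is wrapped in catch_unwind: a panicking node is isolated,
// and the nodes after it still run.
fn main() {
    let ticks: Vec<(&str, fn())> = vec![
        ("camera", || panic!("driver bug")),
        ("motor", || println!("motor tick ok")),
    ];

    let mut alive = Vec::new();
    for (name, tick) in ticks {
        // fn pointers are UnwindSafe, so catch_unwind takes them directly.
        if panic::catch_unwind(tick).is_ok() {
            alive.push(name);
        } else {
            eprintln!("{name} crashed; isolating it");
        }
    }

    // The camera's panic did not take down the motor node.
    assert_eq!(alive, ["motor"]);
}
```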

Why tick() instead of run()? A run() method gives the node full control — it can loop forever, block on I/O, or forget to check for shutdown signals. A tick() method gives the scheduler full control: it decides when to call each node, how long to allow, and when to force shutdown. This enables deterministic execution, deadline monitoring, and coordinated shutdown across all nodes.
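Deadline monitoring falls out of this design almost for free. A sketch under stated assumptions (the 10 ms deadline and the timing code are illustrative, not HORUS's real monitor): because the scheduler makes the call, it can wrap each `tick()` in a timer and flag overruns — something it could never do around a node-owned `run()` loop.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch — not the real HORUS scheduler. The scheduler owns
// the call, so it can time tick() and flag a missed deadline.
fn main() {
    let deadline = Duration::from_millis(10);

    // A deliberately slow tick, standing in for a misbehaving node.
    let mut slow_tick = || std::thread::sleep(Duration::from_millis(20));

    let start = Instant::now();
    slow_tick();
    let elapsed = start.elapsed();

    let missed = elapsed > deadline;
    if missed {
        eprintln!("tick overran its {deadline:?} deadline ({elapsed:?})");
    }
    assert!(missed);
}
```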

Why communicate through Topics instead of direct calls? Direct calls create tight coupling — the sensor must know the controller's API, and adding a logger means modifying the sensor. Topics decouple: the sensor publishes to "temperature" and doesn't know who reads it. Adding a logger is zero changes to existing code.

Trade-offs

Gain                                                   Cost
Fault isolation — one crash doesn't kill the system    Communication through Topics is indirect (nanoseconds, not zero)
Testable in isolation — tick a node once and assert    More boilerplate than a function call
Composable — mix and match nodes across projects       Nodes must agree on topic names and message types
Deterministic execution order via scheduler            No direct function calls between nodes

See Also