Builder Composition Guide
You know what .rate() does. You know what .compute() does. But what happens when you call both? Does .rate() make it RT, or does .compute() override that? What if you add .budget() on top? Does call order matter?
These are the questions that trip up every HORUS developer eventually. Each method's documentation explains what it does in isolation, but the real power — and the real confusion — comes from combining them. This page is the complete reference for how builder methods interact.
The Core Rule: Deferred Finalization
Every builder method just stores a value. Nothing happens until you call .build(). At that point, the scheduler looks at everything you've set and resolves the configuration in one pass.
This means method call order does not matter:
// These three produce the EXACT same node configuration:
sched.add(node).rate(100_u64.hz()).compute().order(5).build()?;
sched.add(node).compute().order(5).rate(100_u64.hz()).build()?;
sched.add(node).order(5).compute().rate(100_u64.hz()).build()?;
All three result in a Compute node that ticks at most 100 times per second. The scheduler sees both .rate() and .compute() at build time, and .compute() wins — regardless of when you called it.
This is different from how most builder patterns work. In a typical builder, the last call wins because it overwrites a field. In HORUS, the scheduler applies resolution rules that consider all fields together.
Think of it like a form, not a pipeline. You're filling out fields on a form. When you submit (.build()), the system reads the whole form and makes decisions. It doesn't matter which field you filled in first.
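To make the "form" model concrete, here is a toy sketch of deferred finalization in plain Rust. The types and resolution logic are illustrative stand-ins, not the real HORUS internals: each setter only records a field, and build() resolves everything in one pass, so call order cannot matter.

```rust
// Toy model of deferred finalization (NOT the real HORUS API).
// Setters only record values; build() resolves everything at once.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Class {
    Rt,
    Compute,
    BestEffort,
}

#[derive(Default)]
struct NodeBuilder {
    rate_hz: Option<u64>,    // recorded by .rate()
    explicit: Option<Class>, // recorded by .compute(), etc.
    order: Option<u32>,      // recorded by .order()
}

impl NodeBuilder {
    fn rate(mut self, hz: u64) -> Self { self.rate_hz = Some(hz); self }
    fn compute(mut self) -> Self { self.explicit = Some(Class::Compute); self }
    fn order(mut self, o: u32) -> Self { self.order = Some(o); self }

    // Resolution considers all fields together, so call order is irrelevant.
    fn build(self) -> (Class, u32, Option<u64>) {
        let class = match (self.explicit, self.rate_hz) {
            (Some(c), _) => c,            // explicit class wins over .rate()
            (None, Some(_)) => Class::Rt, // bare .rate() implies Rt
            (None, None) => Class::BestEffort,
        };
        (class, self.order.unwrap_or(100), self.rate_hz)
    }
}
```

In this toy model, all three call orderings from the example above resolve to the same `(Class::Compute, 5, Some(100))` configuration.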
The .rate() Dual Meaning
This is the single most important interaction to understand. .rate() changes its behavior based on what else is set:
// Scenario A: .rate() alone → RT
sched.add(motor)
.rate(1000_u64.hz()) // → Rt class, budget=800us, deadline=950us
.build()?;
// Scenario B: .rate() + explicit class → just a frequency limiter
sched.add(planner)
.rate(10_u64.hz()) // → just "tick at most 10x/sec"
.compute() // → Compute class, NO budget, NO deadline
.build()?;
Why? Because "run at 1,000 Hz" and "run at most 10 times per second" are different intents. A motor controller running at 1,000 Hz needs a dedicated thread, timing enforcement, and deadline monitoring. A path planner ticking at 10 Hz just needs a frequency cap — it's CPU-bound work that runs on the thread pool.
The resolution rule:
| .rate() combined with... | Resulting class | .rate() means... |
|---|---|---|
| Nothing else | Rt | "This node has real-time timing requirements" |
| .compute() | Compute | "Tick at most N times per second" (frequency cap) |
| .async_io() | AsyncIo | "Tick at most N times per second" (frequency cap) |
| .on("topic") | Event | Ignored — Event nodes trigger on messages, not time |
| .budget() or .deadline() only | Rt | Both rate and explicit timing — RT with overrides |
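The resolution rule above can be sketched as a single pure function over what the builder recorded. The names and shape here are assumptions for illustration, not the actual HORUS scheduler code:

```rust
// Sketch of the class-resolution rule (illustrative, not HORUS internals).

#[derive(Debug, PartialEq, Clone, Copy)]
enum Class { Rt, Compute, AsyncIo, Event, BestEffort }

struct Recorded {
    has_rate: bool,          // .rate() was called
    explicit: Option<Class>, // .compute() / .async_io() / .on("topic")
    has_budget_or_deadline: bool,
}

fn resolve_class(r: &Recorded) -> Class {
    match r.explicit {
        // An explicit class always wins; .rate() then becomes a frequency
        // cap (and is ignored entirely for Event nodes).
        Some(c) => c,
        // Any timing hint without an explicit class implies Rt.
        None if r.has_rate || r.has_budget_or_deadline => Class::Rt,
        None => Class::BestEffort,
    }
}
```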
What .rate() auto-derives (Rt only)
When .rate() results in the Rt class, it auto-derives timing parameters you didn't set:
sched.add(sensor)
.rate(100_u64.hz()) // period = 10ms
.build()?;
// Auto-derived: budget = 8ms (80%), deadline = 9.5ms (95%)
sched.add(sensor)
.rate(100_u64.hz()) // period = 10ms
.budget(5_u64.ms()) // explicit budget overrides 80% default
.build()?;
// Result: budget = 5ms (explicit), deadline = 9.5ms (still auto-derived)
sched.add(sensor)
.rate(100_u64.hz())
.budget(5_u64.ms())
.deadline(8_u64.ms()) // explicit deadline overrides 95% default
.build()?;
// Result: budget = 5ms, deadline = 8ms (both explicit)
| What you set | Budget | Deadline |
|---|---|---|
| .rate(100.hz()) only | 8ms (80% of 10ms) | 9.5ms (95% of 10ms) |
| .rate(100.hz()).budget(5.ms()) | 5ms (explicit) | 9.5ms (auto) |
| .rate(100.hz()).deadline(8.ms()) | 8ms (auto 80%) | 8ms (explicit) |
| .rate(100.hz()).budget(5.ms()).deadline(8.ms()) | 5ms | 8ms |
| .budget(5.ms()) only (no rate) | 5ms | 5ms (deadline = budget) |
| .deadline(8.ms()) only (no rate) | None | 8ms |
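The auto-derivation in this table is plain percentage arithmetic. Here is a sketch in microseconds (derive_timing is a hypothetical helper, not a HORUS function), assuming the 80%/95% defaults and the deadline-falls-back-to-budget rule described above:

```rust
// Sketch of budget/deadline auto-derivation (values in microseconds).
// Assumed defaults from the table: budget = 80% of period, deadline = 95%,
// and with no rate, deadline falls back to the explicit budget.

fn derive_timing(
    period_us: Option<u64>,
    budget_us: Option<u64>,
    deadline_us: Option<u64>,
) -> (Option<u64>, Option<u64>) {
    let budget = budget_us.or(period_us.map(|p| p * 80 / 100));
    let deadline = deadline_us
        .or(period_us.map(|p| p * 95 / 100))
        .or(budget); // .budget() only: deadline = budget
    (budget, deadline)
}
```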
Full Interaction Matrix
The tables below show what happens when you combine builder methods: first, what happens when two execution-class calls conflict; then, how each timing method behaves on every class.
Execution Class Methods
Only one execution class can be active. If you call multiple, the last one wins (with a warning logged):
// DON'T DO THIS — .compute() is silently overridden
sched.add(node).compute().async_io().build()?; // → AsyncIo, NOT Compute
// DO THIS — pick one
sched.add(node).async_io().build()?;
| First | + Second | Result | Notes |
|---|---|---|---|
| .compute() | .async_io() | AsyncIo | Warning: first overridden |
| .compute() | .on("topic") | Event | Warning: first overridden |
| .async_io() | .compute() | Compute | Warning: first overridden |
| .async_io() | .on("topic") | Event | Warning: first overridden |
| .on("topic") | .compute() | Compute | Warning: first overridden |
| .on("topic") | .async_io() | AsyncIo | Warning: first overridden |
RT-Only Methods on Non-RT Nodes
Some methods only make sense for RT nodes; using them on the wrong class produces warnings or errors. The table also includes the class-agnostic methods for comparison:
| Method | On Rt node | On Compute node | On Event node | On AsyncIo node | On BestEffort node |
|---|---|---|---|---|---|
| .budget() | Sets budget | Error | Error | Error | Promotes to Rt |
| .deadline() | Sets deadline | Error | Error | Error | Promotes to Rt |
| .on_miss() | Sets policy | Warning (no effect) | Warning (no effect) | Warning (no effect) | Warning (no effect) |
| .priority() | Sets OS priority | Warning (ignored) | Warning (ignored) | Warning (ignored) | Warning (ignored) |
| .core() | Pins to CPU | Warning (ignored) | Warning (ignored) | Warning (ignored) | Warning (ignored) |
| .watchdog() | Per-node watchdog | Works | Works | Works | Works |
| .rate() | Sets rate | Frequency cap | Ignored | Frequency cap | Promotes to Rt |
| .order() | Sets order | Sets order | Sets order | Sets order | Sets order |
| .failure_policy() | Sets policy | Sets policy | Sets policy | Sets policy | Sets policy |
"Promotes to Rt" means the method changes a BestEffort node to Rt. Setting .budget() or .deadline() on a node with no explicit execution class makes it Rt — just like .rate() does. The scheduler interprets "this node has timing constraints" as "this node needs real-time scheduling."
Goal-Oriented Recipes
Instead of "what does this method do?", here's "I need X — which methods do I chain?"
"100 Hz sensor driver with deadline monitoring"
sched.add(imu_driver)
.order(1) // After safety monitor (order 0)
.rate(100_u64.hz()) // 10ms period → Rt class
.on_miss(Miss::Skip) // Drop a reading if we're late
.build()?;
Why these methods: .rate() → Rt class with auto-derived 8ms budget / 9.5ms deadline. .on_miss(Miss::Skip) → if the driver stalls waiting for hardware, skip one reading rather than accumulating delay. .order(1) → runs after safety-critical nodes.
What removing each method changes:
- Remove .rate() → BestEffort, no timing enforcement at all
- Remove .on_miss() → defaults to Miss::Warn (logs but takes no action)
- Remove .order() → defaults to 100 (normal priority)
"Background logger that must not starve RT nodes"
sched.add(logger)
.order(200) // Runs last
.compute() // Own thread pool
.rate(10_u64.hz()) // At most 10x/sec (not RT!)
.failure_policy(FailurePolicy::Ignore) // Never crash for logging
.build()?;
Why .compute() and not just BestEffort: A logger doing disk I/O in the main loop would block all BestEffort nodes behind it. .compute() moves it to the thread pool. .rate(10.hz()) caps frequency (NOT RT — .compute() overrides that).
What if you used .async_io() instead: Also works. Use .async_io() if the logger does network I/O (cloud upload). Use .compute() if it does local file I/O with CPU-bound formatting.
"Event-driven planner that reacts to new scans"
sched.add(planner)
.order(5)
.on("lidar.scan") // Sleep until new scan arrives
.build()?;
Why not .rate(): The planner has nothing to do until a new scan arrives. Polling at a fixed rate wastes CPU. .on("lidar.scan") means zero CPU when idle, instant wake on new data.
Can you add .budget() to an Event node? No — this is an error at .build(). Event nodes trigger on data arrival, not on a fixed schedule, so deadline enforcement doesn't apply.
"1 kHz motor controller on production hardware"
sched.add(motor_ctrl)
.order(0) // Highest priority
.rate(1000_u64.hz()) // 1ms period → Rt
.budget(300_u64.us()) // Must finish in 300us
.deadline(900_u64.us()) // Hard wall at 900us
.on_miss(Miss::SafeMode) // Hold position on overrun
.priority(90) // OS-level SCHED_FIFO priority
.core(0) // Pinned to CPU 0
.build()?;
Why explicit .budget() and .deadline(): Auto-derived values (800µs budget, 950µs deadline at 1 kHz) are generous defaults. After profiling, you know the motor controller takes ~200µs. Setting .budget(300.us()) with a .deadline(900.us()) gives a tighter budget for monitoring while leaving headroom before the deadline fires.
Why .priority(90) and .core(0): On a multi-core robot computer, pinning the motor controller to an isolated CPU core eliminates jitter from OS scheduling and cache migration. .priority(90) ensures the kernel never preempts this thread for normal processes.
"ML inference that takes 50-200ms"
sched.add(detector)
.order(10)
.compute() // Thread pool — long-running is fine
.build()?;
Why no .rate(): ML inference time varies (50-200ms depending on scene complexity). A fixed rate would either waste CPU (rate too low) or queue up work (rate too high). Let it run as fast as it can on the thread pool.
Why not .async_io(): ML inference is CPU-bound, not I/O-bound. .compute() uses a CPU thread pool optimized for parallel work. .async_io() uses tokio, which is optimized for I/O waiting.
"Safety monitor that must never miss"
sched.add(safety_monitor)
.order(0) // Runs first, always
.rate(1000_u64.hz()) // Matches fastest control loop
.budget(100_u64.us()) // Must be extremely fast
.deadline(200_u64.us()) // Tight deadline
.on_miss(Miss::Stop) // Kill everything if this misses
.priority(99) // Maximum OS priority
.core(1) // Dedicated CPU core
.watchdog(5_u64.ms()) // Tight per-node watchdog
.failure_policy(FailurePolicy::Fatal) // Panic if tick() errors
.build()?;
Every method is load-bearing: Remove any one and you lose a safety guarantee. This is the maximum-configuration pattern for the most critical node in your system.
What Happens If I...
Quick answers to common "what if" questions:
"...call .rate() and .compute()?"
Compute class. .rate() becomes a frequency cap, not RT. No budget, no deadline, no timing enforcement.
"...call .budget() without .rate()?"
RT class. .budget() alone implies "this node has timing requirements." Deadline auto-derived as deadline = budget.
"...call .deadline() without .rate() or .budget()?"
RT class. Budget is not set (no auto-derivation without .rate()). The scheduler monitors wall time against the deadline.
"...set .budget() larger than .deadline()?"
Error at .build(). Budget is "expected time," deadline is "maximum time." A budget larger than the deadline means you expect the work to take longer than the hard limit — that's a configuration mistake.
"...set .budget(Duration::ZERO)?"
Error at .build(). Zero budget is meaningless.
"...call .on_miss(Miss::Stop) on a Compute node?"
Warning: "has no effect without a deadline." Compute nodes have no deadline, so the miss policy can never trigger. The node builds successfully, but .on_miss() does nothing.
"...call .priority(99) on a Compute node?"
Warning: "only RT nodes get SCHED_FIFO threads." Priority is silently ignored. The node builds and runs fine — it just doesn't get OS-level priority.
"...call .on("") (empty topic)?"
Error at .build(). An Event node with an empty topic can never trigger.
"...call .compute().async_io()?"
AsyncIo class (last wins). Warning logged that .compute() was overridden. Pick one.
"...just call .build() with no methods at all?"
BestEffort class with order 100. Ticks in the main loop at the scheduler's global rate. This is the simplest valid configuration.
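Several of the answers above are build-time validation checks. A sketch of that logic, with illustrative error messages rather than the real HORUS diagnostics:

```rust
// Sketch of the .build()-time checks described above (illustrative).

fn validate(
    budget_us: Option<u64>,
    deadline_us: Option<u64>,
    event_topic: Option<&str>,
) -> Result<(), &'static str> {
    // Zero budget is meaningless.
    if budget_us == Some(0) {
        return Err("zero budget");
    }
    // Budget is "expected time", deadline is "maximum time";
    // expecting to exceed the hard limit is a configuration mistake.
    if let (Some(b), Some(d)) = (budget_us, deadline_us) {
        if b > d {
            return Err("budget larger than deadline");
        }
    }
    // An Event node with an empty topic can never trigger.
    if event_topic == Some("") {
        return Err("empty event topic");
    }
    Ok(())
}
```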
Anti-Patterns
Cargo-culting RT configuration
// WRONG: Adding RT methods "just in case" to a logger
sched.add(logger)
.rate(100_u64.hz())
.budget(5_u64.ms())
.priority(50)
.core(3)
.build()?;
This wastes a dedicated CPU thread and an entire CPU core on a logger. Use .async_io() or just default BestEffort.
Using .compute() for everything
// WRONG: Motor controller on the thread pool
sched.add(motor_ctrl)
.compute() // No timing guarantees!
.rate(1000_u64.hz())
.build()?;
.compute().rate() gives you a frequency cap, not RT. The motor controller has no budget, no deadline, and no Miss policy. When the thread pool is busy with other Compute nodes, the motor controller waits. Use .rate() alone for nodes with timing requirements.
Deadline without a response plan
// QUESTIONABLE: Deadline set but using default Miss::Warn
sched.add(motor_ctrl)
.rate(1000_u64.hz())
.budget(300_u64.us())
.deadline(900_u64.us())
.build()?; // Default on_miss is Miss::Warn
If you've set explicit budget and deadline, you've decided this node's timing matters. But the default Miss::Warn just logs a warning and continues — the robot keeps moving with a late motor command. Add .on_miss(Miss::SafeMode) or .on_miss(Miss::Skip) to define what should actually happen.
Mixing intent across classes
// WRONG: Event node that also needs a deadline
sched.add(handler)
.on("command")
.deadline(10_u64.ms()) // ERROR at .build()!
.build()?;
Event nodes trigger on messages, not time. A deadline ("must finish within 10ms of... what?") doesn't apply because there's no periodic schedule to miss. If you need deadline enforcement, use .rate() instead of .on() and poll the topic in tick().
Putting It All Together: Complete System
use horus::prelude::*;
fn main() -> Result<()> {
let mut sched = Scheduler::new()
.tick_rate(500_u64.hz())
.prefer_rt()
.watchdog(500_u64.ms())
.blackbox(64);
// Safety monitor — maximum everything
sched.add(SafetyMonitor::new()?)
.order(0)
.rate(1000_u64.hz())
.budget(100_u64.us())
.deadline(200_u64.us())
.on_miss(Miss::Stop)
.priority(99)
.core(1)
.failure_policy(FailurePolicy::Fatal)
.build()?;
// Motor controller — strict RT
sched.add(MotorController::new()?)
.order(1)
.rate(500_u64.hz())
.on_miss(Miss::SafeMode)
.priority(80)
.core(0)
.build()?;
// IMU driver — RT with auto-derived timing
sched.add(ImuDriver::new()?)
.order(2)
.rate(200_u64.hz())
.on_miss(Miss::Skip)
.build()?;
// Emergency stop — event-driven, zero CPU when idle
sched.add(EmergencyHandler::new()?)
.order(0)
.on("emergency.stop")
.build()?;
// Path planner — CPU-heavy, no deadline
sched.add(PathPlanner::new()?)
.order(10)
.compute()
.build()?;
// ML detector — CPU-heavy, rate-limited
sched.add(ObjectDetector::new()?)
.order(11)
.compute()
.rate(10_u64.hz())
.build()?;
// Cloud telemetry — I/O-bound
sched.add(TelemetryUploader::new()?)
.order(100)
.async_io()
.rate(1_u64.hz())
.failure_policy(FailurePolicy::Ignore)
.build()?;
// Dashboard — runs in main loop, no special needs
sched.add(Dashboard::new()?)
.order(200)
.build()?;
sched.run()
}
Each node uses exactly the methods it needs — no more, no less. The safety monitor has every RT option enabled. The dashboard has none. The path planner and ML detector use .compute() to stay off the RT threads. The telemetry uploader uses .async_io() because it blocks on network I/O.
Design Decisions
Why is method order irrelevant (deferred finalization)?
Early prototypes resolved execution class eagerly — .rate() immediately set the class to Rt, and .compute() overwrote it. This created subtle ordering bugs: .rate().compute() and .compute().rate() produced different nodes. Developers had to remember which method to call last. Deferred finalization eliminates this entire class of bugs by resolving everything at .build() time, where the scheduler can see the full picture.
Why does .rate() change meaning based on context?
The alternative was having two methods: .rate() for RT and .frequency_cap() for non-RT. But this forces developers to understand execution classes before they can set a tick rate. With the current design, the intent is clear from context: .rate(1000.hz()) alone means "timing matters" (Rt), while .rate(10.hz()).compute() means "don't run too often" (frequency cap). The mental model is "describe what you need, the scheduler figures out how to run it."
Why errors instead of silent fixes for invalid combinations?
Setting .budget() on a Compute node is almost always a mistake — the developer thinks they're getting timing enforcement, but Compute nodes don't have it. Silently ignoring the budget would hide the bug. Erroring at .build() catches it immediately, before the robot moves. The principle: configuration mistakes should fail fast, not fail silently on the factory floor.
Trade-offs
| Gain | Cost |
|---|---|
| Order-independent builders — no "call this last" bugs | Must understand deferred finalization to predict results |
| .rate() dual meaning — one method, context-dependent behavior | Must know that .rate().compute() is NOT RT |
| Strict validation — catches mistakes at .build() | Learning curve: must understand which combinations are valid |
| RT auto-detection — no explicit .rt() call needed | Less visible which nodes are RT (use horus monitor to check) |
| Warnings for ignored methods — .priority() on Compute logs a warning | Warning fatigue if you're intentionally mixing configurations during prototyping |
See Also
- Execution Classes — The 5 classes and when to use each
- Choosing Configuration — Progressive complexity guide (Levels 0-5)
- Real-Time Systems — Budget, deadline, jitter, and what "real-time" means
- Scheduler API — Complete method reference with signatures
- Scheduler — Full Reference — Execution model and tick lifecycle