Node Configuration Composition (Python)

You know what rate does. You know what compute does. But what happens when you pass both to Node()? Does rate make it RT, or does compute override that? What if you add budget on top? Does it matter whether you also set on_miss?

These are the questions that trip up every Python HORUS developer eventually. Each parameter's documentation explains what it does in isolation, but the real power — and the real confusion — comes from combining them. This page is the complete reference for how Node() parameters interact.

The Core Rule: All-at-Once Resolution

Every Node() parameter is just a value on a form. When you call horus.run() or sched.run(), the scheduler looks at everything you passed and resolves the configuration in one pass.

This means parameter order does not matter — because there is no order. Python kwargs are evaluated together:

import horus

# These three produce the EXACT same node configuration:
horus.Node(name="a", tick=my_fn, rate=100, compute=True, order=5)
horus.Node(name="a", tick=my_fn, compute=True, order=5, rate=100)
horus.Node(name="a", tick=my_fn, order=5, compute=True, rate=100)

All three result in a Compute node that ticks at most 100 times per second. The scheduler sees both rate=100 and compute=True, and compute=True wins — it determines the execution class, while rate becomes a frequency cap.

Think of it like a form, not a pipeline. You are filling out fields on a form. When you submit (the scheduler starts), the system reads the whole form and makes decisions. There is no "first" or "last" parameter — they are all read together.
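The no-order claim is just how Python keyword arguments work. A minimal sketch, using a plain function as a stand-in for Node():

```python
def capture(**kwargs):
    # Stand-in for Node(): keyword arguments arrive as one mapping.
    return kwargs

a = capture(rate=100, compute=True, order=5)
b = capture(order=5, compute=True, rate=100)

assert a == b   # same fields, same values: the "form" is identical
```

Two calls that differ only in kwarg order produce equal mappings, which is why the scheduler can read the whole "form" at once.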

The rate Dual Meaning

This is the single most important interaction to understand. rate changes its behavior based on what else you pass:

import horus

us = horus.us  # 1e-6

# Scenario A: rate alone → RT
motor = horus.Node(
    name="motor",
    tick=motor_tick,
    rate=1000,              # → Rt class, budget=800μs, deadline=950μs
)

# Scenario B: rate + compute → just a frequency limiter
planner = horus.Node(
    name="planner",
    tick=planner_tick,
    rate=10,                # → just "tick at most 10x/sec"
    compute=True,           # → Compute class, NO budget, NO deadline
)

Why? Because "run at 1,000 Hz" and "run at most 10 times per second" are different intents. A motor controller running at 1,000 Hz needs a dedicated thread, timing enforcement, and deadline monitoring. A path planner ticking at 10 Hz just needs a frequency cap — it is CPU-bound work that runs on the thread pool.

The resolution rule:

| rate combined with... | Resulting class | rate means... |
|---|---|---|
| Nothing else | Rt | "This node has real-time timing requirements" |
| compute=True | Compute | "Tick at most N times per second" (frequency cap) |
| async def tick | AsyncIo | "Tick at most N times per second" (frequency cap) |
| on="topic" | Event | Ignored — Event nodes trigger on messages, not time |
| budget or deadline only | Rt | Both rate and explicit timing — RT with overrides |
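The resolution rule condenses into a small decision function. This is an illustrative sketch, not the actual horus resolution code, and it assumes at most one class flag is passed (conflicting class flags are covered under Execution Class Parameters below):

```python
def resolve_class(rate=None, compute=False, on=None, is_async=False,
                  budget=None, deadline=None):
    """Sketch of execution-class resolution (not real horus internals).

    Assumes at most one of compute/on/is_async is set.
    """
    if is_async:
        return "AsyncIo"   # rate, if present, becomes a frequency cap
    if on is not None:
        return "Event"     # rate, if present, is ignored
    if compute:
        return "Compute"   # rate, if present, becomes a frequency cap
    if rate is not None or budget is not None or deadline is not None:
        return "Rt"        # any timing parameter alone implies real-time
    return "BestEffort"

resolve_class(rate=1000)                 # "Rt"
resolve_class(rate=10, compute=True)     # "Compute"
resolve_class(on="lidar.scan", rate=5)   # "Event" (rate ignored)
resolve_class(budget=0.0008)             # "Rt" (promotion)
resolve_class()                          # "BestEffort"
```

The precedence shown here (async, then on, then compute, then timing) reproduces the table rows; it is not a claim about horus internals.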

What rate Auto-Derives (Rt Only)

When rate results in the Rt class, it auto-derives timing parameters you did not set:

import horus

us = horus.us
ms = horus.ms

# rate alone — everything auto-derived
sensor = horus.Node(
    name="sensor",
    tick=sensor_tick,
    rate=100,               # period = 10ms
)
# Auto-derived: budget = 8ms (80%), deadline = 9.5ms (95%)

# rate + explicit budget — budget overrides, deadline still auto
sensor = horus.Node(
    name="sensor",
    tick=sensor_tick,
    rate=100,               # period = 10ms
    budget=5 * ms,          # explicit budget overrides 80% default
)
# Result: budget = 5ms (explicit), deadline = 9.5ms (still auto-derived)

# rate + both explicit — full manual control
sensor = horus.Node(
    name="sensor",
    tick=sensor_tick,
    rate=100,
    budget=5 * ms,
    deadline=8 * ms,        # explicit deadline overrides 95% default
)
# Result: budget = 5ms, deadline = 8ms (both explicit)

| What you set | Budget | Deadline |
|---|---|---|
| rate=100 only | 8ms (80% of 10ms) | 9.5ms (95% of 10ms) |
| rate=100, budget=5*ms | 5ms (explicit) | 9.5ms (auto) |
| rate=100, deadline=8*ms | 8ms (auto 80%) | 8ms (explicit) |
| rate=100, budget=5*ms, deadline=8*ms | 5ms | 8ms |
| budget=5*ms only (no rate) | 5ms | 5ms (deadline = budget) |
| deadline=8*ms only (no rate) | None | 8ms |

The auto-derivation formula: budget = 80% of period, deadline = 95% of period. These defaults give your tick 80% of the period to finish, with a 15%-of-period buffer between the budget and the hard deadline. If your tick consistently runs within the budget, you have that 15% margin before the deadline fires.
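The formula as runnable arithmetic, working in integer microseconds. derive_timing is a made-up helper for illustration, not part of the horus API, and it covers only the rows where rate is present:

```python
def derive_timing(rate, budget=None, deadline=None):
    """Sketch of Rt timing auto-derivation, in integer microseconds.

    budget defaults to 80% of the period, deadline to 95%; an explicit
    value overrides only its own default.
    """
    period_us = 1_000_000 // rate
    if budget is None:
        budget = period_us * 80 // 100
    if deadline is None:
        deadline = period_us * 95 // 100
    return budget, deadline

derive_timing(100)                # (8000, 9500) -> 8ms budget, 9.5ms deadline
derive_timing(100, budget=5000)   # (5000, 9500) explicit budget, auto deadline
derive_timing(1000)               # (800, 950)   the 1 kHz defaults
```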

Full Interaction Matrix

This table shows what happens when you combine any two configuration parameters. Read it as: "row parameter + column parameter produces what result?"

Execution Class Parameters

Only one execution class can be active. If you pass multiple, the last one in your kwargs wins, with a warning logged. This is the single exception to order-independence: Python preserves keyword order, and the scheduler uses it to break the tie between conflicting class flags:

# DON'T DO THIS — compute is silently overridden
node = horus.Node(
    name="confused",
    tick=my_tick,
    compute=True,           # overridden
    on="scan",              # wins → Event class
)
# Warning: "confused: compute=True overridden by on='scan' — only one execution class applies"

| First | + Second | Result | Notes |
|---|---|---|---|
| compute=True | on="topic" | Event | Warning: compute overridden |
| compute=True | async def tick | AsyncIo | Warning: compute overridden |
| on="topic" | compute=True | Compute | Warning: on overridden |
| on="topic" | async def tick | AsyncIo | Warning: on overridden |
| async def tick | compute=True | Error | Mutually exclusive |
| async def tick | on="topic" | Error | Mutually exclusive |

async def is strict. Unlike compute and on, which produce warnings when combined, async def tick with either compute=True or on="topic" raises an error at startup. The scheduler cannot meaningfully combine async I/O with compute thread pools or event triggers.
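Detecting an async def tick is straightforward with the standard library; this sketch uses inspect.iscoroutinefunction, though whether horus uses this exact check is an assumption:

```python
import inspect

async def async_tick(node):
    """An async tick: would be classified AsyncIo."""

def sync_tick(node):
    """A plain tick: eligible for Rt, Compute, Event, or BestEffort."""

def looks_async(tick):
    # Coroutine functions are detectable at registration time,
    # before any scheduling decision is made.
    return inspect.iscoroutinefunction(tick)

looks_async(async_tick)  # True
looks_async(sync_tick)   # False
```

Because the check is unambiguous at startup, the scheduler can reject compute=True or on="topic" on a coroutine immediately instead of degrading at runtime.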

RT-Only Parameters on Non-RT Nodes

Some parameters only make sense for RT nodes. Using them on the wrong execution class produces warnings or errors:

| Parameter | On Rt node | On Compute node | On Event node | On AsyncIo node | On BestEffort node |
|---|---|---|---|---|---|
| budget | Sets budget | Error | Error | Error | Promotes to Rt |
| deadline | Sets deadline | Error | Error | Error | Promotes to Rt |
| on_miss | Sets policy | Warning (no effect) | Warning (no effect) | Warning (no effect) | Warning (no effect) |
| priority | Sets OS priority | Warning (ignored) | Warning (ignored) | Warning (ignored) | Warning (ignored) |
| core | Pins to CPU | Warning (ignored) | Warning (ignored) | Warning (ignored) | Warning (ignored) |
| watchdog | Per-node watchdog | Works | Works | Works | Works |
| rate | Sets tick rate | Frequency cap | Ignored | Frequency cap | Promotes to Rt |
| order | Sets order | Sets order | Sets order | Sets order | Sets order |
| failure_policy | Sets policy | Sets policy | Sets policy | Sets policy | Sets policy |

"Promotes to Rt" means the parameter changes a BestEffort node to Rt. Setting budget or deadline on a node with no explicit execution class makes it Rt — just like rate alone does. The scheduler interprets "this node has timing constraints" as "this node needs real-time scheduling."

The Promotion and Conflict Summary

To make the interaction rules concrete, here is every path to each execution class:

| Execution class | How to get it |
|---|---|
| Rt | rate alone; budget alone; deadline alone; rate + budget; rate + deadline |
| Compute | compute=True (optionally with rate as frequency cap) |
| Event | on="topic" (rate is ignored if present) |
| AsyncIo | async def tick (optionally with rate as frequency cap) |
| BestEffort | No rate, no compute, no on, no budget, no deadline, sync def tick |

Goal-Oriented Recipes

Instead of "what does this parameter do?", here is "I need X — which parameters do I pass?"

"100 Hz sensor driver with deadline monitoring"

import horus

imu_driver = horus.Node(
    name="imu_driver",
    tick=read_imu,
    order=1,                        # After safety monitor (order 0)
    rate=100,                       # 10ms period → Rt class
    on_miss="skip",                 # Drop a reading if we are late
    subs=["imu.config"],
    pubs=[horus.Imu],
)

Why these parameters: rate=100 alone triggers Rt class with auto-derived 8ms budget and 9.5ms deadline. on_miss="skip" means if the driver stalls waiting for hardware, skip one reading rather than accumulating delay. order=1 runs after safety-critical nodes.

What removing each parameter changes:

  • Remove rate → BestEffort, no timing enforcement at all
  • Remove on_miss → defaults to "warn" (logs but takes no action)
  • Remove order → defaults to 100 (normal priority)

"Background logger that must not starve RT nodes"

import horus

logger = horus.Node(
    name="logger",
    tick=log_data,
    order=200,                      # Runs last
    compute=True,                   # Thread pool — not the main tick thread
    rate=10,                        # At most 10x/sec (NOT RT!)
    failure_policy="ignore",        # Never crash for logging
    subs=["imu", "cmd_vel", "scan"],
)

Why compute=True and not just BestEffort: A logger doing disk I/O in the main loop would block all BestEffort nodes behind it. compute=True moves it to the thread pool. rate=10 caps frequency (NOT RT — compute=True overrides that).

What if you used async def tick instead: Also works if your logger does network I/O (cloud upload). Use async def tick for network I/O, compute=True for local file I/O with CPU-bound formatting.

"Event-driven planner that reacts to new scans"

import horus

planner = horus.Node(
    name="planner",
    tick=plan_path,
    order=5,
    on="lidar.scan",                # Sleep until new scan arrives
    subs=[horus.LaserScan],
    pubs=["path"],
)

Why not rate: The planner has nothing to do until a new scan arrives. Polling at a fixed rate wastes CPU. on="lidar.scan" means zero CPU when idle, instant wake on new data.

Can you add budget to an Event node? No — this is an error at startup. Event nodes trigger on data arrival, not on a fixed schedule, so deadline enforcement does not apply.

"1 kHz motor controller on production hardware"

import horus

us = horus.us

motor_ctrl = horus.Node(
    name="motor_ctrl",
    tick=control_motors,
    order=0,                        # Highest priority
    rate=1000,                      # 1ms period → Rt
    budget=300 * us,                # Must finish in 300μs
    deadline=900 * us,              # Hard wall at 900μs
    on_miss="safe_mode",            # Hold position on overrun
    priority=90,                    # OS-level SCHED_FIFO priority
    core=0,                         # Pinned to CPU 0
    subs=[horus.CmdVel],
    pubs=["motor.pwm"],
)

Why explicit budget and deadline: Auto-derived values (800μs budget, 950μs deadline at 1 kHz) are generous defaults. After profiling, you know the motor controller takes ~200μs. Setting budget=300*us with deadline=900*us gives a tighter budget for monitoring while leaving headroom before the deadline fires.

Why priority=90 and core=0: On a multi-core robot computer, pinning the motor controller to an isolated CPU core eliminates jitter from OS scheduling and cache migration. priority=90 ensures the kernel never preempts this thread for normal processes.

"ML inference that takes 50-200ms"

import horus

detector = horus.Node(
    name="yolo_detector",
    tick=run_inference,
    order=10,
    compute=True,                   # Thread pool — long-running is fine
    subs=[horus.Image],
    pubs=["detections"],
)

Why no rate: ML inference time varies (50-200ms depending on scene complexity). A fixed rate would either waste CPU (rate too low) or queue up work (rate too high). Let it run as fast as it can on the thread pool.

Why not async def tick: ML inference is CPU-bound, not I/O-bound. compute=True runs on a CPU thread pool optimized for parallel work. async def tick runs on the async runtime, which is optimized for I/O waiting.

"Safety monitor that must never miss"

import horus

us = horus.us
ms = horus.ms

safety_monitor = horus.Node(
    name="safety_monitor",
    tick=check_safety,
    order=0,                        # Runs first, always
    rate=1000,                      # Matches fastest control loop
    budget=100 * us,                # Must be extremely fast
    deadline=200 * us,              # Tight deadline
    on_miss="stop",                 # Kill everything if this misses
    priority=99,                    # Maximum OS priority
    core=1,                         # Dedicated CPU core
    watchdog=5 * ms,                # Tight per-node watchdog
    failure_policy="fatal",         # Panic if tick() raises
    pubs=["safety.status"],
)

Every parameter is load-bearing: Remove any one and you lose a safety guarantee. This is the maximum-configuration pattern for the most critical node in your system.

"Async telemetry uploader with graceful degradation"

import horus
import aiohttp

async def upload_tick(node):
    if node.has_msg("telemetry"):
        data = node.recv("telemetry")
        try:
            async with aiohttp.ClientSession() as session:
                await session.post("https://api.example.com/telemetry", json=data)
        except aiohttp.ClientError:
            node.log_warning("Upload failed — will retry next tick")

uploader = horus.Node(
    name="uploader",
    tick=upload_tick,               # async def → AsyncIo (auto-detected)
    rate=1,                         # At most 1x/sec (frequency cap, not RT)
    order=200,                      # Low priority
    failure_policy="ignore",        # Never crash for telemetry
    subs=["telemetry"],
)

Why no compute=True: The async def tick is auto-detected and classified as AsyncIo. Adding compute=True would be an error — they are mutually exclusive.

"Event-driven emergency stop handler"

import horus

def handle_estop(node):
    node.log_error("EMERGENCY STOP received!")
    node.send("cmd_vel", horus.CmdVel(linear=0.0, angular=0.0))
    node.request_stop()

estop = horus.Node(
    name="estop",
    tick=handle_estop,
    order=0,                        # Highest priority
    on="emergency.stop",            # Only fires when message arrives
    failure_policy="fatal",         # If this fails, everything stops
    subs=["emergency.stop"],
    pubs=[horus.CmdVel],
)

Why on instead of rate: An emergency stop handler should not be polling. It should sleep and use zero CPU until the moment an emergency.stop message arrives. on="emergency.stop" provides instant wake-up with no wasted cycles.

What Happens If I...

Quick answers to common "what if" questions.

"...pass rate and compute=True?" Compute class. rate becomes a frequency cap, not RT. No budget, no deadline, no timing enforcement.

"...pass budget without rate?" RT class. budget alone implies "this node has timing requirements." Deadline auto-derived as deadline = budget.

"...pass deadline without rate or budget?" RT class. Budget is not set (no auto-derivation without rate). The scheduler monitors wall time against the deadline.

"...set budget larger than deadline?" Error at startup. Budget is "expected time," deadline is "maximum time." A budget larger than the deadline means you expect the work to take longer than the hard limit — that is a configuration mistake.

"...set budget=0?" Error at startup. Zero budget is meaningless.
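These two startup checks are easy to picture as validation code. A hedged sketch — validate_timing is a made-up name for illustration, not the horus API:

```python
def validate_timing(budget=None, deadline=None):
    """Sketch of the startup checks described above (not real horus code)."""
    if budget is not None and budget <= 0:
        raise ValueError("budget must be positive")
    if budget is not None and deadline is not None and budget > deadline:
        raise ValueError("budget exceeds deadline")

validate_timing(budget=300, deadline=900)    # fine
# validate_timing(budget=1000, deadline=900) -> ValueError (budget exceeds deadline)
# validate_timing(budget=0)                  -> ValueError (budget must be positive)
```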

"...pass on_miss="stop" on a Compute node?" Warning: "has no effect without a deadline." Compute nodes have no deadline, so the miss policy can never trigger. The node runs fine, but on_miss does nothing.

"...pass priority=99 on a Compute node?" Warning: "only RT nodes get SCHED_FIFO threads." The priority value is then ignored. The node runs fine — it just does not get OS-level priority.

"...pass on="" (empty topic string)?" Error at startup. An Event node with an empty topic can never trigger.

"...pass compute=True with async def tick?" Error at startup. These are mutually exclusive. async def tick runs on the async I/O runtime; compute=True runs on the CPU thread pool. Pick one.

"...pass on="topic" with async def tick?" Error at startup. Event-driven triggering and async I/O are mutually exclusive.

"...just pass name and tick with no other parameters?" BestEffort class with order=100. Ticks in the main loop at the scheduler's global rate (30 by default). This is the simplest valid configuration.

"...pass rate=30 (the default) — is that RT?" Yes. rate alone always means RT. Even rate=30 produces an RT node with a 26.7ms budget (80% of 33.3ms) and 31.7ms deadline (95%). If you want 30 Hz without RT, add compute=True or use async def tick.

"...use horus.run() with rt=True and all nodes are Compute?" rt=True on the scheduler enables system-level RT features (memory locking, SCHED_FIFO). But individual nodes still get their own execution class. A compute=True node stays on the thread pool even with rt=True on the scheduler — RT thread allocation only happens for nodes classified as Rt.

Anti-Patterns

Cargo-culting RT configuration

# WRONG: Adding RT parameters "just in case" to a logger
logger = horus.Node(
    name="logger",
    tick=log_data,
    rate=100,
    budget=5 * horus.ms,
    priority=50,
    core=3,
)

This wastes a dedicated CPU thread and an entire CPU core on a logger. The rate=100 alone makes it RT, budget confirms RT, and then priority and core pin it to real hardware resources. Use compute=True or just leave it as BestEffort with rate=30.

Using compute=True for everything

# WRONG: Motor controller on the thread pool
motor_ctrl = horus.Node(
    name="motor_ctrl",
    tick=control_motors,
    compute=True,                   # No timing guarantees!
    rate=1000,
)

compute=True with rate=1000 gives you a frequency cap, not RT. The motor controller has no budget, no deadline, and no on_miss policy. When the thread pool is busy with other Compute nodes, the motor controller waits its turn. Use rate=1000 alone for nodes with timing requirements.

Deadline without a response plan

# QUESTIONABLE: Deadline set but using default on_miss="warn"
motor_ctrl = horus.Node(
    name="motor_ctrl",
    tick=control_motors,
    rate=1000,
    budget=300 * horus.us,
    deadline=900 * horus.us,
    # on_miss defaults to "warn"
)

If you have set explicit budget and deadline, you have decided this node's timing matters. But the default on_miss="warn" just logs a warning and continues — the robot keeps moving with a late motor command. Add on_miss="safe_mode" or on_miss="skip" to define what should actually happen.

Mixing intent across classes

# WRONG: Event node that also needs a deadline
handler = horus.Node(
    name="handler",
    tick=handle_command,
    on="command",
    deadline=10 * horus.ms,         # ERROR at startup!
)

Event nodes trigger on messages, not time. A deadline ("must finish within 10ms of... what?") does not apply because there is no periodic schedule to miss. If you need deadline enforcement, use rate instead of on and poll the topic in your tick function.

on_miss without a deadline

# WARNING: on_miss has nothing to enforce
processor = horus.Node(
    name="processor",
    tick=process_data,
    compute=True,
    rate=50,
    on_miss="stop",                 # Warning: no deadline to miss
)

compute=True means no deadline. on_miss="stop" says "stop the scheduler if I miss my deadline," but there is no deadline to miss. This builds successfully with a warning, but on_miss is dead code.

priority on non-RT nodes

# WARNING: priority is silently ignored
planner = horus.Node(
    name="planner",
    tick=plan_path,
    compute=True,
    rate=10,
    priority=90,                    # Warning: only RT gets SCHED_FIFO
)

priority sets the OS-level SCHED_FIFO scheduling priority, which only applies to dedicated RT threads. Compute nodes run on the thread pool where OS priority is managed by the thread pool itself. This builds with a warning, and priority=90 is ignored.

Putting It All Together: Complete System

import horus

us = horus.us
ms = horus.ms


# --- Tick functions ---

def check_safety(node):
    """Verify all systems nominal."""
    imu = node.recv("imu")
    if imu and abs(imu.accel_z) < 5.0:
        node.send("safety.status", {"ok": False, "reason": "freefall"})
        node.request_stop()
        return  # do not also report ok=True after requesting a stop
    node.send("safety.status", {"ok": True})

def read_imu(node):
    """Read IMU hardware, publish typed message."""
    reading = read_hardware_imu()
    node.send("imu", horus.Imu(
        accel_x=reading[0], accel_y=reading[1], accel_z=reading[2],
        gyro_x=reading[3], gyro_y=reading[4], gyro_z=reading[5],
    ))

def control_motors(node):
    """PID loop: read cmd_vel, write motor PWM."""
    cmd = node.recv("cmd_vel")
    if cmd:
        left = cmd.linear - cmd.angular * 0.3
        right = cmd.linear + cmd.angular * 0.3
        node.send("motor.pwm", {"left": left, "right": right})

def handle_estop(node):
    """Immediate stop on emergency signal."""
    node.send("cmd_vel", horus.CmdVel(linear=0.0, angular=0.0))
    node.log_error("EMERGENCY STOP activated")
    node.request_stop()

def plan_path(node):
    """CPU-heavy path planning from latest scan."""
    scan = node.recv("scan")
    if scan:
        path = compute_path(scan)
        node.send("path", path)

def run_inference(node):
    """ML object detection — variable duration."""
    img = node.recv("camera.rgb")
    if img:
        detections = model.predict(img.to_numpy())
        for det in detections:
            node.send("detections", {
                "class": det.class_name,
                "confidence": float(det.confidence),
                "bbox": [det.x1, det.y1, det.x2, det.y2],
            })

async def upload_telemetry(node):
    """Async cloud upload — network I/O."""
    import aiohttp
    if node.has_msg("telemetry"):
        data = node.recv("telemetry")
        try:
            async with aiohttp.ClientSession() as session:
                await session.post("https://telemetry.example.com/v1", json=data)
        except Exception as e:
            node.log_warning(f"Upload failed: {e}")

def update_dashboard(node):
    """Low-priority display update."""
    stats = {"tick": horus.tick(), "elapsed": horus.elapsed()}
    node.send("dashboard", stats)


# --- Node definitions ---

# Safety monitor — maximum everything
safety = horus.Node(
    name="safety_monitor",
    tick=check_safety,
    order=0,
    rate=1000,
    budget=100 * us,
    deadline=200 * us,
    on_miss="stop",
    priority=99,
    core=1,
    watchdog=5 * ms,
    failure_policy="fatal",
    subs=[horus.Imu],
    pubs=["safety.status"],
)

# IMU driver — RT with auto-derived timing
imu = horus.Node(
    name="imu_driver",
    tick=read_imu,
    order=1,
    rate=200,
    on_miss="skip",
    pubs=[horus.Imu],
)

# Motor controller — strict RT
motor = horus.Node(
    name="motor_ctrl",
    tick=control_motors,
    order=2,
    rate=500,
    on_miss="safe_mode",
    priority=80,
    core=0,
    subs=[horus.CmdVel],
    pubs=["motor.pwm"],
)

# Emergency stop — event-driven, zero CPU when idle
estop = horus.Node(
    name="estop",
    tick=handle_estop,
    order=0,
    on="emergency.stop",
    failure_policy="fatal",
    subs=["emergency.stop"],
    pubs=[horus.CmdVel],
)

# Path planner — CPU-heavy, no rate constraint
planner = horus.Node(
    name="planner",
    tick=plan_path,
    order=10,
    compute=True,
    subs=[horus.LaserScan],
    pubs=["path"],
)

# ML detector — CPU-heavy, rate-limited
detector = horus.Node(
    name="detector",
    tick=run_inference,
    order=11,
    compute=True,
    rate=10,
    subs=[horus.Image],
    pubs=["detections"],
)

# Cloud telemetry — async I/O
telemetry = horus.Node(
    name="telemetry",
    tick=upload_telemetry,
    rate=1,
    order=100,
    failure_policy="ignore",
    subs=["telemetry"],
)

# Dashboard — BestEffort, no special needs
dashboard = horus.Node(
    name="dashboard",
    tick=update_dashboard,
    order=200,
    pubs=["dashboard"],
)


# --- Run the system ---

sched = horus.Scheduler(tick_rate=500, watchdog_ms=500)
sched.add(safety)
sched.add(imu)
sched.add(motor)
sched.add(estop)
sched.add(planner)
sched.add(detector)
sched.add(telemetry)
sched.add(dashboard)
sched.run()

Each node uses exactly the parameters it needs — no more, no less:

| Node | Class | Why |
|---|---|---|
| safety_monitor | Rt | Every RT parameter enabled — the most critical node |
| imu_driver | Rt | rate=200 alone, auto-derived timing, skip on miss |
| motor_ctrl | Rt | rate=500 with explicit priority and CPU pinning |
| estop | Event | on="emergency.stop" — zero CPU until triggered |
| planner | Compute | compute=True — CPU-heavy, runs on thread pool |
| detector | Compute | compute=True, rate=10 — thread pool with frequency cap |
| telemetry | AsyncIo | async def tick — auto-detected, network I/O |
| dashboard | BestEffort | No class parameters — runs in main loop at default rate |

Design Decisions

Why is parameter order irrelevant (all-at-once resolution)? Python kwargs have no meaningful order. But even if they did, the scheduler resolves everything together at startup because the alternative — resolving eagerly as each parameter is parsed — creates subtle bugs. In an eager system, rate=100, compute=True and compute=True, rate=100 could produce different nodes. All-at-once resolution eliminates this entire class of bugs.

Why does rate change meaning based on context? The alternative was having two parameters: rate for RT and frequency_cap for non-RT. But this forces developers to understand execution classes before they can set a tick rate. With the current design, the intent is clear from context: rate=1000 alone means "timing matters" (Rt), while rate=10, compute=True means "do not run too often" (frequency cap). The mental model is "describe what you need, the scheduler figures out how to run it."

Why errors instead of silent fixes for invalid combinations? Setting budget on a Compute node is almost always a mistake — the developer thinks they are getting timing enforcement, but Compute nodes do not have it. Silently ignoring the budget would hide the bug. Erroring at startup catches it immediately, before the robot moves. The principle: configuration mistakes should fail fast, not fail silently on the factory floor.

Why is async def tick strict about combinations while compute and on are lenient? compute=True and on="topic" are simple flags that the scheduler can reason about — when both are present, one clearly overrides the other. But async def tick fundamentally changes how the function runs (coroutine vs regular function). Running an async function on a synchronous thread pool (compute) would require wrapping it in asyncio.run() per tick, adding 50-100μs of event loop overhead. Rather than silently degrading performance, the scheduler rejects the combination.

Why kwargs instead of a builder chain? Python does not have HORUS's Rust builder pattern (.rate(100).compute().build()). The idiomatic Python equivalent is kwargs: Node(rate=100, compute=True). This gives the same "fill out a form" mental model with standard Python syntax. IDE auto-complete works, type checkers validate parameter types, and help(horus.Node) shows every option.

Trade-offs

| Gain | Cost |
|---|---|
| All-at-once resolution — no "pass this last" bugs | Must understand that rate alone means RT |
| rate dual meaning — one parameter, context-dependent behavior | Must know that rate + compute=True is NOT RT |
| Strict validation — catches mistakes at startup | Learning curve: must understand which combinations are valid |
| RT auto-detection — no explicit rt=True per node | Less visible which nodes are RT (use horus monitor to check) |
| Warnings for ignored parameters — priority on Compute logs a warning | Warning fatigue if you are intentionally experimenting |
| async def auto-detection — zero config for async I/O | Cannot use async tick on compute thread pool (must choose) |

See Also