# Perception Messages

Output types for machine learning and computer vision pipelines — detections, segmentation masks, tracked objects, and landmarks.

```python
# simplified
from horus import Detection, Detection3D, BoundingBox2D, BoundingBox3D, SegmentationMask
```

## Detection

2D object detection — flat constructor with class, confidence, and bounding box fields.

```python
# simplified
import horus

det = horus.Detection(
    class_name="person",
    confidence=0.95,
    x=100.0, y=50.0,
    width=100.0, height=250.0,
    class_id=0,
    instance_id=0,
)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `class_name` | `str` | `""` | Detected class name |
| `confidence` | `float` | `0.0` | Detection confidence |
| `x`, `y` | `float` | `0.0` | Bounding box top-left (px) |
| `width`, `height` | `float` | `0.0` | Bounding box size (px) |
| `class_id` | `int` | `0` | Numeric class identifier |
| `instance_id` | `int` | `0` | Instance identifier |
| `bbox` | `BoundingBox2D` | | Bounding box as a `BoundingBox2D` object |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `is_confident(threshold)` | `bool` | `True` if confidence exceeds threshold |
| `with_class_id(class_id)` | `Detection` | Return a copy with `class_id` set |
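The `is_confident` check is a plain threshold comparison on the confidence field. A minimal plain-Python sketch of filtering a detection list by that rule (using throwaway dicts, not the horus `Detection` class):

```python
# Illustrative stand-in for Detection confidence filtering; plain dicts,
# not the horus API.
def is_confident(det, threshold):
    """Mirror of Detection.is_confident: strict greater-than comparison (assumed)."""
    return det["confidence"] > threshold

dets = [
    {"class_name": "person", "confidence": 0.95},
    {"class_name": "dog", "confidence": 0.40},
]
kept = [d for d in dets if is_confident(d, 0.5)]
```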

## Detection3D

3D object detection — flat constructor with center, dimensions, and yaw.

```python
# simplified
det3d = horus.Detection3D(
    class_name="car",
    confidence=0.87,
    cx=5.0, cy=2.0, cz=0.8,
    length=4.5, width=1.8, height=1.5,
    yaw=0.0,
)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `class_name` | `str` | `""` | Detected class name |
| `confidence` | `float` | `0.0` | Detection confidence |
| `cx`, `cy`, `cz` | `float` | `0.0` | Bounding box center (m) |
| `length`, `width`, `height` | `float` | `0.0` | Bounding box dimensions (m) |
| `yaw` | `float` | `0.0` | Heading angle (rad) |
| `bbox` | `BoundingBox3D` | | Bounding box as a `BoundingBox3D` object |
| `velocity_x`, `velocity_y`, `velocity_z` | `float` | `0.0` | Object velocity |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `with_velocity(vx, vy, vz)` | `Detection3D` | Return a copy with velocity set |
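Given the velocity components, planar speed and heading follow from standard trigonometry. A sketch using the common `atan2(vy, vx)` convention (an assumption about the heading frame, not something the horus API states):

```python
import math

def speed_heading(vx, vy):
    """Planar speed magnitude and heading angle (rad) from velocity components."""
    return math.hypot(vx, vy), math.atan2(vy, vx)

# A 3-4-5 triangle: speed is 5.0 m/s.
s, h = speed_heading(3.0, 4.0)
```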

## BoundingBox2D

Axis-aligned 2D bounding box in pixel coordinates.

```python
# simplified
bbox = horus.BoundingBox2D(x=100.0, y=50.0, width=100.0, height=250.0)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `x`, `y` | `float` | `0.0` | Top-left corner (px) |
| `width`, `height` | `float` | `0.0` | Box dimensions (px) |
| `center_x` | `float` | | Box center X (getter only) |
| `center_y` | `float` | | Box center Y (getter only) |
| `area` | `float` | | Box area in pixels (getter only) |

Static Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `BoundingBox2D.from_center(cx, cy, width, height)` | `BoundingBox2D` | Create from a center point |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `iou(other)` | `float` | Intersection over Union with another `BoundingBox2D` |
| `as_tuple()` | `tuple` | Returns `(x, y, width, height)` |
| `as_xyxy()` | `tuple` | Returns `(x1, y1, x2, y2)` format |
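`iou` computes standard Intersection over Union between axis-aligned boxes. An equivalent plain-Python sketch over `(x, y, width, height)` tuples (the same layout `as_tuple` returns; the implementation below is illustrative, not horus's):

```python
def iou_xywh(a, b):
    """IoU of two axis-aligned boxes given as (x, y, width, height)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # Intersection rectangle, clamped to zero when the boxes do not overlap.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

# Two 100x100 boxes offset by 50 px overlap on half their width: IoU = 1/3.
overlap = iou_xywh((0, 0, 100, 100), (50, 0, 100, 100))
```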

## BoundingBox3D

3D bounding box in world coordinates.

```python
# simplified
bbox3d = horus.BoundingBox3D(
    cx=5.0, cy=2.0, cz=0.8,
    length=4.5, width=1.8, height=1.5,
    yaw=0.0,
)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `cx`, `cy`, `cz` | `float` | `0.0` | Box center (m) |
| `length`, `width`, `height` | `float` | `0.0` | Box dimensions (m) |
| `yaw` | `float` | `0.0` | Heading angle (rad) |
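The box footprint in the ground plane can be recovered from `cx, cy, length, width, yaw`. A sketch under a common convention (yaw rotates the length axis counter-clockwise about +z; this convention is an assumption, not stated by the horus API):

```python
import math

def footprint_corners(cx, cy, length, width, yaw):
    """Four (x, y) corners of a yawed box footprint in world coordinates."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Corners in the box's local frame, centered at the origin.
    half = [( length / 2,  width / 2), ( length / 2, -width / 2),
            (-length / 2, -width / 2), (-length / 2,  width / 2)]
    # Rotate each local corner by yaw, then translate to the box center.
    return [(cx + lx * c - ly * s, cy + lx * s + ly * c) for lx, ly in half]

# Car-sized box from the constructor example above, with zero yaw.
corners = footprint_corners(5.0, 2.0, 4.5, 1.8, 0.0)
```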

## SegmentationMask

Per-pixel class labels.

```python
# simplified
mask = horus.SegmentationMask(width=640, height=480, mask_type=0, num_classes=21)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `width`, `height` | `int` | `0` | Image dimensions (px) |
| `mask_type` | `int` | `0` | Segmentation mask type (getter only) |
| `num_classes` | `int` | `0` | Number of semantic classes |
| `frame_id` | `str` | | Frame identifier (getter only) |
| `timestamp_ns` | `int` | `0` | Timestamp (ns) |
| `seq` | `int` | `0` | Sequence number |

Static Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `SegmentationMask.semantic(width, height, num_classes)` | `SegmentationMask` | Create a semantic segmentation mask |
| `SegmentationMask.instance(width, height)` | `SegmentationMask` | Create an instance segmentation mask |
| `SegmentationMask.panoptic(width, height, num_classes)` | `SegmentationMask` | Create a panoptic segmentation mask |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `is_semantic()` | `bool` | `True` if semantic segmentation |
| `is_instance()` | `bool` | `True` if instance segmentation |
| `is_panoptic()` | `bool` | `True` if panoptic segmentation |
| `data_size()` | `int` | Size of the mask data buffer in bytes |
| `data_size_u16()` | `int` | Size of the mask data buffer in u16 elements |
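A segmentation mask is, at heart, a flat row-major buffer of per-pixel labels. A plain-Python sketch of a per-class pixel count over such a buffer (toy data and helper name are hypothetical, not the horus buffer API):

```python
from collections import Counter

def class_histogram(labels, width, height):
    """Count pixels per class label in a row-major flat label buffer."""
    assert len(labels) == width * height, "buffer length must match image size"
    return Counter(labels)

# 4x2 toy mask: background (class 0) with a two-pixel class-1 blob.
labels = [0, 0, 1, 1,
          0, 0, 0, 0]
hist = class_histogram(labels, width=4, height=2)
```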

## TrackedObject

Object with persistent tracking ID across frames.

```python
# simplified
tracked = horus.TrackedObject(
    track_id=42,
    x=100.0, y=50.0,
    width=100.0, height=250.0,
    class_id=0,
    confidence=0.9,
)
```
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `track_id` | `int` | `0` | Persistent tracking ID |
| `x`, `y` | `float` | `0.0` | Bounding box top-left (px) |
| `width`, `height` | `float` | `0.0` | Bounding box size (px) |
| `class_id` | `int` | `0` | Numeric class identifier |
| `confidence` | `float` | `0.0` | Detection confidence |
| `class_name` | `str` | | Class name |
| `bbox` | `BoundingBox2D` | | Current bounding box (getter only) |
| `predicted_bbox` | `BoundingBox2D` | | Predicted bounding box (getter only) |
| `velocity_x`, `velocity_y` | `float` | | Estimated velocity (getter only) |
| `velocity` | `tuple` | | Velocity as `(vx, vy)` tuple (getter only) |
| `accel_x`, `accel_y` | `float` | | Estimated acceleration (getter only) |
| `age` | `int` | | Track age in frames (getter only) |
| `hits` | `int` | | Number of detection hits (getter only) |
| `time_since_update` | `int` | | Frames since last update (getter only) |
| `state` | `int` | | Track state code (getter only) |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `speed()` | `float` | Estimated speed (magnitude of velocity) |
| `heading()` | `float` | Estimated heading angle (radians) |
| `is_tentative()` | `bool` | `True` if track is tentative (not yet confirmed) |
| `is_confirmed()` | `bool` | `True` if track is confirmed |
| `is_deleted()` | `bool` | `True` if track is marked for deletion |
| `confirm()` | `None` | Confirm the track |
| `delete()` | `None` | Mark the track for deletion |
| `mark_missed()` | `None` | Mark a missed detection (no match this frame) |
| `update(bbox, confidence)` | `None` | Update with a new detection |
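The lifecycle methods follow the usual tentative → confirmed → deleted pattern from tracking-by-detection. A minimal state-machine sketch of that pattern (the thresholds `min_hits` and `max_age` are illustrative parameters, not horus's):

```python
TENTATIVE, CONFIRMED, DELETED = 0, 1, 2

class TrackState:
    """Illustrative track lifecycle; not the horus implementation."""

    def __init__(self, min_hits=3, max_age=5):
        self.state = TENTATIVE
        self.hits = 0
        self.time_since_update = 0
        self.min_hits = min_hits
        self.max_age = max_age

    def update(self):
        # A matched detection resets the miss counter and may confirm the track.
        self.hits += 1
        self.time_since_update = 0
        if self.state == TENTATIVE and self.hits >= self.min_hits:
            self.state = CONFIRMED

    def mark_missed(self):
        # Unmatched frames age the track; tentative tracks die immediately,
        # confirmed tracks die after too many consecutive misses.
        self.time_since_update += 1
        if self.state == TENTATIVE or self.time_since_update > self.max_age:
            self.state = DELETED

t = TrackState()
for _ in range(3):
    t.update()  # three consecutive hits confirm the track
```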

## Landmark / Landmark3D

Visual landmarks for SLAM and localization.

```python
# simplified
lm = horus.Landmark(x=1.5, y=2.3, visibility=0.95, index=7)
lm3d = horus.Landmark3D(x=1.5, y=2.3, z=0.8, visibility=0.95, index=7)
```

### Landmark

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `x`, `y` | `float` | `0.0` | Position (px or m) |
| `visibility` | `float` | `1.0` | Visibility score (0.0-1.0) |
| `index` | `int` | `0` | Landmark index |

Static Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `Landmark.visible(x, y, index)` | `Landmark` | Create a visible landmark (`visibility=1.0`) |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `is_visible(threshold)` | `bool` | `True` if visibility exceeds threshold |
| `distance_to(other)` | `float` | Euclidean distance to another `Landmark` |

### Landmark3D

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `x`, `y`, `z` | `float` | `0.0` | 3D position (m) |
| `visibility` | `float` | `1.0` | Visibility score (0.0-1.0) |
| `index` | `int` | `0` | Landmark index |

Static Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `Landmark3D.visible(x, y, z, index)` | `Landmark3D` | Create a visible 3D landmark |

Methods:

| Method | Returns | Description |
| --- | --- | --- |
| `is_visible(threshold)` | `bool` | `True` if visibility exceeds threshold |
| `distance_to(other)` | `float` | Euclidean distance to another `Landmark3D` |
| `to_2d()` | `Landmark` | Project to 2D (drops z coordinate) |
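`distance_to` and `to_2d` reduce to Euclidean distance and dropping the z coordinate. A plain sketch over coordinate tuples (assumed semantics, not the horus classes):

```python
import math

def distance_3d(a, b):
    """Euclidean distance between two (x, y, z) points, as distance_to computes."""
    return math.dist(a, b)

def to_2d(p):
    """Project an (x, y, z) point to (x, y) by dropping z."""
    return p[:2]

# A 3-4-12 right-angle triple: distance is exactly 13.0.
d = distance_3d((0.0, 0.0, 0.0), (3.0, 4.0, 12.0))
```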

## Example: YOLO Detection Pipeline

```python
# simplified
import horus

# `model` is assumed to be a detector loaded elsewhere (e.g. a YOLO model
# exposing predict() and per-result class/box attributes).

def detect_tick(node):
    img = node.recv("camera.rgb")
    if img is None:
        return

    frame = img.to_numpy()  # Zero-copy
    results = model.predict(frame)

    for r in results:
        det = horus.Detection(
            class_name=r.class_name,
            confidence=float(r.confidence),
            x=r.x, y=r.y,
            width=r.w, height=r.h,
            class_id=r.class_id,
        )
        node.send("detections", det)

detector = horus.Node(
    name="yolo",
    subs=[horus.Image],
    pubs=[horus.Detection],
    tick=detect_tick,
    rate=30,
    compute=True,
    on_miss="skip",
)
horus.run(detector)
```

## See Also