# Detection & Vision Messages (C++)
Perception message types live in the `horus::msg` namespace. Include them via `<horus/msg/detection.hpp>` and `<horus/msg/vision.hpp>`.
## Detection Types
| Type | Key Fields | Use Case |
|---|---|---|
| `BoundingBox2D` | `center_x`/`center_y`, `width`, `height`, `angle` | 2D object detection |
| `BoundingBox3D` | `center[3]`, `size[3]`, `rotation[4]`, `confidence` | 3D object detection |
| `Detection` | `bbox` (`BoundingBox2D`), `class_id`, `confidence` | YOLO/SSD output |
| `Detection3D` | `bbox` (`BoundingBox3D`), `class_id`, `velocity[3]` | 3D detector output |
| `TrackedObject` | `track_id`, `position[3]`, `velocity[3]`, `age` | MOT tracker |
| `SegmentationMask` | `width`, `height`, `num_classes`, `mask_type` | Semantic segmentation |
## Vision Types
| Type | Key Fields | Use Case |
|---|---|---|
| `CameraInfo` | `width`, `height`, `fx`/`fy`/`cx`/`cy`, `distortion[5]` | Camera calibration |
| `RegionOfInterest` | `x_offset`/`y_offset`, `width`, `height`, `do_rectify` | Image crop region |
| `StereoInfo` | `left` (`CameraInfo`), `right` (`CameraInfo`), `baseline` | Stereo pair |
## Detection Example
```cpp
class Detector : public horus::Node {
public:
  Detector() : Node("detector") {
    det_pub_ = advertise<horus::msg::Detection>("detections");
  }

  void tick() override {
    // After running inference...
    horus::msg::Detection det{};
    det.bbox.center_x = 320.0f;  // pixels
    det.bbox.center_y = 240.0f;
    det.bbox.width    = 50.0f;
    det.bbox.height   = 80.0f;
    det.bbox.angle    = 0.0f;
    det.class_id      = 1;       // "person"
    det.confidence    = 0.95f;
    det.timestamp_ns  = 0;
    det_pub_->send(det);
  }

private:
  horus::Publisher<horus::msg::Detection>* det_pub_;
};
```
## CameraInfo — Intrinsic Calibration
```cpp
horus::msg::CameraInfo cam{};
cam.width  = 640;
cam.height = 480;
cam.fx = 525.0;  // focal length x (pixels)
cam.fy = 525.0;  // focal length y
cam.cx = 320.0;  // principal point x
cam.cy = 240.0;  // principal point y
cam.distortion[0] = -0.28;  // k1
cam.distortion[1] =  0.07;  // k2
cam.distortion[2] =  0.0;   // p1
cam.distortion[3] =  0.0;   // p2
cam.distortion[4] =  0.0;   // k3
```
## Tracking Types
| Type | Key Fields | Use Case |
|---|---|---|
| `Landmark` | `id`, `x`, `y`, `covariance[4]` | 2D landmark |
| `Landmark3D` | `x`, `y`, `z`, `visibility`, `index` | 3D landmark (packed) |
| `LandmarkArray` | `num_landmarks`, `confidence`, `bbox_*` | Pose estimation output |
| `TrackingHeader` | `frame_count`, `active_tracks` | Tracker metadata |
## See Also
- Sensor Messages — `LaserScan` and `Imu` for perception input
- TensorPool API — `Image` and `PointCloud` for raw perception data