Data Types & Encoding

High-level types for camera images, 3D point clouds, and depth maps. These types use zero-copy shared memory transport internally but expose ergonomic, domain-specific APIs.

use horus::prelude::*; // Provides Image, PointCloud, DepthImage, TensorDtype, Device

Image

Represents a camera frame with pixel-level access and encoding metadata.

// Create a 1080p RGB image
let image = Image::new(1920, 1080, ImageEncoding::Rgb8);

// Publish on a topic
let topic: Topic<Image> = Topic::new("camera.rgb")?;
topic.send(&image);

// Receive and access pixels
if let Some(img) = topic.recv() {
    println!("{}x{}, encoding: {:?}", img.width(), img.height(), img.encoding());
    let pixel = img.pixel(100, 200);
}

ImageEncoding

Encoding   Channels   Dtype   Description
Rgb8       3          U8      Standard RGB color
Rgba8      4          U8      RGB with alpha
Bgr8       3          U8      BGR (OpenCV default)
Bgra8      4          U8      BGR with alpha
Mono8      1          U8      Grayscale 8-bit
Mono16     1          U16     Grayscale 16-bit
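
The channel count and dtype together determine a frame's buffer size: width × height × channels × bytes per element. A minimal sketch of that arithmetic — the `bytes_per_pixel` helper below is illustrative, not part of the horus API:

```rust
/// Bytes per pixel for each encoding in the table above
/// (channels × element size). Illustrative helper, not a horus API.
fn bytes_per_pixel(encoding: &str) -> usize {
    match encoding {
        "Rgb8" | "Bgr8" => 3,   // 3 channels × 1 byte (U8)
        "Rgba8" | "Bgra8" => 4, // 4 channels × 1 byte (U8)
        "Mono8" => 1,           // 1 channel × 1 byte (U8)
        "Mono16" => 2,          // 1 channel × 2 bytes (U16)
        _ => panic!("unknown encoding"),
    }
}

/// Total buffer size of a frame in bytes.
fn buffer_size(width: usize, height: usize, encoding: &str) -> usize {
    width * height * bytes_per_pixel(encoding)
}
```

A 1920×1080 Rgb8 frame, like the one created above, therefore occupies 1920 × 1080 × 3 = 6,220,800 bytes.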

PointCloud

Represents a 3D point cloud with per-point field access.

// Create a point cloud with XYZ + intensity fields
let cloud = PointCloud::new(10_000, &["x", "y", "z", "intensity"], TensorDtype::F32);

// Publish
let topic: Topic<PointCloud> = Topic::new("lidar.points")?;
topic.send(&cloud);

// Receive and iterate points
if let Some(pc) = topic.recv() {
    println!("{} points, {} fields", pc.num_points(), pc.fields().len());
    for i in 0..pc.num_points() {
        let x = pc.field_f32("x", i);
        let y = pc.field_f32("y", i);
        let z = pc.field_f32("z", i);
    }
}
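
A common per-point computation is the Euclidean range from the sensor origin. The math is independent of the PointCloud API; a minimal sketch, with plain floats standing in for the `field_f32` reads above:

```rust
/// Euclidean range of a point from the sensor origin.
fn range(x: f32, y: f32, z: f32) -> f32 {
    (x * x + y * y + z * z).sqrt()
}
```

Inside the receive loop above this would be called as `range(pc.field_f32("x", i), pc.field_f32("y", i), pc.field_f32("z", i))` for each point index `i`.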

DepthImage

Represents a depth map, typically from an RGBD or stereo camera.

// Create a 640x480 depth image with millimeter-precision U16 values
let depth = DepthImage::millimeters(640, 480);

// Publish
let topic: Topic<DepthImage> = Topic::new("camera.depth")?;
topic.send(&depth);

// Receive and query depth
if let Some(d) = topic.recv() {
    let depth_mm = d.depth_at(320, 240);
    println!("Center depth: {} mm", depth_mm);
}
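
Since DepthImage::millimeters stores U16 millimeter values, converting a reading to meters is a division by 1000. A minimal sketch; treating a zero reading as "no return" is an assumption here — check your sensor's convention:

```rust
/// Convert a raw millimeter depth reading to meters.
/// Returns None for a zero reading, which many RGBD sensors use
/// to mean "no valid depth" (assumption; verify for your device).
fn depth_to_meters(depth_mm: u16) -> Option<f32> {
    if depth_mm == 0 {
        None
    } else {
        Some(depth_mm as f32 / 1000.0)
    }
}
```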

TensorDtype

Element data type used when constructing PointCloud, DepthImage, and other tensor-backed types.

Dtype   Size (bytes)   Use Case
F32     4              ML training/inference
F64     8              High-precision computation
F16     2              Memory-efficient inference
BF16    2              Training on modern GPUs
U8      1              Images
U16     2              Depth sensors (mm)
U32     4              Large indices
U64     8              Counters, timestamps
I8      1              Quantized inference
I16     2              Audio, sensor data
I32     4              General integer
I64     8              Large signed values
Bool    1              Masks
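
The element sizes above determine how much memory a tensor-backed type needs: element count × bytes per element. A sketch of that arithmetic — `element_size_of` mirrors the documented `.element_size()` values but is an illustrative stand-in, not the horus API:

```rust
/// Bytes per element for each dtype, per the table above.
/// Illustrative stand-in for TensorDtype::element_size().
fn element_size_of(dtype: &str) -> usize {
    match dtype {
        "U8" | "I8" | "Bool" => 1,
        "F16" | "BF16" | "U16" | "I16" => 2,
        "F32" | "U32" | "I32" => 4,
        "F64" | "U64" | "I64" => 8,
        _ => panic!("unknown dtype"),
    }
}

/// Buffer size for a point cloud: points × fields × element size.
fn cloud_bytes(num_points: usize, num_fields: usize, dtype: &str) -> usize {
    num_points * num_fields * element_size_of(dtype)
}
```

The 10,000-point XYZ + intensity cloud from the PointCloud example above needs 10,000 × 4 × 4 = 160,000 bytes at F32.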

TensorDtype Methods

let dtype = TensorDtype::F32;
assert_eq!(dtype.element_size(), 4);
assert!(dtype.is_float());
assert!(!dtype.is_signed_int());
println!("{}", dtype);  // "f32"

// DLPack interop
let dl = dtype.to_dlpack();
let back = TensorDtype::from_dlpack(dl.0, dl.1).unwrap();

// Parse from string
let parsed = TensorDtype::parse("float32").unwrap();

Method                                 Returns              Description
.element_size()                        usize                Bytes per element
.is_float()                            bool                 Whether this is a floating-point type (F16, BF16, F32, F64)
.is_signed_int()                       bool                 Whether this is a signed integer type (I8, I16, I32, I64)
TensorDtype::parse(s)                  Option<TensorDtype>  Parse from string ("float32", "uint8", "int16", etc.)
.to_dlpack()                           (u8, u8)             Convert to DLPack type code and bits
TensorDtype::from_dlpack(code, bits)   Option<TensorDtype>  Create from DLPack type code and bits

Device

Device is a location descriptor that tags tensors with a target device. Device::cuda(N) creates a descriptor only; actual GPU tensor pools are not yet implemented.

Device::cpu()      // CPU / shared memory
Device::cuda(0)    // CUDA device descriptor (metadata only)

// Parse from string
let cpu = Device::parse("cpu").unwrap();
let dev = Device::parse("cuda:0").unwrap();

// Check device type
assert!(Device::cpu().is_cpu());
assert!(Device::cuda(0).is_cuda());

Method               Returns         Description
Device::cpu()        Device          CPU device
Device::cuda(index)  Device          CUDA GPU device with the given index
.is_cpu()            bool            Whether this is a CPU device
.is_cuda()           bool            Whether this is a CUDA device
Device::parse(s)     Option<Device>  Parse from string ("cpu" or "cuda:0")
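
Device::parse accepts the strings "cpu" and "cuda:N". A hedged sketch of what such parsing involves — the `ParsedDevice` enum and `parse_device` function below are illustrative, not the actual horus implementation:

```rust
/// Simplified device descriptor mirroring the documented API surface.
/// Illustrative only; not the horus Device type.
#[derive(Debug, PartialEq)]
enum ParsedDevice {
    Cpu,
    Cuda(u32),
}

/// Parse "cpu" or "cuda:N". The real Device::parse may accept more forms.
fn parse_device(s: &str) -> Option<ParsedDevice> {
    if s == "cpu" {
        return Some(ParsedDevice::Cpu);
    }
    // "cuda:N" — strip the prefix, then parse the device index.
    let index = s.strip_prefix("cuda:")?.parse::<u32>().ok()?;
    Some(ParsedDevice::Cuda(index))
}
```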

Python Interop

Image, PointCloud, and DepthImage are available in Python with NumPy zero-copy access.

import horus
import numpy as np

# Subscribe to camera images
topic = horus.Topic("camera.rgb", horus.Image)
img = topic.recv()
if img is not None:
    print(f"{img.width}x{img.height}, encoding: {img.encoding}")
    arr = img.to_numpy()  # Zero-copy NumPy view

# Subscribe to point clouds
pc_topic = horus.Topic("lidar.points", horus.PointCloud)
pc = pc_topic.recv()
if pc is not None:
    points = pc.to_numpy()  # (N, fields) NumPy array
    print(f"{pc.num_points} points")

# Subscribe to depth images
depth_topic = horus.Topic("camera.depth", horus.DepthImage)
depth = depth_topic.recv()
if depth is not None:
    depth_arr = depth.to_numpy()  # (H, W) NumPy array

See Also