# Python Memory Types
Pool-backed types for sharing large sensor data between nodes with zero-copy IPC. Only a small descriptor (64-168 bytes) travels through the ring buffer; actual data stays in shared memory.
| Type | Use case | Create | See |
|---|---|---|---|
| Tensor | Custom data: costmaps, feature maps, state vectors | Tensor([1000, 1000]) | Full reference |
| Image | Camera frames (RGB, BGR, grayscale, Bayer) | Image(480, 640, "rgb8") | Full reference |
| PointCloud | LiDAR scans, 3D data (XYZ, XYZI, XYZRGB) | PointCloud(10000, 3) | Full reference |
| DepthImage | Depth maps (F32 meters, U16 millimeters) | DepthImage(480, 640) | Full reference |
All four types support zero-copy conversion to NumPy, PyTorch, and JAX via DLPack:
```python
# simplified
np_array = img.to_numpy()                          # zero-copy
torch_tensor = torch.from_dlpack(img.as_tensor())  # zero-copy via DLPack
jax_array = img.to_jax()                           # zero-copy via DLPack
```
`to_*()` and `.as_tensor()` methods are zero-copy (~3 µs). `from_*()` methods copy once into shared memory.
## Tensor Bridge: `.as_tensor()`
Every domain type can be converted to a Tensor for full Pythonic operations. The conversion is zero-copy -- the returned Tensor is a view over the same shared memory:
```python
# simplified
img = Image(480, 640, "rgb8")
t = img.as_tensor()        # shape=[480, 640, 3], dtype=uint8
t[0:10] += 128             # brighten top rows (writes to SHM)
bright = img + 50          # arithmetic returns Tensor
pt = torch.from_dlpack(t)  # direct to PyTorch (zero-copy)

cloud = PointCloud(10000)
t = cloud.as_tensor()      # shape=[10000, 3], dtype=float32
cloud[0]                   # first point (direct indexing)
len(cloud)                 # 10000

depth = DepthImage(480, 640)
t = depth.as_tensor()      # shape=[480, 640], dtype=float32
```
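The write-through behavior of the bridge can be mimicked with plain NumPy views -- a sketch in which one array plays the role of the Image's pool slot and a same-memory view plays the role of the Tensor (`as_tensor()` itself is a HORUS API and is not called here):

```python
import numpy as np

img_buf = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for the Image's slot
t = img_buf.reshape(480, 640, 3)                   # stand-in for as_tensor(): a view

t[0:10] += 128                                     # mutate through the view
assert img_buf[0, 0, 0] == 128                     # the backing buffer saw the write
assert np.shares_memory(img_buf, t)
```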
## Design Decisions
Why pool-backed instead of heap-allocated? Pool-backed memory enables cross-process sharing. A heap-allocated NumPy array must be serialized for IPC. Pool-backed types live in shared memory from the start, so `topic.send()` copies only the descriptor, not megabytes of pixel data.
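The shape of this design can be sketched with the stdlib -- HORUS uses a pool plus a ring buffer rather than named `SharedMemory` segments, so this only mirrors the idea that the payload stays put while a tiny descriptor travels:

```python
import numpy as np
from multiprocessing import shared_memory

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Place the payload in shared memory once, at creation time.
shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
tx = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
tx[:] = frame  # the one copy

# This tuple is all a send() would need to transmit -- a few dozen bytes.
descriptor = (shm.name, frame.shape, str(frame.dtype))

# "Receiver" side: reattach by name; no pixel data crosses the channel.
rx = shared_memory.SharedMemory(name=descriptor[0])
rx_view = np.ndarray(descriptor[1], dtype=descriptor[2], buffer=rx.buf)
match = bool(np.array_equal(rx_view, frame))

# Views must be dropped before the mappings can be torn down.
del rx_view, tx
rx.close()
shm.close()
shm.unlink()
```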
Why DLPack for PyTorch/JAX? DLPack is the standard protocol for zero-copy tensor exchange across ML frameworks (NumPy 1.25+, PyTorch 1.10+, JAX 0.4+, CuPy, TensorFlow). One protocol covers all frameworks.
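At the protocol level, DLPack is just two methods on the producer object, which any consumer (`np.from_dlpack`, `torch.from_dlpack`, `jax.dlpack.from_dlpack`) can ingest -- shown here with NumPy as both producer and consumer:

```python
import numpy as np

a = np.arange(6, dtype=np.float32)
assert hasattr(a, "__dlpack__")          # exports the tensor
assert hasattr(a, "__dlpack_device__")   # reports where it lives (CPU/GPU)

b = np.from_dlpack(a)        # new array object over the same memory
assert np.shares_memory(a, b)
```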
Why does `from_numpy()` copy when `to_numpy()` doesn't? Publishing requires placing data at a specific pool slot. A NumPy array at an arbitrary heap address can't be shared, so `from_numpy()` copies once into the pool. `to_numpy()` returns a view into the already-shared memory -- no copy.
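The asymmetry reduces to "data must land at the slot's address" -- a sketch with a preallocated buffer standing in for a pool slot (`from_numpy`/`to_numpy` here are illustrative stand-ins, not the HORUS signatures):

```python
import numpy as np

pool_slot = np.empty((480, 640, 3), dtype=np.uint8)  # stand-in for a pool slot

def from_numpy(arr):
    pool_slot[:] = arr   # data must land AT the slot's address: copy once
    return pool_slot

def to_numpy():
    return pool_slot[:]  # a view into the slot: no copy

heap_arr = np.full((480, 640, 3), 7, dtype=np.uint8)
stored = from_numpy(heap_arr)
view = to_numpy()

assert not np.shares_memory(stored, heap_arr)  # copied in
assert np.shares_memory(view, pool_slot)       # viewed out
```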
Thread safety: Pool-backed types use atomic reference counting. NumPy/PyTorch views should not outlive the source object -- when the HORUS type is dropped, the pool slot may be reclaimed.
## See Also
- Tensor -- General-purpose tensor with Pythonic API
- Image -- Camera images with encoding support
- PointCloud -- 3D point clouds with format queries
- DepthImage -- Depth maps with typed access
- ML Utilities -- ML framework integration