I am currently working on optimizing high-bandwidth sensor data transmission (specifically LiDAR point clouds) using ROS 2 and Iceoryx for zero-copy communication.
I have successfully set up the Iceoryx environment and confirmed zero-copy works for fixed-size types. However, I am facing challenges when applying this to variable-size messages, such as sensor_msgs/msg/PointCloud2.
As I understand it, Iceoryx typically requires pre-allocated memory pools with fixed-size chunks. In the case of PointCloud2, the payload size varies with the number of points the LiDAR returns (in my case, around 5.2 MB per message).
I have two specific questions:
1. Best practices for variable-size data like PointCloud2
How should we handle messages where the size is not strictly fixed at compile-time while still maintaining zero-copy benefits? Should we always pre-allocate the "worst-case" maximum size for the underlying buffers? If anyone has implemented this for sensor_msgs/msg/PointCloud2 or similar dynamic types, I would appreciate any advice or examples.
2. Tuning RouDi Configuration (size and count)
Regarding the roudi_config.toml (or the RouDi memory pool setup), what is the general rule of thumb for determining the optimal size and count?
For high-resolution LiDAR data:
How do you balance between the number of chunks (count) and the buffer size for each chunk to avoid memory exhaustion without being overly wasteful?
Are there any common pitfalls when setting these values for a system with multiple subscribers?
I've already got Iceoryx installed and basic IPC working, but I want to ensure my configuration is production-ready for large-scale sensor data.
Hi there! I can't speak to the RouDi configuration, but…
How should we handle messages where the size is not strictly fixed at compile-time while still maintaining zero-copy benefits?
The standard answer is: you don't. If you have control over the middleware, you can try to implement the rmw_publisher_allocation_t and rmw_subscription_allocation_t APIs to pre-allocate and then loan your messages both ways. If you have control over the messages, you can craft your own and ensure they are fixed size. IIRC that's what the first demos for rmw_iceoryx used to do (see fixed_size_ros2_demo/fixed_size_msgs/msg at master · Karsten1987/fixed_size_ros2_demo · GitHub).
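To make the second option concrete, here is a minimal sketch of the "fixed size + loan" pattern. The message package, type, and field (fixed_size_msgs::msg::PointCloud2Fixed with a worst-case-sized data array) are hypothetical stand-ins, not something rmw_iceoryx ships; only the rclcpp loaning calls are real API:

```cpp
#include <utility>

#include <rclcpp/rclcpp.hpp>

// Hypothetical fixed-size message: a PointCloud2-like type whose data field
// is a fixed-capacity array sized for the worst case (~5.2 MB here). The
// package, type, and field names are illustrative assumptions.
#include "fixed_size_msgs/msg/point_cloud2_fixed.hpp"

using FixedCloud = fixed_size_msgs::msg::PointCloud2Fixed;

class CloudPublisher : public rclcpp::Node
{
public:
  CloudPublisher()
  : rclcpp::Node("cloud_publisher")
  {
    pub_ = create_publisher<FixedCloud>("points", rclcpp::SensorDataQoS());
  }

  void publish_cloud()
  {
    if (pub_->can_loan_messages()) {
      // The loaned message is backed by a shared-memory chunk owned by the
      // middleware; filling it in place avoids any publisher-side copy.
      auto loaned = pub_->borrow_loaned_message();
      loaned.get().point_count = 0;  // hypothetical field: fill with real data
      pub_->publish(std::move(loaned));
    } else {
      // Fallback for transports that cannot loan messages (no zero-copy path).
      FixedCloud msg;
      pub_->publish(msg);
    }
  }

private:
  rclcpp::Publisher<FixedCloud>::SharedPtr pub_;
};
```

The trade-off is that every chunk is sized for the worst case, so you pay the maximum memory footprint per message in exchange for never copying the payload.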
Otherwise, intraprocess comms in rclcpp is a popular escape hatch.
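As a rough sketch of that escape hatch (node and topic names made up), composing everything into one process and enabling intra-process comms lets rclcpp move a std::unique_ptr between nodes instead of copying the ~5 MB payload:

```cpp
#include <memory>
#include <utility>

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Both nodes must live in the same process and have intra-process comms
  // enabled; ownership of the message is then moved instead of copied.
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);
  auto producer = std::make_shared<rclcpp::Node>("cloud_producer", options);
  auto consumer = std::make_shared<rclcpp::Node>("cloud_consumer", options);

  auto sub = consumer->create_subscription<sensor_msgs::msg::PointCloud2>(
    "points", rclcpp::SensorDataQoS(),
    [](sensor_msgs::msg::PointCloud2::UniquePtr msg) {
      // With a single subscriber, this unique_ptr is the very allocation the
      // publisher created; the point data was never copied.
      RCLCPP_INFO(rclcpp::get_logger("cloud_consumer"),
                  "received %zu bytes", msg->data.size());
    });

  auto pub = producer->create_publisher<sensor_msgs::msg::PointCloud2>(
    "points", rclcpp::SensorDataQoS());

  auto msg = std::make_unique<sensor_msgs::msg::PointCloud2>();
  msg->data.resize(5u * 1024u * 1024u);  // stand-in for a ~5.2 MB cloud
  pub->publish(std::move(msg));          // transfers ownership, no copy

  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(producer);
  exec.add_node(consumer);
  exec.spin_some();
  rclcpp::shutdown();
  return 0;
}
```

Note that the move only stays copy-free as long as a single subscriber takes ownership; with multiple subscribers rclcpp will share or copy the message depending on the callback signatures.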
If you are generally interested in the topic, the Accelerated Memory Transport Working Group is trying to address this. Nvidia is working on pushing large buffers to GPU memory, like Nitros but ROS-native; it should land in time for Lyrical (see Native buffer type). And I'm personally working on a reference implementation of REP-0157, which is still a draft. It should go public at some point later this year.
Thanks for your insightful advice, especially the mention of REP-0157 and the escape hatch.
In my current high-bandwidth LiDAR pipeline, my team and I are debating between ROS 2 Composition (intra-process) and Iceoryx (inter-process).
While Composition seems like a simpler "true zero-copy" solution by sharing pointers within a single process, Iceoryx offers better fault isolation by keeping nodes in separate processes.
Given that I'm handling 5.2 MB PointCloud2 data, which one would you personally recommend as the more "production-ready" approach for long-run stability? Is the serialization overhead in Iceoryx significant enough to favor moving everything into a single container?
Regarding the RouDi configuration: unfortunately, there is no silver bullet; it depends on the overall system. Usually one over-commits and then checks with the introspection tooling what is actually used. Based on that, one can tailor the memory config with some safety margins.
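To illustrate the "over-commit, then tailor" workflow, here is a rough sketch of what a roudi_config.toml mempool layout might look like for ~5.2 MB clouds; the sizes and counts below are placeholder guesses to be tuned against the introspection output, not recommendations:

```toml
[general]
version = 1

[[segment]]

# Small pools for regular topics (tf, diagnostics, ...).
[[segment.mempool]]
size = 1024
count = 1000

[[segment.mempool]]
size = 16384
count = 500

# Worst-case pool for the LiDAR clouds: sized above the 5.2 MB payload to
# leave headroom for the chunk and user headers. The count has to cover the
# publisher history, every subscriber queue, and chunks still held by user
# code, so multiple subscribers multiply the demand.
[[segment.mempool]]
size = 5767168  # 5.5 MiB
count = 32
```

The introspection client that ships with iceoryx (iox-introspection-client) shows per-pool usage at runtime, which is what you would watch while trimming these numbers down.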
We are currently working on iceoryx2, and there is also an rmw_iceoryx2. It is not yet feature-complete, but depending on the feature set you need, it might be enough. We would, however, need to port it to the latest iceoryx2. The RMW is currently a side topic since we lack funding for its development.
Have you thought about combining ROS 2 with iceoryx2? We have some users who run it in parallel with their own stack and use it only for large data such as LiDAR and video. With iceoryx2 we got rid of the central daemon, and memory configuration is done automatically, based on the runtime configuration of the supported publisher, subscriber and queue sizes on a per-topic basis.
For true zero-copy, the ros2_shm_msgs types could be used.