Organization: Open Robotics
Mentor: Arjo Chakravarty
Student: Shashank Rao (Github, LinkedIn)
Link to GSoC project: Google Summer of Code
Hello everyone,
This summer, as part of the Google Summer of Code, I’ve been working on gz-wgpu-rt-lidar: a new sensor plugin that brings hardware-accelerated, vendor-agnostic ray-tracing to Gazebo for physically-accurate LiDAR and Depth Camera simulation.
The goal was to move beyond traditional rasterization-based sensors, which can struggle with 360° coverage and performance on complex scenes. By casting rays directly against scene geometry, this plugin more closely simulates real-world sensor physics and delivers excellent performance on any modern GPU with ray-tracing capabilities (NVIDIA, AMD, Intel).
Key features
- Custom Ray-Tracing Sensors: Easily add `rt_lidar` and `rt_camera` sensors to your SDF models.
- Vendor-Agnostic Performance: Built with Rust and WGPU, the plugin leverages hardware ray-tracing on any compatible GPU.
- Broad Geometry Support: Accurately renders scenes with meshes, boxes, and planes.
- Dynamic World & Multi-Sensor Support: Run multiple sensors simultaneously; the scene is rebuilt automatically when models are added, removed, or moved, so sensors always see the world as it changes.
- Non-Blocking Architecture: A dedicated worker thread handles rendering, preserving a high Real-Time Factor (RTF) in Gazebo.
- Full ROS 2 Integration: Includes a launch file to bridge sensor data to ROS 2 topics for use with tools like RViz2.
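To illustrate, a sensor declaration might look like the following SDF fragment. The element and attribute names here are illustrative only; check the repository's examples (e.g. examples/demo.sdf) for the plugin's exact schema:

```xml
<model name="lidar_model">
  <link name="lidar_link">
    <!-- Hypothetical custom sensor element; the real tag and
         parameter names are defined by the plugin's SDF schema. -->
    <sensor name="front_lidar" type="custom" gz:type="rt_lidar">
      <update_rate>10</update_rate>
    </sensor>
  </link>
</model>
```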
Architecture:
- Gazebo plugin: A C++ system plugin discovers custom sensors in SDF, parses parameters, builds a ray-tracing scene from world geometry, and enqueues render jobs on a dedicated worker thread to preserve the real-time factor.
- FFI bridge: The plugin communicates with a Rust staticlib via a C API to create/update scenes, render frames, and exchange typed buffers.
- Rust backend: The backend uses wgpu to build BLAS/TLAS acceleration structures and dispatch compute pipelines for LiDAR and depth.
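To make the FFI bridge concrete, here is a minimal sketch of what a C-compatible scene API on the Rust side could look like. All names, types, and signatures below are hypothetical stand-ins, not the plugin's actual interface:

```rust
use std::os::raw::c_float;

/// Opaque scene handle passed back and forth across the C boundary
/// (hypothetical layout; the real handle wraps GPU acceleration structures).
pub struct RtScene {
    pub vertices: Vec<[f32; 3]>,
}

/// Allocate a scene and hand ownership to the C++ side as a raw pointer.
#[no_mangle]
pub extern "C" fn rt_scene_create() -> *mut RtScene {
    Box::into_raw(Box::new(RtScene { vertices: Vec::new() }))
}

/// Append one vertex to the scene's geometry buffer.
#[no_mangle]
pub unsafe extern "C" fn rt_scene_add_vertex(
    scene: *mut RtScene,
    x: c_float,
    y: c_float,
    z: c_float,
) {
    if let Some(s) = scene.as_mut() {
        s.vertices.push([x, y, z]);
    }
}

/// Reclaim ownership and free the scene.
#[no_mangle]
pub unsafe extern "C" fn rt_scene_destroy(scene: *mut RtScene) {
    if !scene.is_null() {
        drop(Box::from_raw(scene));
    }
}
```

The raw-pointer handle pattern keeps the C API small: C++ only ever sees an opaque pointer, while all allocation and deallocation stay on the Rust side.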
Demonstration
Here is a look at the plugin running in the demo.sdf world, showing both the Gazebo simulation and the live LiDAR visualization in RViz2.
Performance Benchmarks
A key advantage of this approach is performance scaling. Unlike rasterization, which slows down significantly as scene complexity (vertex count) increases, the ray-traced method remains consistently fast. The performance gap widens dramatically in large environments.
These preliminary benchmarks on an RTX 3060 Mobile show that the ray-tracing pipeline maintains a much lower render time as the number of vertices in the scene grows into the millions.
Project Journey & Contributions
The development was an iterative process of building core features, refactoring for performance, and ensuring a smooth user experience. Key contributions include:
- Adding initial support for SDF plane and box geometry.
- Implementing mesh geometry support for complex models.
- Refactoring the rendering logic into a multi-threaded architecture so it does not block the main simulation loop.
- Fixing critical bugs related to mesh synchronization and segfaults.
- Adding and documenting several new examples for easy testing.
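The non-blocking worker-thread design mentioned above can be sketched with a plain Rust thread and channel. This is a simplified stand-in, not the plugin's actual code: the real worker dispatches GPU compute jobs, whereas here a dummy range buffer is returned:

```rust
use std::sync::mpsc;
use std::thread;

/// A render request sent from the simulation loop to the worker.
struct RenderJob {
    sensor_id: u32,
    /// Channel on which the worker returns the rendered range buffer.
    reply: mpsc::Sender<Vec<f32>>,
}

/// Spawn the render worker and return the job queue's sending half.
/// The worker exits cleanly once all senders are dropped.
fn spawn_render_worker() -> mpsc::Sender<RenderJob> {
    let (tx, rx) = mpsc::channel::<RenderJob>();
    thread::spawn(move || {
        for job in rx {
            // Placeholder for the GPU dispatch: a real implementation would
            // trace rays for `job.sensor_id` against the acceleration structure.
            let _ = job.sensor_id;
            let ranges = vec![0.0_f32; 4];
            let _ = job.reply.send(ranges);
        }
    });
    tx
}
```

Because the simulation loop only enqueues jobs and never waits on the GPU, the physics step can keep running at full rate, which is what preserves the high RTF.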
Try It Yourself!
I’d love for you to try it out!
Requirements:
- ROS 2 Jazzy
- A recent Rust toolchain
- A ray-tracing capable GPU (NVIDIA RTX, AMD RX 6000+, etc.)
Build and Run:
# Clone, install dependencies, and build
git clone https://github.com/arjo129/gz_wgpu_rt_lidar.git
cd gz_wgpu_rt_lidar
rosdep install --from-paths . --ignore-src -y
colcon build
# Run a demo
source install/setup.bash
gz sim examples/demo.sdf
To visualize in RViz2, use the provided ROS 2 bridge:
ros2 launch gz_wgpu_rt_lidar demo_bridge.launch.py
A huge thank you to my mentor, Arjo Chakravarty, and to Open Robotics for this fantastic GSoC opportunity!
Please check out the repository, give it a try, and share any feedback.
Thank you!


