New Release: Ground Segmentation Library + ROS 2 Integration!

GSeg3D: High-Precision Grid-Based Ground Segmentation for Safety-Critical Autonomous Driving and Robotics Applications

Paper DOI: https://doi.org/10.1109/ICRCV67407.2025.11349133

We’re excited to announce the release of our Ground Segmentation C++ library and its native ROS 2 support, designed to make reliable ground extraction easier and more efficient for robotics and autonomous systems!

:link: Explore the projects on GitHub:


:bulb: What’s Inside?

Ground Segmentation

  • A lightweight and flexible C++ library for extracting ground surfaces from 3D point clouds.

  • Ideal for mobile robots, perception pipelines, and real-world navigation tasks.

Ground Segmentation ROS2

  • Seamless integration with ROS 2 (Humble and beyond).

  • Ready-to-use ROS 2 nodes and interfaces.

  • Publishes segmented ground and non-ground point clouds for downstream perception stacks.


:rocket: Why It Matters

Ground extraction is a fundamental step in many robotic pipelines, from SLAM to motion planning. Our implementation emphasizes accuracy, performance, and ease of use, so you can focus on building higher-level autonomy.
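To give a feel for the grid-based idea, here is a deliberately simplified, standalone C++ sketch (an illustration only, not GSeg3D's actual algorithm): bin points into XY cells and flag a cell as ground when the vertical spread of its points stays below a threshold. The real pipeline described in the paper is considerably more involved.

```cpp
#include <algorithm>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Point { double x, y, z; };

// Simplified grid-based ground test: a cell counts as "ground" when the
// z-range of its points is at most maxDeviation. Illustrative sketch only.
std::map<std::pair<int, int>, bool>
classifyCells(const std::vector<Point>& cloud,
              double cellSize, double maxDeviation) {
    // Track the min/max height seen in each XY cell.
    std::map<std::pair<int, int>, std::pair<double, double>> zRange;
    for (const auto& p : cloud) {
        auto key = std::make_pair(static_cast<int>(std::floor(p.x / cellSize)),
                                  static_cast<int>(std::floor(p.y / cellSize)));
        auto it = zRange.find(key);
        if (it == zRange.end()) {
            zRange[key] = {p.z, p.z};
        } else {
            it->second.first  = std::min(it->second.first,  p.z);
            it->second.second = std::max(it->second.second, p.z);
        }
    }
    // A flat cell (small height spread) is labeled ground.
    std::map<std::pair<int, int>, bool> ground;
    for (const auto& [key, mm] : zRange)
        ground[key] = (mm.second - mm.first) <= maxDeviation;
    return ground;
}
```

A sketch like this also hints at the failure mode discussed further down the thread: a sparse, leafless bush that returns only one or two points per cell shows little apparent height spread, so a naive spread test can mislabel it as ground.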

Whether you’re working on autonomous driving, field robots, or research, we hope this tool accelerates your development.


:hammer_and_wrench: Get Started

:inbox_tray: Clone, build, and integrate with your system

:speech_balloon: Feedback and contributions are highly welcome!

#ROS2 #Robotics #PointCloud #Perception #OpenSource #C++


Is it only for indoor or also outdoor scenarios?

Both indoor and outdoor. Give it a try and see for yourself :slight_smile:


@peci1 FYI

Thanks! A colleague of mine tested your segmentation package on our outdoor dataset from a Livox Mid360 lidar, and in general we were quite impressed.

Our idea was to use this segmentation to find “surely traversable” terrain and then to use our MonoForce approach on the unsure/geometrically untraversable parts.

However, we’ve noticed that GSeg3D sometimes marks even 1-meter-tall bushes as traversable (though the segmentation usually flickers in these areas). After reading the paper, I don’t really understand how that could happen, because the height variation in bushes is usually quite noticeable. Since it is winter, the bushes are leafless, so maybe there are simply too few points measured on them?

Thanks a lot for the detailed feedback, that’s very helpful.

Could you please share a short sample of the dataset (e.g., a rosbag snippet or a few PCD/PLY frames) where the bushes are classified as traversable? I’d be happy to test it on my side.

It might be parameter-related (e.g., cellSizeX/Y/Z, maxGroundHeightDeviation, etc.), especially with sparse winter vegetation and Livox sampling patterns. With a small sample, I can try a few configurations and better understand what’s happening.
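For reference, ROS 2 parameters like these are typically tuned via a node parameter YAML file. The parameter names below are the ones mentioned above; the node name and all values are purely illustrative assumptions, so please check the package's own config files for the actual names and defaults:

```yaml
# Illustrative values only -- see the package's shipped config for real defaults.
ground_segmentation_node:        # node name is an assumption
  ros__parameters:
    cellSizeX: 0.25              # grid cell size in meters (example value)
    cellSizeY: 0.25
    cellSizeZ: 0.25
    maxGroundHeightDeviation: 0.15
```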

Also, could you please confirm whether you’re using the ROS 2 node with the dual-segmentation setup (Phase I + Phase II)? That can influence how vegetation and sparse structures are handled.

Looking forward to taking a closer look.
