Traversability_generator3d: 3D Traversability from MLS Maps (JOSS 2026)

Hi all,
We recently published a paper in JOSS on traversability_generator3d, a C++ library that generates 3D traversability maps from Multi-Level Surface (MLS) maps.

DOI: traversability_generator3d: A C++ library for 3D traversability estimation from MLS maps (Journal of Open Source Software)

The library converts MLS maps into a TraversabilityMap3d that includes:

  • Plane fitting and slope estimation (RANSAC-based)
  • Step height evaluation (AABB / OBB checks)
  • Orientation-dependent motion constraints
  • Footprint-aware obstacle inflation
  • Frontier detection and bounded map expansion

It also optionally supports soil-aware traversability when soil information is available:

  • Each cell maintains soil probability distributions
  • Soil types (e.g., sand, gravel, rocks, concrete, unknown) influence traversal cost
  • Gaussian spatial propagation models soil uncertainty
  • Specific soils can be configured as forbidden,
    automatically converting cells to obstacles
  • Geometric and semantic costs are combined in a unified 3D map structure

Unlike 2.5D elevation maps, MLS preserves multiple surface hypotheses per cell, enabling reasoning about overlapping terrain layers such as rubble, vegetation, tunnels, or bridges.

The output map classifies cells as:

  • TRAVERSABLE
  • OBSTACLE
  • FRONTIER
  • INFLATED_OBSTACLE / INFLATED_FRONTIER
  • UNKNOWN / UNSET

If you have a PLY or PCD map available, feel free to run it through the pipeline and try the visualization to inspect the generated traversability layers and node classifications.

4 Likes

This is really interesting, thanks for sharing!
Just out of curiosity, how are soil-aware traversability and orientation-dependent traversability computed? Does the traversability estimation also take wheel parameters into consideration?

Thanks for the question :slightly_smiling_face:

Soil-aware traversability is optional. The user can provide soil samples, which are spatially propagated probabilistically across the map. Each cell then maintains class probabilities (e.g., sand, gravel, concrete, rocks). If soil information is available, it influences traversal cost and, depending on configuration, can also mark certain soil types as non-traversable. If no samples are provided, the system still works purely geometrically.

Orientation-dependent traversability is derived from a locally fitted ground plane. From that, slope magnitude and steepest descent direction are computed. If the slope exceeds a threshold, only specific heading intervals aligned with the slope are allowed, restricting traversal directions on steeper terrain.

In simple terms:

Each cell does not just say “traversable or not.”
It can also say “you can only face certain directions here.”

  • If the terrain is flat → you can face any direction.

  • If the terrain is moderately sloped → you are only allowed to face roughly downhill (and optionally uphill), not sideways.

  • If the slope is too steep → the cell is not traversable at all.

These allowed directions are stored per cell as one or two continuous angle ranges, and a planner such as ugv_nav4d (GitHub - dfki-ric/ugv_nav4d: A 4D (X, Y, Z, Theta) Planner for Unmanned Ground Vehicles) can use them during planning.

Regarding wheel parameters: they are not modeled explicitly. However, robot geometry (length, width, height), maximum step height, and slope limits are considered, so platform constraints are incorporated geometrically rather than via detailed wheel–terrain interaction modeling.

2 Likes

Thanks for the explanation!

One thing though, traversability depends heavily on wheel-soil interaction. It’s not only about “is this surface geometrically feasible?” but also “given my wheels and this soil, will I actually maintain traction here?” The same slope can be fine for one robot and a dead end for another, depending on slip and sinkage.

That said, your pipeline is really well done and nicely structured.

On our side, we’ve been working on ML models trained on Bekker-Wong terramechanics for multiple soil types. Your soil-aware layer looks like a natural fit: instead of fixed costs per soil class, our models could provide physics-based costs that account for the actual wheel parameters.

Would be great to chat about this if you’re interested!

Thanks! Your work sounds very promising. I contributed to the soil layer within the NoStrandAMust project, which focused on developing and training ML models to identify soil types using the robot’s proprioceptive data. The goal, as you mentioned, was to enable a highly mobile robot like Crex to explore diverse terrains and generate a detailed soil map. Subsequently, a robot with more limited mobility across varying ground conditions, such as a wheeled platform, could leverage this soil map to avoid potentially hazardous areas, for example a rover encountering fine sand and risking getting stuck in it.

1 Like

And yes, the soil semantics are a key factor in traversability on top of the geometric information. I am open to discussing this with you because, as you can see in this thread, many decisions regarding traversability require some form of ground information such as friction, sinkage, and slip. What sensory information do you need from the wheels for your ML models?

Our ML models are trained on Bekker-Wong terramechanics simulations and take 4 inputs: terrain type, slope, approach angle, and commanded velocity. From these, they predict wheel slip and sinkage, which we convert into traversability scores for path planning.

We use 4 predefined soil types, each encoded with a specific color. Each color maps to a set of Bekker parameters (cohesion, friction angle, etc.), so the soil classification directly drives the terramechanics model.

During training, we use raycasting under each wheel to get the terrain color at the contact point, which gives us the exact soil type for generating training data.

In deployment with a known map, the terrain mesh (STL) with color-encoded soils is provided beforehand. The ML model predicts slip for each graph edge, those predictions become traversability-weighted edge costs, and Dijkstra finds the least-cost path.

In deployment with an unknown map, we use a multi-layer grid map where each layer stores a specific ML input (soil type, slope, etc.). An RGB camera classifies terrain color ahead of the rover, and the traversability map is built incrementally as the robot explores.

To bridge the gap between simulation and the real world, I am currently developing Forest3D, a tool that generates realistic environments with PBR materials for each soil type. This replaces simple color-encoded terrains and allows us to generate richer training data by coupling realistic soil visuals with the terramechanics plugin. With PBR-quality visuals, semantic segmentation can be used to classify terrain types in a way that transfers more naturally to real-world imagery.

The remaining challenge is detecting the soil directly under the wheels, since an onboard RGB camera alone can’t see there. A collaborative drone providing an overhead view could fill that gap.

We’re still working on the full publication, but a preliminary overview is presented in Section 2.2 here: https://doi.org/10.3390/machines14010099

You can find the Forest3D tool here.

2 Likes