I’d like to share Forest3D, a tool that creates realistic outdoor Gazebo environments from DEM data and Blender assets.
The problem: Existing tools convert DEM/GeoTIFF to terrain meshes, but the result is a bare, empty landscape. Populating it with vegetation and objects requires manual work.
The solution: Forest3D automates the full pipeline: terrain generation, Blender-to-Gazebo asset conversion, and procedural placement of trees, rocks, and bushes.
Key features:
Real terrain from DEM data
Automatic Blender → Gazebo model conversion
Procedural placement of trees, rocks, bushes
Docker image with Blender 4.2 + Gazebo Harmonic included
Forest3D is still experimental and under active development. Contributions, feedback, and testing are very welcome!
Future direction: Pairing Forest3D with our Gazebo Terramechanics Plugin to simulate wheel–soil interaction on generated terrains, enabling end-to-end simulation of off-road robots on realistic terrain, and enabling large-scale synthetic data generation for machine-learning traversability training.
Could you clarify: do you mean using terrain geometry (slope/elevation/curvature) to decide what to place or what textures to apply? Or linking visual appearance with physical soil properties for simulation? Currently, placement uses spatial heuristics (clustering, zone weighting) but doesn’t derive rules from terrain slope/elevation, apart from sampling height so assets sit on the terrain surface.
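For readers curious what that height-sampling step looks like, here is a minimal sketch of the general technique (bilinear DEM interpolation plus random scattering). This is my own illustration, not Forest3D’s actual code; the grid layout and function names are made up:

```python
import random

def sample_height(dem, x, y, cell_size=1.0):
    """Bilinearly interpolate a DEM grid (list of rows) at world (x, y)."""
    gx, gy = x / cell_size, y / cell_size
    x0, y0 = int(gx), int(gy)
    tx, ty = gx - x0, gy - y0
    h00, h10 = dem[y0][x0], dem[y0][x0 + 1]
    h01, h11 = dem[y0 + 1][x0], dem[y0 + 1][x0 + 1]
    top = h00 * (1 - tx) + h10 * tx
    bot = h01 * (1 - tx) + h11 * tx
    return top * (1 - ty) + bot * ty

def place_assets(dem, n, extent, cell_size=1.0, seed=0):
    """Scatter n assets at random (x, y) and drop each onto the terrain surface."""
    rng = random.Random(seed)
    poses = []
    for _ in range(n):
        x = rng.uniform(0, extent)
        y = rng.uniform(0, extent)
        z = sample_height(dem, x, y, cell_size)  # asset sits on the surface
        poses.append((x, y, z))
    return poses
```

Deriving placement rules from slope would be a natural extension of the same loop: compute the local gradient from the neighboring DEM cells and reject or reweight candidate positions accordingly.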
I meant whether your goal is to get something where the procedurally generated surface properties would be consistent with the visual appearance. As an example, moss usually doesn’t provide very good traction and is quite soft. Things like this…
I’m asking because we’re developing the Monoforce model, which does the opposite: estimating terrain properties from previous experience and visual appearance.
Thanks for clarifying! Yeah, that’s exactly where we’re heading. So Forest3D works top-down: DEM to terrain mesh to realistic Blender assets converted to Gazebo with procedural placement. The terramechanics plugin is separate: it handles wheel–soil physics using Bekker-Wong parameters. Right now we use colored zones as placeholders for testing (red for Mars sand, green for clay, etc.). It works for development, but obviously colored blocks aren’t realistic. The idea is to swap those out for actual Blender assets like moss, rock, and sand textures, and have each one carry its own soil parameters. So when the robot drives over moss, the plugin knows to apply soft, low-traction behavior automatically. The missing part: how do we know what parameters to assign to each visual asset? We’d need labeled data mapping appearance to soil properties.
By the way, your model sounds interesting; it could help me fill that gap!
Curious: is Monoforce trained on real traversal data, sim, or both? Does it output Bekker-Wong style params or something else like traversability cost? Got a paper or repo I could check out?
That’s exactly the hard problem. We concluded we can’t solve it directly; that’s why we developed Monoforce, which doesn’t need these values thanks to self-supervision.
Monoforce is trained on real data only. There is, however, a differentiable simulation model employed, which enables computing useful self-supervision signals. So far, our model is quite simple: the terrain is just a spring-damper grid. It supports articulated, wheeled, and simple tracked robots. But it can be as advanced as you can go with differentiable physics. This is quite a new field, so we’re still exploring what could be done and achieved. Basically, each of our Ph.D. students writes their own differentiable simulator.
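For readers unfamiliar with the idea: a spring-damper terrain grid computes a normal contact force at each wheel–ground contact from the local sinkage and its rate. A toy one-dimensional version of the general technique (my own illustration, not Monoforce’s code; the stiffness and damping values are arbitrary) might look like:

```python
def spring_damper_force(terrain_h, contact_z, contact_vz, k=5e4, c=2e3):
    """Normal force from a spring-damper ground contact model.

    terrain_h:  terrain height at the contact point (m)
    contact_z:  height of the contact point (m)
    contact_vz: vertical velocity of the contact point (m/s)
    k, c:       spring stiffness (N/m) and damping (N*s/m)
    """
    sinkage = terrain_h - contact_z      # penetration depth into the terrain
    if sinkage <= 0.0:
        return 0.0                       # not in contact
    f = k * sinkage - c * contact_vz     # spring pushes up, damper resists motion
    return max(f, 0.0)                   # ground can only push, never pull
```

Because this is a smooth function of k and c (away from the contact boundary), gradients of simulated trajectories with respect to the terrain parameters are available through autodiff, which is what makes the self-supervision signal computable.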
I don’t know how to reply to each part of your comment the way you do; I’m still new here.
Thanks for the links and papers! I’ll dig deeper soon, but yeah, real data with differentiable physics for self-supervision makes sense when you need gradients through the physics. And good luck to your Ph.D. students with their simulators, by the way!
We’re coming from the opposite direction as you mentioned before. We started with real-world data but moved to simulation for more control and diversity, which is how the terramechanics plugin came about. We’ve used it to generate datasets for training a traversability prediction model. Forest3D then came to make this workflow easier with realistic assets out of the box.
Concerning labeled data mapping appearance to soil properties, I would like to create an asset library where each visual asset has default Bekker-Wong parameters (k, φ, c, K, n, …) from published measurements, for example:
moss.blend → Clay (Bekker, 1969)
sand.blend → Dry sand (Wong, 2008)
mars_sand.blend → Mars simulant (NASA JSC-1A)
Users can then override with custom params of course. This would benefit from community contributions over time.
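As a sketch of how such a library entry might be structured, here is a hypothetical Python version (the field names and numeric values below are illustrative placeholders, not the published measurements themselves), together with the classic Bekker pressure–sinkage relation p = (k_c/b + k_phi) * z^n that these parameters feed into:

```python
from dataclasses import dataclass

@dataclass
class SoilParams:
    """Bekker-Wong parameters attached to one visual asset (illustrative values)."""
    k_c: float    # cohesive modulus of terrain deformation
    k_phi: float  # frictional modulus of terrain deformation
    n: float      # sinkage exponent (dimensionless)
    c: float      # cohesion (kPa)
    phi: float    # internal friction angle (deg)
    K: float      # shear deformation modulus (m)
    source: str   # citation for the measurement

# Placeholder entries; real defaults would come from published soil tables.
ASSET_LIBRARY = {
    "moss.blend": SoilParams(k_c=13.19, k_phi=692.15, n=0.5,
                             c=4.14, phi=13.0, K=0.01, source="Bekker 1969 (clay)"),
    "sand.blend": SoilParams(k_c=0.99, k_phi=1528.43, n=1.1,
                             c=1.04, phi=28.0, K=0.025, source="Wong 2008 (dry sand)"),
}

def bekker_pressure(soil: SoilParams, z: float, b: float) -> float:
    """Bekker pressure-sinkage: p = (k_c / b + k_phi) * z**n.

    z: sinkage (m); b: smaller dimension of the contact patch (m).
    """
    return (soil.k_c / b + soil.k_phi) * z ** soil.n
```

A user override would then just replace the `SoilParams` entry for a given asset before world generation.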
I try to stick close to the Gazebo engine and the Gazebo/ROS community who need realistic off-road simulation out of the box.
By the way, I noticed you have a monoforce_gazebo package. Is it just for visualization/testing, or is there integration with Gazebo physics?
Yeah, and I want to have a look at your plugin. However, looking at the code, it seemed to me it is so far only for Gazebo Classic?
This would be great if Gazebo could generate worlds with realistic parameters and the community would be able to extend this library.
That is just for the basic tests similar to yours (e.g. green patches and red patches with different friction). And also for testing integration of the model to the rest of the system.
Yes, currently it’s for Gazebo Classic and tailored to our rover (Archimede). The roadmap:
Upgrade to Gazebo Harmonic
Make it robot-agnostic (configurable wheel parameters, contact links, etc.)
Yes, that’s the goal! The challenge is scope. Forest3D currently spawns any Blender asset (trees, vegetation, rocks…), but Gazebo struggles with dense realistic environments - FPS drops quickly.
I’m debating between:
One tool: Forest3D handles everything, visual assets plus terramechanics for soil
Two tools: Forest3D for general visual spawning, a separate tool specifically for soil visuals with accurate Bekker-Wong parameters
Not sure yet which approach makes more sense. I’d love to hear your feedback!
Modern Gazebo has the “levels” feature which allows you to load/unload parts of the world based on the positions of actors. This should help if you only simulate a single robot and you don’t need to see too far.
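For reference, a rough sketch of what a levels setup looks like in SDF (model names and sizes here are made up; see the gz-sim levels tutorial for the authoritative syntax, and note the server has to be started with the `--levels` flag):

```xml
<world name="forest">
  <!-- The robot is registered as a performer; its position drives loading -->
  <performer name="robot_performer">
    <ref>my_robot</ref>
    <geometry><box><size>2 2 2</size></box></geometry>
  </performer>

  <!-- Models in this tile are loaded only while a performer is nearby -->
  <level name="tile_0">
    <pose>50 50 0 0 0 0</pose>
    <geometry><box><size>100 100 50</size></box></geometry>
    <buffer>10</buffer>
    <ref>tree_cluster_0</ref>
    <ref>rock_cluster_0</ref>
  </level>
</world>
```

For a Forest3D-style world, each procedurally placed cluster of vegetation could be emitted as its own model and assigned to the tile-level covering its position.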
Thank you! I just watched it, it’s really relevant to what we’re doing, they went from DEM to kinematic slip. Would be nice to collaborate to build a full DEM-to-terramechanics pipeline.