Generating the fleet map transformation (#18)

Posted by @Achllle:

From the traffic-editor section on overlaying the SLAM map:

To derive such transforms, the traffic_editor GUI allows users to overlay robot maps on a floor plan and apply scale, translation and rotation transformations such that the two maps align correctly. The user can then apply the same transformations to convert between robot map and RMF coordinates when programming interfaces for their robot.

These transformations are not sufficient to feed into something like free_fleet. The transformation is referenced to a corner of the SLAM map image, which doesn’t necessarily coincide with the origin of the map defined in the corresponding yaml file. It also seems like the values need to be negated. Curious to hear what approaches can be taken to get these values, or if there is room for a tool/integration/extension that would calculate the correct parameters for a standard map_server map.
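The mismatch described here comes from the map_server yaml: its `origin` entry anchors the lower-left corner of the SLAM image in the robot's map frame, while the traffic_editor numbers are referenced to the image itself. Below is a minimal sketch (not an official RMF tool; the function name and the zero-yaw assumption are illustrative) of how the yaml origin relates image pixels to metric coordinates in the robot's map frame, which is the piece that has to be composed with the traffic-editor transform before anything is fed to free_fleet:

```python
import os
import yaml
from PIL import Image

def pixel_to_map_frame(col, row, map_yaml_path):
    """Convert an image pixel (col, row; row counted from the top) of a
    map_server/gmapping map into metric coordinates in the robot's map frame.
    Assumes the yaml origin has zero yaw, which is the common case."""
    with open(map_yaml_path) as f:
        info = yaml.safe_load(f)
    resolution = info['resolution']                # metres per pixel
    ox, oy, _yaw = info['origin']                  # pose of the image's lower-left corner
    image_path = os.path.join(os.path.dirname(map_yaml_path), info['image'])
    height_px = Image.open(image_path).size[1]
    x = ox + col * resolution
    y = oy + (height_px - row) * resolution        # image rows grow downward, map y grows upward
    return x, y
```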

Posted by @codebot:

Yeah. This is an issue that we’ve hacked around a few ways so far and have not yet cleanly solved end-to-end. It’s challenging because with large maps it’s not just that you need to create an affine transformation (one way or another, using nudged for example), but that you need to warp the robot-generated map so that it aligns nicely with the building floorplan.
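For the affine route, a sketch using the Python port of nudged might look like the following; the correspondence points are made up, and the exact API should be checked against the nudged release you install:

```python
import nudged

# Each robot-map point (metres, robot /map frame) is paired with the same
# physical feature picked off the building floorplan (RMF coordinates).
robot_map_points = [[10.2, 3.1], [42.7, 3.4], [42.9, 18.0]]    # hypothetical
rmf_points       = [[55.0, 20.1], [88.1, 21.0], [87.8, 35.2]]  # hypothetical

# Estimate the similarity transform taking robot-map points onto RMF points
tf = nudged.estimate(robot_map_points, rmf_points)
print('scale:', tf.get_scale())
print('rotation (rad):', tf.get_rotation())
print('translation:', tf.get_translation())

# Convert a robot pose into RMF coordinates
x_rmf, y_rmf = tf.transform([12.0, 5.5])
```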

Concrete example that we’ve seen: let’s say you have a long, somewhat featureless corridor where a robot needs to drive 50m, take a 90-degree turn, and drive 20m further. Even with excellent LIDAR and odometry, that map will drift somewhat, due to odometry error, limited sensor range, and so on. It is unlikely to align perfectly with a floorplan image from the building CAD. Even if it’s only a few grid cells off, that can be enough to create problems if the space is tight.

@gbiggs has created map_transformer for this purpose: https://github.com/osrf/map_transformer (“Transform points from one map to another to account for slight differences in maps”).

There are GUI elements in traffic-editor to help specify those map correspondence points, and the map_transformer library building blocks are all there. We have not (yet) integrated it end-to-end cleanly with free_fleet or other fleet adapters due to other pressing issues. But this topic is definitely of great interest, especially for worlds large enough that the coordinate-space warping between robot-generated maps and floorplans is significant and an affine transformation (however it is derived) won’t be good enough.
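To make the warping idea concrete: this is not the map_transformer API, just a sketch of the general technique it builds on, fitting a smooth 2D warp from a handful of correspondence points with scipy. All point values are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matching features picked in the drifted robot map and in the "golden" floorplan
robot_pts = np.array([[0, 0], [50, 0], [50, 20], [25, 0], [50, 10]], float)
plan_pts  = np.array([[0, 0], [49.3, 0.8], [48.9, 20.6], [24.8, 0.3], [49.1, 10.7]], float)

# Thin-plate-spline warps in both directions
robot_to_plan = RBFInterpolator(robot_pts, plan_pts, kernel='thin_plate_spline')
plan_to_robot = RBFInterpolator(plan_pts, robot_pts, kernel='thin_plate_spline')

# e.g. translate a waypoint annotated on the floorplan into the robot's map frame
waypoint_plan = np.array([[30.0, 5.0]])
waypoint_robot = plan_to_robot(waypoint_plan)
```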

Posted by @Achllle:

Agreed that for large maps this will be a problem. IMO that problem is ideally solved outside RMF/traffic-editor: either using a separate tool to warp the map (based on the very cool map_transformer), by doing loop closure or seeding SLAM with the building CAD, or by pointing the user to an image editor where they can manually fix the parts that got distorted (which actually works really well).

In the meantime, an illustrated example of how the coordinate transformations should be done for a reference case (e.g. a map from gmapping) would be very helpful. At least for me it wasn’t immediately obvious how to come up with those values, and I wasted a good amount of time trying to guesstimate them. For different mapping conventions, a similar transformation would have to be done. If that sounds like a plan, I’d be happy to submit a PR (to the multirobotbook?).
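As a stopgap in that spirit, here is a hedged sketch of the kind of worked example being asked for: a plain 2D similarity transform (scale, rotation, translation) applied in both directions between the robot's gmapping/map_server frame and RMF coordinates. The parameter values are placeholders, and the sign and ordering conventions should be verified against the transform your traffic-editor version actually reports.

```python
import math

def robot_map_to_rmf(x, y, s, theta, tx, ty):
    """Apply p_rmf = s * R(theta) * p_robot + t."""
    xr = s * (math.cos(theta) * x - math.sin(theta) * y) + tx
    yr = s * (math.sin(theta) * x + math.cos(theta) * y) + ty
    return xr, yr

def rmf_to_robot_map(x, y, s, theta, tx, ty):
    """Invert the same transform, e.g. for sending RMF goals back to the robot."""
    dx, dy = x - tx, y - ty
    xr = ( math.cos(theta) * dx + math.sin(theta) * dy) / s
    yr = (-math.sin(theta) * dx + math.cos(theta) * dy) / s
    return xr, yr

# Placeholder values -- replace with the transform derived for your site.
s, theta, tx, ty = 1.02, math.radians(1.5), 55.0, 20.1
print(robot_map_to_rmf(12.0, 5.5, s, theta, tx, ty))
```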

Posted by @codebot:

Hard to say. I guess there are a few approaches:

  1. Produce better robot maps with more loop closures, longer-range sensors, etc. This is the ideal solution, but it is not always feasible given the layout of some (very large) sites, such as long service corridors below/between very large buildings that are linear by nature, with no opportunities for loops.
  2. An external tool warps the robot-generated map so that it aligns with the “golden” building CAD map, and re-samples this warped space to produce a “corrected” occupancy-grid map with normal 5cm grid cells (or whatever the robot needs). This “corrected” map is then re-loaded into the robot fleet manager and used for robot navigation (a rough sketch of this resampling step follows this list).
  3. Some tool (internal or external) annotates the warp points and uses map_transformer to perform the bidirectional nonlinear transform on-the-fly as needed (continually) inside the robot fleet adapter, to translate between the “golden” metric space of the building CAD and the “warped” space of the large robot map.
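For illustration, here is a rough sketch of the resampling step in option 2 using scipy. The transform is a plain similarity for brevity (in practice it would be the nonlinear warp discussed above), all sizes and values are placeholders, and the image-row vs. metric-y flip and cell-centre offsets are ignored.

```python
import numpy as np
from scipy.ndimage import map_coordinates

robot_grid = np.random.randint(0, 2, size=(400, 600)).astype(float)  # stand-in occupancy grid
res = 0.05                                            # 5 cm cells in both grids
s, theta, tx, ty = 1.0, np.radians(1.5), 2.0, -1.0    # plan -> robot map (placeholder)

# Coordinates of every cell of the corrected (plan-aligned) grid, in metres
rows, cols = np.mgrid[0:400, 0:600]
x_plan, y_plan = cols * res, rows * res

# Map each corrected cell into the robot map frame, convert to robot-grid indices,
# and sample the robot grid there (nearest neighbour; -1 marks cells with no data)
x_rob = s * (np.cos(theta) * x_plan - np.sin(theta) * y_plan) + tx
y_rob = s * (np.sin(theta) * x_plan + np.cos(theta) * y_plan) + ty
corrected = map_coordinates(robot_grid, [y_rob / res, x_rob / res], order=0, cval=-1.0)
```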

Although option 2 is definitely cleaner, it presents “political” challenges if an existing/incumbent robot fleet already operating in a facility is using warped maps and has already annotated them with pickup/dropoff points. That incumbent vendor might not be interested in “correcting” their maps to a truly metric/rectilinear coordinate system, since they would have to re-annotate their navigation points and re-test their deployment. But I agree, this is a better strategy for new deployments. Sadly, I think we’ll need strategy 3 for some deployments with existing fleets.

As to the general documentation and strategy for deriving the transforms: yes indeed, we have also suffered this pain and have done it manually a few times. It’s the kind of thing where you do it once per site and forget about the pain until the next site :disappointed: so we’ve continually bumped fixing it “for real” into the future. But we are now working on a mini test / “textbook” deployment in our own office. It will be small given our space constraints, but will include enough integration points to be relevant (free_fleet, traffic management, infrastructure (PLC) integration, web UI, etc.). We’re creating a deployment guide to document this exercise at all levels. We’ll create a new robot-generated map for this, and will work on the traffic-editor map alignment UI tools as needed. This effort is currently underway and is a high-priority task for us (we have to replicate it at a demo site in a few weeks), so I think it will be done soon.