Proposal for Improving Connected Trajectory Planning (#630)

Posted by @alex-roba:

Currently, the trajectory planning implementation requires the robot to stop at each waypoint and rotate in place whenever a rotation is needed to face the next waypoint before proceeding, even when there are no traffic dependencies to manage.

I propose an improvement: whenever a portion of the plan has no traffic dependencies to wait on, we should send that entire sequence of waypoints and rotations to the fleet adapter at once. This would allow robots capable of constructing a smooth, connected plan to follow the waypoints efficiently.

By doing so, we can reduce unnecessary deceleration and acceleration at each waypoint, improving overall efficiency and motion continuity.

Posted by @mxgrey:

What you’re describing is exactly how the original API works (note that some documentation is outdated and I just opened a PR to fix it). Many users found the original API difficult to use, so we introduced the EasyFullControl API, which simplifies things by sending one waypoint at a time. We’ve found that many commercial mobile robot platform APIs only accept one waypoint command at a time (rather than a smooth path) anyway, so in many cases there was no downside for users to switch.
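
For contrast, here is a minimal sketch of that one-destination-at-a-time pattern. The names here (`Destination`, `navigate_to`, `CommandFinished`) are purely illustrative, not the actual EasyFullControl signatures:

```cpp
// Illustrative only: not the real EasyFullControl signatures. This sketches
// the "one destination at a time" pattern many robot platform APIs expose.
#include <functional>
#include <iostream>
#include <string>

// Hypothetical single goal on a particular map.
struct Destination
{
  std::string map_name;
  double x = 0.0;
  double y = 0.0;
  double yaw = 0.0;
};

// Callback invoked once the robot reports it has reached the goal.
using CommandFinished = std::function<void()>;

// Hypothetical integration hook: forward exactly one goal to the robot's own
// navigation stack, then report completion so the next goal can be issued.
void navigate_to(const Destination& goal, CommandFinished finished)
{
  std::cout << "Sending single goal: (" << goal.x << ", " << goal.y << ")\n";
  // ... wait for the robot to arrive, then:
  finished();
}
```

Because the next destination is only issued after the previous one reports completion, a smooth, connected path never reaches the robot under this pattern.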

The original API is still supported, and in fact the EasyFullControl API is implemented on top of it, so you can use that if your robot’s API supports smooth paths and you think it’s worth the extra integration challenge.
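
For reference, a rough, untested sketch of what that integration looks like with the original full-control API. Exact headers and signatures may differ between rmf_fleet_adapter releases, and `send_connected_path` is a placeholder for your platform's own path command:

```cpp
// Rough sketch of the original full-control integration; exact headers and
// signatures may differ between rmf_fleet_adapter releases.
#include <rmf_fleet_adapter/agv/RobotCommandHandle.hpp>

#include <string>
#include <utility>
#include <vector>

class SmoothPathCommandHandle
  : public rmf_fleet_adapter::agv::RobotCommandHandle
{
public:
  void follow_new_path(
    const std::vector<rmf_traffic::agv::Plan::Waypoint>& waypoints,
    ArrivalEstimator next_arrival_estimator,
    RequestCompleted path_finished_callback) override
  {
    // The adapter hands over every waypoint in the dependency-free segment at
    // once, so a smooth-path-capable robot can blend through them instead of
    // stopping and rotating at each one.
    send_connected_path(waypoints);  // placeholder for your platform's command
    _next_arrival_estimator = std::move(next_arrival_estimator);
    _path_finished_callback = std::move(path_finished_callback);
  }

  void stop() override
  {
    // Placeholder: command the robot to halt in place.
  }

  void dock(
    const std::string& dock_name,
    RequestCompleted docking_finished_callback) override
  {
    // Docking is out of scope for this sketch.
    docking_finished_callback();
  }

private:
  void send_connected_path(
    const std::vector<rmf_traffic::agv::Plan::Waypoint>& /*waypoints*/)
  {
    // Placeholder: translate the waypoints into your platform's path command.
  }

  ArrivalEstimator _next_arrival_estimator;
  RequestCompleted _path_finished_callback;
};
```

The catch is what comes next: once the robot is partway through such a path, your code is also responsible for keeping the position reports accurate.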

The main difficulty of the original API is that the system integrator bears a much greater burden of accurately reporting where the robot is with respect to the navigation graph using the update_position APIs. The EasyFullControl API takes care of this automatically. Currently it’s not very feasible to get a good middle ground between the two while we rely on a navigation graph with discrete vertices and edges. The major difficulties happen around doors, lifts, and parking spots where the semantics of whether a robot is “on” a certain vertex or edge becomes very important. We can automatically infer some things with the more narrow EasyFullControl API that we can’t infer with the more general original API.
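
To give a sense of that burden, here is a rough sketch of the kind of reporting the integrator has to do. The update_position overloads shown follow my reading of RobotUpdateHandle and may differ between releases; the branch logic (deciding which overload applies) is the part that is easy to get wrong:

```cpp
// Rough sketch of the position-reporting burden under the original API; the
// update_position overloads shown here may differ between releases.
#include <rmf_fleet_adapter/agv/RobotUpdateHandle.hpp>

#include <Eigen/Geometry>
#include <optional>
#include <string>
#include <vector>

void report_robot_state(
  rmf_fleet_adapter::agv::RobotUpdateHandle& handle,
  const std::string& map_name,
  const Eigen::Vector3d& pose,               // (x, y, yaw) from localization
  std::optional<std::size_t> on_waypoint,    // graph vertex, if settled on one
  const std::vector<std::size_t>& on_lanes)  // graph edges currently occupied
{
  if (on_waypoint.has_value())
  {
    // The robot is "on" a vertex: report the index and heading so semantics
    // around doors, lifts, and parking spots are interpreted correctly.
    handle.update_position(*on_waypoint, pose[2]);
  }
  else if (!on_lanes.empty())
  {
    // The robot is traversing known lanes (graph edges).
    handle.update_position(pose, on_lanes);
  }
  else
  {
    // Off the graph: report a map-frame position and let the adapter try to
    // merge the robot back onto nearby waypoints or lanes.
    handle.update_position(map_name, pose);
  }
}
```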

I plan on improving the overall situation in the next generation by supporting a free space representation of the navigation graph. At that point it should be straightforward enough for users to simply report where their robot is located, and we can use geometry to make semantic inferences. That should work without us needing to track each step of what command the robot is following.

