Hi everyone,
We recently completed our full migration from ROS 1 Noetic straight to ROS 2 Kilted Kaiju. We decided to skip the intermediate LTS releases (Humble/Jazzy) so we could land on the bleeding-edge features and be prepared for the next LTS, Lyrical, in May 2026.
Some of you might have seen our initial LinkedIn post about the strategy, which was kindly picked up by OSRF. Since then, we’ve had time to document the actual execution.
You can see the full workflow (including a video of the “trash bin” migration) in my follow-up post here:
Watch the Migration Workflow on LinkedIn
I wanted to share the technical breakdown here on Discourse, specifically regarding our usage of Pixi, Executors, and the RMW.
1. The Environment Strategy: Pixi & Conda
We bypassed the system-level install entirely. Since we were already using Pixi and Conda for our legacy ROS 1 stack, we leveraged this to make the transition seamless.
- Side-by-Side Development: This allowed us to run ROS 1 Noetic and ROS 2 Kilted environments on the same machines without environment variable conflicts.
- The “Disposable” Workspace: We treated workspaces as ephemeral. We could wipe a folder, resolve dependencies, and install the full Kilted stack from scratch in under 60 seconds (installing user-space dependencies only).
Pixi Gotchas:
- Versioning: We found we needed to remove the `pixi.lock` file when pulling the absolute latest build numbers (since we were re-publishing packages with rapidly increasing build numbers during the migration).
- File Descriptors: On large workspaces, Pixi occasionally ran out of file descriptors during the install phase. A simple retry (or a `ulimit` bump) always resolved this.
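For reference, the `ulimit` bump can also be done from inside a Python wrapper script before kicking off the install. This is a generic sketch (not part of our tooling) using the standard-library `resource` module, equivalent to `ulimit -n` in the shell:

```python
import resource

def bump_nofile_limit() -> tuple[int, int]:
    """Raise the soft open-file limit toward the hard cap (like `ulimit -n`)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY and soft < hard:
        # Unprivileged processes may raise the soft limit up to the hard cap.
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

print(bump_nofile_limit())
```

Note that `resource` is Unix-only, which is fine for a Linux-based ROS setup.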
2. Observability & AI
We relied heavily on Claude Code to handle the observability side of the migration. Instead of maintaining spreadsheets and bash scripts, we had Claude generate “throw-away” web dashboards to visualize:
- Build orders
- Real-time CI status
- Package porting progress
(See the initial LinkedIn post for examples of these dashboards)
3. The Workflow
Our development loop looked like this: Feature Branch → CI Success → Publish to Staging → `pixi install` (on Robot) → Test
Because we didn’t rely on baking Docker images for every test, the iteration loop (Code → Robot) was extremely fast.
4. Technical Pain Points & Findings
This is where we spent most of our debugging time:
Executors (Python Nodes):
- `SingleThreadedExecutor`: Great for speed, but lacked the versatility we needed (e.g., nodes that rely on callbacks within callbacks).
- `MultiThreadedExecutor`: This is what we are mostly running now. We noticed some performance overhead, so we pushed high-frequency topic subscriptions (e.g., `tf` and `joint_states`) into C++ nodes to compensate.
- Experimental `EventsExecutor`: We tried to implement this but couldn’t get it stable enough for production yet.
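For anyone who hasn’t hit the “callbacks within callbacks” issue: it can be reproduced without ROS at all. This plain-Python sketch uses `concurrent.futures` as a stand-in for the rclpy executors to show why a single worker thread starves a nested call, and why `MultiThreadedExecutor` (together with a `ReentrantCallbackGroup` in actual rclpy code) resolves it:

```python
from concurrent.futures import ThreadPoolExecutor

def outer_callback(pool: ThreadPoolExecutor) -> str:
    # Analogous to calling a service (and waiting on the result) from inside
    # a subscription callback: the outer callback blocks until the inner runs.
    inner = pool.submit(lambda: "inner done")
    return inner.result(timeout=2)

# One worker (~ SingleThreadedExecutor): the only thread is stuck inside
# outer_callback, so the inner task is never scheduled -> TimeoutError.
# Two workers (~ MultiThreadedExecutor + ReentrantCallbackGroup): the inner
# task runs on the second thread and the nested call completes.
with ThreadPoolExecutor(max_workers=2) as pool:
    print(pool.submit(outer_callback, pool).result())  # prints "inner done"
```

This is an analogy, not rclpy code, but the scheduling behavior is the same: the single-threaded executor cannot service a nested callback while the outer one is blocking on it.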
RMW Implementation:
- We started with the default Fast DDS (FastRTPS) RMW but encountered stability and latency issues in our specific setup.
- We switched to CycloneDDS and saw an immediate improvement in stability.
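For anyone wanting to try the same switch: the RMW is selected via the standard `RMW_IMPLEMENTATION` environment variable, which must be set before any node starts (e.g., `export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp` in your shell or Pixi activation, with the CycloneDDS RMW package installed in the environment). From Python it looks like this:

```python
import os

# Must be set before rclpy.init() is called anywhere in the process;
# the variable is read when the middleware layer is loaded.
os.environ["RMW_IMPLEMENTATION"] = "rmw_cyclonedds_cpp"
```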
Questions for the Community:
- Has anyone put the new Zenoh RMW through its paces in Kilted/Rolling yet? We are eyeing that as a potential next step.
- Is anyone else testing Kilted in production contexts yet, or has anyone had better luck with `EventsExecutor` stability?
Related discussion on tooling: Pixi as a co-official way of installing ROS on Linux