Pixi as a co-official way of installing ROS on Linux

It’s that time of the year when someone with too much spare time on their hands proposes a radical change to the way ROS is distributed and built. This time, it’s my turn.

So let me start by acknowledging that without all the tooling the ROS community has developed over the years (rosdep, bloom, the buildfarm - donate if you can, I did! -, colcon, etc.) we wouldn’t be here. Twenty, even ten years ago, it was almost impossible to run a multilanguage, federated, distributed project without these tools; nothing like that existed! So I’m really grateful for all that.

However, the landscape is different now. We now have projects like Pixi, conda-forge and so on.

As per the title of my post, I’m proposing that Pixi would not only be the recommended way of installing ROS 2 on Windows, but also on Linux, or at least co-recommended, for ROS 2 Lyrical Luth and onwards.

One of the first challenges that new users of ROS face is learning a new build tool and a ROS-specific development workflow. Although historically we really needed to develop all the tools I’ve mentioned, the optics of having our own build tool and package management system don’t help, reinforcing the perception that some users still have of ROS as a silo that doesn’t play nice with the outside world.

The two main tools that a user can replace with Pixi are colcon and rosdep, and to some extent bloom.

  • colcon has noble goals, becoming the one build tool for multilanguage workspaces, and as someone who has contributed to it (e.g. extensions for colcon to support Gradle and Cargo) I appreciate having it all under the same tool. However, it hasn’t achieved widespread adoption outside ROS.
  • rosdep makes it easy to install multilanguage dependencies; however, it still has long-standing issues ( Add support for version_eq · Issue #803 · ros-infrastructure/rosdep · GitHub ) with features that are taken for granted in other package managers. And because of our distribution model, ROS packages are installed at the system level, not everything is available via APT, etc.
  • bloom works great for submitting packages to the buildfarm. Pixi provides rattler-build; the process only requires a YAML file and can publish not only to prefix.dev, but also to Anaconda.org and JFrog Artifactory.

I’ve been using Pixi for over a year for my own projects, some of which use ROS and some of which don’t, and the experience couldn’t have been better:

  • No need for vendor packages thanks to conda-forge and robostack (over 43k packages available!)
  • No need for root access, all software is installed in a workspace, and workspaces are reproducible thanks to lockfiles, so I have the same environment on my CI as on my computer.
  • Distro-independent. I’m running AlmaLinux and Debian, I no longer have to worry whether ROS supports my distro or not.
  • Pixi can replace colcon thanks to the pixi build backends ( Building a ROS Package - Pixi )
  • Pixi is fast! It’s written in Rust :wink:
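
To make the workspace point concrete, here is a minimal sketch of what such a setup could look like. The channel and package names follow current conda-forge/RoboStack conventions but should be verified for your ROS distro, and the pixi commands are shown commented out since they require Pixi to be installed:

```shell
# Sketch: a reproducible ROS 2 workspace manifest written by hand.
# Channel/package names are assumptions based on RoboStack conventions.
cat > pixi.toml <<'EOF'
[workspace]
name = "my_ros_ws"
platforms = ["linux-64"]
channels = ["https://prefix.dev/conda-forge", "https://prefix.dev/robostack-jazzy"]

[dependencies]
ros-jazzy-ros-base = "*"
colcon-common-extensions = "*"
EOF

# With Pixi installed, these commands would resolve, lock, and use the env:
# pixi install          # solves dependencies and writes pixi.lock
# pixi run ros2 doctor  # any command runs inside the isolated environment
```

The pixi.lock file generated by `pixi install` is what gives you the same environment on CI as on your machine.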

Also, from the ROS side, this would reduce the burden of maintaining the buildfarm, the infrastructure, all the tools, etc., but that’s probably too far in the future, and realistically it’d take a while even if there’s consensus to replace it with something else.

Over the years, like the good open-source citizens we are, we have collaborated with projects outside the ROS realm. For example, instead of rolling our own transport like we had in ROS 1, we’ve worked with FastDDS, OpenSplice, CycloneDDS and now Zenoh. I’d say this has been quite symbiotic and we’ve helped each other. I believe collaborating with the Pixi and RoboStack projects would be extremely beneficial for everyone involved.

@ruben-arts can surely say more about the benefits of using Pixi for ROS

27 Likes

Up front, I don’t know much about how conda / pixi work internally and have just had a glimpse at their documentation.

One big question comes to my mind:

Can I run a node that needs advanced privileges (e.g. raw socket access) while using a conda environment?

Note that Linux places a lot of security shenanigans in your way these days when trying to use shared libraries from ‘userland’ together with privileged processes.

1 Like

As the person who pushed for Pixi on Windows, I’m somewhat in favor of making Pixi a co-recommended way to consume ROS packages on Linux as well. Committing to Pixi more heavily might also make macOS a viable target platform again in the future.

However, there are a number of things I want to mention:

  1. We’d need to have a deep conversation/research about whether pixi build backends (and in particular pixi-build-ros) can completely replace colcon. Colcon has a lot of features and plugins, and pixi-build-ros is still marked as a “preview feature”.
  2. There have already been 4 major transitions of the build tool in ROS (rosbuild → catkin_make → catkin → ament_tools → colcon), so we’d have to carefully gauge how much community interest there is in yet another transition.
  3. I’m of the opinion that we can’t really get rid of rosdep. At least for the near/medium term, I think that building system packages (apt/rpm) is still a thing that ROS should do, especially given how heavily they are downloaded, so we’ll at least need rosdep for that.
  4. I do think it would be worthwhile to integrate rosdep with pixi like we do for our other system packages.
  5. I think we’ve found that removing vendor packages is harder than it looks. We were indeed able to remove some from the core with the switch of Windows to pixi, but we couldn’t complete the project because sometimes upstreams just don’t integrate well without us providing additional glue.
  6. I do think that we should look into making bloom integrate with rattler-build, and pushing more things to prefix.dev .

All of this is to say: I think we should work towards integrating ROS more with the Pixi infrastructure, but I also think that it needs to be handled slowly and carefully. If we do the above integrations, and we find that over time the community largely transitions over to them, then we can consider sunsetting the custom tools. My opinion, of course.

9 Likes

I agree, I don’t expect this process to be immediate. However, I wonder how many of those features and plugins in colcon are used by the majority of users.

Also true, but one criticism I hear often is that you have to follow the ROS way and ROS can’t be integrated into an existing codebase, starting with the build tool and the package management system.

I’d assume the download numbers are so high because that’s the recommended way of installing ROS. If there were a different way, perhaps users wouldn’t download system packages.

I’m on the opposite side of this: I think it’s us who should integrate with other systems, instead of integrating other systems into our tooling. But if it’s a temporary measure so that users migrate to Pixi, it’d be worth exploring.

Do you remember what was so ROS-specific that it required the additional glue?

Thanks for all the work getting ROS on Windows with Pixi. I only started using Pixi on Linux because one day I checked the instructions for Windows and realized how straightforward they were. I assume Pixi was chosen as the build tool for Windows so quickly because the tool we used originally (Chocolatey) was just not the right one; such a quick change would be harder to justify for Linux, given that the current tooling works and has worked for a long time.

1 Like

I just found out that @ahcorde submitted Install ros2 on Ubuntu with pixi by ahcorde · Pull Request #5833 · ros2/ros2_documentation · GitHub and Pixi config file to install ros2 on Ubuntu by ahcorde · Pull Request #1721 · ros2/ros2 · GitHub to install ROS 2 on Ubuntu with Pixi. The instructions still use colcon for building ROS 2, which would be a separate discussion, but a great first step nonetheless.

1 Like

As far as I know, yes. Conda is just a virtual environment; it does no sandboxing, namespacing, or anything similar. Sourcing a conda environment is really just like sourcing a colcon workspace.

While I’m a heavy user of nix-ros-overlay, I recognize the same advantages that Pixi offers for managing ROS. The strong reproducibility guaranteed by its lock file provides significant benefits :slight_smile:

2 Likes

Regarding the transition from bloom + colcon to pixi + pixi-build-ros, they could coexist for the time being, as they do now. You can use colcon within a conda environment, so I assume you can do the same in a pixi environment and do not need pixi-build-ros.

What is important, IMHO, is that this is established as an official method of installing ROS 2 with the same level of support as we have now with the build farm and the bloom packages.

A “nice to have” would eventually be the possibility to use conda-forge packages alongside ROS packages in the RoboStack channels, without having to “vendor” them first, as we do now with bloom packages.

This implies that the ROS libraries will be installed in the user’s home directory.

Nowadays a process that has advanced privileges may not load or use a shared library from the home directory (there is a workaround: adding the lib to ldconfig). The reason for this is that one could just replace the shared library with “bad” code and have it execute with higher privileges.

Currently I can create an executable, make sure that I don’t use shared libraries from my own workspace, give it raw socket privileges, and it will just work because the ROS libraries are system libraries. If ROS is installed in the home directory, this stops working.

Thanks @esteve for saying that.
There is often a lot of negativity from people who weren’t “there” and can’t appreciate how much ROS was ahead of its time in solving these problems.

7 Likes

They are installed wherever the Pixi workspace is, which is usually in a user’s home directory.

True. This issue happens when building a colcon workspace with isolated install and build spaces; when using --merge-install or --symlink-install, it does not happen, or at least I haven’t been able to reproduce it.

Take for example GitHub - esteve/my_pixi_ros_ws. I’ve created a Pixi workspace that has two packages: a library that creates a raw socket and a node that calls that library. One way of solving this issue is to build the node with rpath enabled (see my_pixi_ros_ws/src/my_cmake_ros_node_pkg/CMakeLists.txt at main · esteve/my_pixi_ros_ws · GitHub) and then add the CAP_NET_RAW capability to the node executable (sudo setcap cap_net_raw=ep ./install/lib/my_cmake_ros_node_pkg/my_cmake_node).
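
For reference, the rpath part of that workaround boils down to two CMake variables. This is a sketch under the assumption that the build runs inside an activated Pixi/conda environment (where CONDA_PREFIX points at the environment root); the linked repository may do it differently:

```cmake
# Embed an absolute RUNPATH pointing at the environment's lib directory.
# Note: for setcap'd (secure-execution) binaries the dynamic loader ignores
# LD_LIBRARY_PATH and $ORIGIN-relative rpath entries, so the path must be
# absolute for the privileged executable to find its shared libraries.
set(CMAKE_INSTALL_RPATH "$ENV{CONDA_PREFIX}/lib")
set(CMAKE_BUILD_WITH_INSTALL_RPATH ON)
```

After installing, `sudo setcap cap_net_raw=ep <executable>` grants just the raw-socket capability instead of running the whole node as root.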

In any case, this is just an example, the Pixi and conda forums are probably better places for troubleshooting these issues.

As a Pixi developer, I don’t want to push “our ideas” as these decisions should be made because they’re simply better for the ROS users. But here are some clarifications to your questions/remarks.


Pixi is a single tool that covers several things ROS users already glue together today: installing dependencies, managing workspaces, running tasks, and building distributable packages. Each part works great on its own, but they extend each other perfectly.

1. Environment installation

Pixi installs conda and PyPI packages into isolated environments. Think of it as replacing apt/rpm/pip for development environments: a similar experience to Docker in terms of not breaking your machine, but without the overhead of a runtime.

Key points:

  • Pixi environments can live anywhere (not in hard-coded paths).
  • No container abstraction: it’s a normal host experience, thus making it extremely easy to containerize.
  • You use the environment via pixi run or pixi shell, similar to activating a ROS workspace (source setup.bash).
  • Works well for mixing system-level deps, Python, C++, CUDA, etc. in one place.
  • Works on all platforms (Linux, macOS, Windows).

For ROS users: this already solves a big chunk of what rosdep + system package managers are used for.

2. Task management

Pixi can run predefined tasks with dependencies and caching. This replaces most Makefile use cases.

  • One command to build, test, lint, or run applications.
  • Tasks are portable on all platforms and documented in the Pixi manifest.
  • Much easier onboarding than a README with “run these 7 commands in the right order”.

This is mainly about reducing project-specific knowledge.
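
As an illustration, a hypothetical [tasks] table in a pixi.toml (the task names and commands are mine, not from any existing project; see the Pixi documentation for the exact schema):

```toml
[tasks]
# One memorable entry point per workflow step, instead of a README listing
# raw shell commands; each task runs inside the Pixi environment.
build = "colcon build --merge-install"
test = { cmd = "colcon test", depends-on = ["build"] }
lint = "ament_flake8 src"
```

A newcomer then only needs `pixi run test`, and the build dependency is handled automatically.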

3. Workspace management (preview)

Pixi can manage source dependencies inside a workspace, similar to colcon, but based on the conda package model.

Important differences:

  • Designed to package anything (C++, Python, CUDA, Fortran, R, etc.).
  • Uses recipes (like Debian), but hides most complexity behind Pixi build backends.
  • Build backends play the same role as Python build backends do for wheels.

This is still evolving, but it’s where Pixi goes beyond “just an environment manager”.

4. Package building (preview)

Pixi lets you go from local development → distributable binary packages with the same tool.

  • pixi build turns your local source into a conda package.
  • Sharing is similar to PyPI, but works for any language or build system.
  • This enables binary reuse across teams and CI without Docker images.

This opens the door to:

  • Faster CI
  • Fewer source rebuilds
  • Clearer separation between “development” and “distribution”

Practical starting points

Combine Pixi and colcon

Use Pixi only for environment installation and keep using colcon.

This already improves dependency management without changing workflows.

Example (2 years old but still valid): GitHub - ruben-arts/turtlesim-pixi: A super fast way to run the turtle sim

Next step: Pixi build integration

  • Let Pixi do what rosdep does: install all the dependencies from a package.xml
  • Let Pixi do what colcon does: build all packages from source based on the package.xml

In this case, Pixi does not replace rosdep or colcon, but it is able to run on the same files, so different users can decide what their favorite tool for the job is.

Examples:


Extra information:

Pixi was built to solve the ROS package management issues we had in real robotics development. In its first year we mostly focused on getting these environment installations to work, and now we’re trying to focus more on specific use cases. Our biggest focus areas are robotics, machine learning, AI, data science and HPC.

Feel free to join our Discord if you run into issues while trying Pixi; we’re happy to help you get started.

2 Likes
curl -fsSL https://pixi.sh/install.sh | sh

Do you find it fair to compare this installation method with the ROS installation manuals? :smiley: I don’t, because ROS could have the very same one-line install (anything except Windows programs can). There are just people who are very much afraid of this kind of install step. What is true, however, is that this one-line install will work on more platforms than the ROS install. (Btw., Pixi’s install.sh has 250 lines.)

Last time I was playing with RoboStack (which, if I’m not wrong, is the conda base of Pixi), I really disliked the choice to use the newest versions of all dependencies. This doesn’t make sense to me because different libraries have different paces of tracking their own dependencies. It would make much more sense to me to specify some kind of “runtimes” that would be bound (ideally) to the Ubuntu-distributed versions of packages for the Tier 1 OS of your chosen ROS distro. Because when I’m developing a ROS package for Kilted, I know that it has to be compatible with exactly those versions (to be buildable by the buildfarm, etc.). So it is difficult to write the package to be compatible both with the Ubuntu versions and with the conda versions. I know that conda has a superb dependency version matcher, but it shouldn’t be the user who explicitly specifies the version of dependencies like libpcl or cmake (unless they explicitly want to diverge). Has this changed somehow since the time I tinkered around?

2 Likes

With that part of my response I was trying to explain that it’s easy to use tasks with pixi run instead of a README with specific shell/platform-based commands.

For the installation of Pixi itself, we recommend using the curl command with the install script, as we have the most control over that style of installation and can help the user get started. You can also use brew, winget, nixpkgs, scoop, pacman, apk, and in the future hopefully apt, pip and more.

You’re right, conda is much less locked in, which results in potentially more knowledge being required to build your own packages. The RoboStack channel already forces some versions to be specific; this is done through a “distro mutex”: ros2-distro-mutex - robostack-kilted.

This flexibility hasn’t changed much, as it’s kind of a big feature of using RoboStack and conda-forge. What you’re pointing at is “release-based distributions” vs “rolling-release distributions”. ROS has always focused on release-based package management. This is great for consistency and reproducibility, but it’s limiting when you try to build on the edge of what is currently available; especially in the fast-moving robotics space this is a big problem. conda-forge is a rolling-release distribution: it gives you the freedom to decide what packages work best together, but it shifts that burden onto the user when it’s not well prepared by the package builders.

To lower the current burden of managing these packages, Pixi always gives you a lockfile (pixi.lock), which makes sure you don’t get unexpected behavior when you reinstall. This does add the responsibility of updating the dependencies from time to time, which would be a pixi update followed by running the CI setup to make sure everything keeps working.

This is what I recall being the biggest issue for the infra team, as you would need to hand that responsibility over to the companies. While that might sound scary, this is basically how all major non-system package managers work: pip, npm, cargo, Julia, Go, etc.
Pixi (conda) takes this workflow and extends that to a bigger set of package formats.

For what it is worth, that is exactly what I did with the pixi.toml for Windows (ros2/pixi.toml at rolling · ros2/ros2 · GitHub); I pinned the dependencies so they are close to exactly the same as the Ubuntu versions. Personally, I think that is much better and more sustainable. One of the main reasons we stopped supporting macOS was that the rolling nature of Homebrew meant it was impossible to deliver anything for it, and things broke very often.

Personally, I think the ROS 2 core should deliver a very locked down version of dependencies that we know works. If companies need or want to relax that, it is up to them to do it for their own installation.

1 Like

Snaps have the runtime bases. Each snap can decide on which runtime it will be based, and that defines versions of dependencies. I’m not sure if cross-runtime dependencies are allowed, but I’d guess not.

Yeah, I understand this made your life much easier. On the other hand, if you pin down to patch versions, the users lose any possibility of getting security fixes until you decide to bump the package in the lock file…

We could certainly revisit that particular assumption and unpin the patch versions to allow some wiggle room. My main point stands, though: using a rolling distribution leads to lots of pain.

2 Likes

Disclaimer: pixi & robostack developer here! :slight_smile:

@peci1 – I feel like you got the version resolution part kinda backwards: the conda ecosystem gives total flexibility to each individual package to specify dependency ranges such as pcl 1.15.*. A package can define the version ranges it is comfortable with and change the pins accordingly, and unlike Ubuntu, you can even still publish a package into a repository that requires python 2.7 – the solver will figure it out! Additionally, in the conda ecosystem we can make patches after the fact – so, for example, if you figure out that pcl 1.15.2 has a breaking change, you’ll be able to modify the dependency to >=1.15.0,<1.15.2.

Lastly, whether the distribution is rolling or not depends on the distribution. RoboStack is built on top of conda-forge, which follows a rolling distribution model with loosely coupled repositories that create the packages and a “global pinning” for shared libraries to keep things synchronized. On top of that, RoboStack uses a meta-package to pin specific versions of robotics-related libraries (like Boost, PCL, …) that we tested and know are compatible. Every package in RoboStack has a dependency on this ros-mutex X, which forces this dependency resolution and makes the underlying packages less “rolling”.

However, it’s totally possible to come up with a new “robo-forge” where we have only robotics related packages and release snapshots every 6 months! Or it’s totally possible to take whatever is in conda-forge, and create a snapshot of all that every 6 months and only allow patch-updates afterwards! There is nothing inherent to Conda packages - it’s all distribution choices. All Conda package managers do is use a SAT solver to resolve to a working combination of packages :slight_smile:

1 Like

I understand this statement.

A helpful approach for a body like OSRF is to maintain a (distro) package that constrains the dependencies. For example, that package could include all the specific dependency versions the release testing was done with. It could proactively be updated to allow users to move to the latest “distro” when OSRF makes another distro package release.

The pseudo metadata of such a package could look like this:

name: "ros-jazzy-distro"
version: "2026.1.21"
constraints:
  # Pin specific versions
  # This is basically what you get with the debian package experience (without workarounds)
  # Extremely restrictive, but a big guarantee of reproducibility.
  - cmake ==4.2.2
  - pcl ==x.x.x

  # Exclude known vulnerable packages
  # Best practice IMO, give space with a lower bound, and proactively avoid the use of harmed packages.
  - gcc >10, !11.1.1, !12.0.1  

  # Specify a limiting range
  # This could provide a good balance, but blocks users from moving forward without getting upstream updates.
  - python >3.10,<3.14

Note that a constraint is not a dependency, so it only says that when you install the ros-jazzy-distro package, you can only install the constrained packages within their given spec.

The powerful part is that you can help the users with this package, but it doesn’t block them from experimenting with other versions if they let go of this distro package. So, the same level of hand-holding as the current distro approach, with the flexibility for the user to unshackle their environments when needed.

The SAT solving used in the conda ecosystem allows you to be both more restrictive and more flexible than the other options. I do see that it takes some relearning and extra effort to maintain the vulnerability exclusions. Big vulnerabilities are already taken care of by the conda-forge ecosystem. That said, these vulnerabilities are something that a bigger body could perfect, similarly to what Canonical is now doing for the Ubuntu ecosystem.

3 Likes