Seeking ROS2-Based Drone with Onboard SLAM Capability for Educational Use

Hello everyone,

I’d like your recommendations for ROS2-compatible drones suitable for educational and research purposes. I’ve been through several options but haven’t found an ideal solution yet.

My Requirements:

  • ROS2 native support or well-maintained ROS2 integration

  • Onboard sensors capable of SLAM (3D LiDAR, RGBD camera, or stereo camera)

  • Ability to operate indoors without external positioning infrastructure

  • Budget: approximately $6,000 USD

What I’ve Tried/Considered:

I came across this helpful discussion: https://discourse.openrobotics.org/t/trying-to-find-pre-built-drones/44168, which recommends the Crazyflie platform. While Crazyflie is excellent for swarm research and basic control, it requires external infrastructure such as motion capture systems or marker-based localisation (e.g., Lighthouse or Loco Positioning), which isn’t practical for my use case.

Similarly, I’ve used DJI Tello drones, but they share the same limitation—reliance on external environmental setup for accurate localisation and mapping.

What I’m Currently Considering:

I’ve been looking at the ModalAI Starling 2 Max (https://www.modalai.com/products/starling-2-max?variant=48172375900484), which appears promising with its VOXL 2 flight computer, stereo cameras, and PX4/ROS2 support. However, I’d appreciate feedback from anyone who has hands-on experience with this platform, particularly regarding:

  • Ease of integration with ROS2

  • Reliability of onboard VIO/SLAM for indoor navigation

  • Suitability for student projects and coursework

  • Documentation quality and community support

Use Case:

The drones will be used for teaching autonomous navigation, path planning, and SLAM concepts to postgraduate students. Ideally, students should be able to develop and test algorithms in simulation (Gazebo/Webots/PyBullet) and deploy them on real hardware with minimal friction.

I’d greatly appreciate any recommendations, alternatives, or insights from those with experience in this area. If there are other platforms I should consider within this budget range, please do share.

Thank you in advance for your help!

Hi there,

I recently used the VOXL 2 board for an object-following project during the second half of last year.

The VOXL 2 uses its own pipe-based middleware, the Modal Pipe Architecture (MPA). For ROS2 integration, you’ll need the voxl-mpa-to-ros2 node to publish that data into the ROS2 ecosystem. The board I worked with ran VOXL SDK 1.4.5 on Ubuntu 18.04, on which you can set up Foxy. I instead used a Docker container with Ubuntu 22.04 and Humble: I ran the voxl-mpa-to-ros2 bridge on the host and shared the network with the 22.04 container. I remember having to adjust the DDS configuration to get the VOXL topics to bridge into the container. The bridging is relatively stable, but it sometimes breaks when I run the Foxglove bridge and inject a new subscriber, resulting in occasional std::bad_alloc errors. On the PX4 side, the firmware on the VOXL 2 board is 1.14.0, so the sample scripts from px4_ros_com work. For simulation, I did not try the VOXL 2 simulations (GitHub - modalai/PX4-SITL_gazebo-classic at voxl-dev) or their HITL; I just ran the most up-to-date PX4 Gazebo SITL, so I did not simulate their VIO module.
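For reference, a minimal sketch of the container setup described above, assuming Docker on the VOXL 2 host, host networking for DDS discovery, and the stock ros:humble image (the image name, domain ID, and RMW choice here are placeholders, not what I necessarily used; adjust them to your own setup and DDS configuration):

```shell
# Sketch: run a Humble container that shares the host network so DDS
# discovery can reach the voxl-mpa-to-ros2 bridge running on the host.
# ROS_DOMAIN_ID must match the bridge; RMW_IMPLEMENTATION is an assumption.
docker run -it --rm \
  --network host \
  --ipc host \
  -e ROS_DOMAIN_ID=0 \
  -e RMW_IMPLEMENTATION=rmw_fastrtps_cpp \
  ros:humble \
  bash -c "source /opt/ros/humble/setup.bash && ros2 topic list"
```

If the bridged VOXL topics show up in `ros2 topic list` inside the container, discovery is working; if not, the DDS configuration (domain ID, RMW implementation, or network interface selection) on one of the two sides usually needs changing.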

We mainly used their relocalization module (see the “Relocalization” page in the ModalAI Technical Docs) to compensate for VIO drift, which depends on the features present in the area. I don’t have the numbers anymore, so I can’t say quantitatively how reliable the onboard VIO is.