Manta-ray biomimetic ROS2 robot (pressure-compensated, should go pretty deep?)

Hello there,
I’ve been working on a pressure-compensated, ROS2-based biomimetic robot: a manta-ray ROV/AUV thing. The whole idea is to build something cheap enough, quiet enough, and open enough that people who normally can’t afford to do underwater work (science, conservation, small NGOs, small science teams in budget-constrained countries, whoever) can actually use it.

Right now it’s fully pressure-compensated, runs ROS2, and I’m working on a monitoring-grade CTD based on the OpenCTD design. The camera is pressure-compensated too. I’m aiming for a cruise speed of around 1 m/s.

Runtime has been surprisingly decent: about 6 hours on a single 5300 mAh pack just for actuation (compute has its own identical battery), which works out to roughly 900 mA average draw. Scaling up the battery is easy, so mission time should grow pretty cheaply.

The robot in the video is actually the previous hull version; I already have a new design coming together from what I learned with the first one. And because literally everything is oil-filled and air-free, I think it can go VERY deep. How deep? No clue yet, but structurally it should handle far more than the usual hobby-AUV range. I also modified the electronic components to make them more pressure-tolerant.

Next steps:
– Replace the cheap knockoff IMU that just died on me
– Move from I²C to SPI or UART for reliability
– Build out dead-reckoning
– Add waypoint navigation to the GUI
– Make it work both tethered and fully AUV (tethered already works decently)
– And if I save some cash, I want to start playing with a DVL to get more interesting autonomous missions going
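Since dead reckoning and waypoint navigation are both on the roadmap, here is a minimal sketch of the core dead-reckoning idea: integrating heading and speed into a position estimate. This is a hypothetical helper of my own, not the robot’s actual code.

```python
import math

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """Advance a 2D position estimate by one timestep.

    x, y: position (m); heading: yaw (rad); speed: forward speed (m/s)
    yaw_rate: angular rate from the IMU gyro (rad/s); dt: timestep (s)
    """
    # Integrate the gyro to update heading, then project speed onto it.
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```

The catch is that errors integrate: at a ~1 m/s cruise, even a small heading bias accumulates into meters of position error per minute, which is exactly why a DVL (a direct velocity measurement) makes longer autonomous missions so much more interesting.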

The GUI in the video is just NiceGUI interfaced with ROS2 for now, but it should work fine over Tailscale/Husarnet for remote control when tethered.

A few questions: did I really just get a bad cheap IMU (that purple BNO08x Chinese board)? It had horrible clock-stretching issues on the Raspberry Pi 4. I tried switching to SPI but couldn’t get the board to boot in SPI mode. Or would an Adafruit BNO085 on I²C also have horrible issues? I did modify the firmware config on the Pi to slow the clock, which helped, and used the extended-bus driver for bit-banged I²C, which helped even more. But yesterday the sensor stopped initializing at all.

Note: I’m not a roboticist, just a maker who will figure things out and loves the ocean.
Video on YouTube


Oh wow! This is super cool.

Are you aware of our marine robotics community group? I think they may be able to offer some advice.


Hey there! Thank you! I’m gonna check the community out. I have so many questions.

Hello @_bernardo. I am not very experienced with the Bosch sensors: what pull-ups are on the BNO08x? I have designed a custom IMU for the Jetson based on an Adafruit ICM-20948 breakout with success (the pull-ups on my design were not correctly sized, but I changed the I²C clock rate to 100 kHz and it works!). You can find the breakout here. I have modified an existing ICM-209x ROS2 wrapper driver (based on the Adafruit driver) here.

Unless you really need high-rate data, or your existing I²C bus is saturated with other sensors and peripherals, SPI is probably overkill. Even if you decide to go the visual-SLAM route, the existing SLAM solutions only need IMU data at ~200 Hz.

As far as dead reckoning/localization, have you looked at this paper which discusses different visual SLAM modalities?


Hello @Andrew_Brahim, thanks for all that info! I don’t have the IMU I used with me right now. It’s that purple BNO08x knockoff board you can find around. Getting a spec sheet was not easy; I only found pieces of data while hunting through sellers’ listings, so I don’t remember what pull-ups were present, if any. And even then, the instructions on how to set it to SPI did not work. It’s likely the board I had was actually different.

On the Pi 4 I had HORRIBLE clock-stretching issues. I also modified the firmware/config.txt on the Pi and found it worked best between 40 and 80 kHz; any faster and I would just get too many errors. Moving to a bit-banged I²C bus on the Pi 4 improved things significantly. But I did get a new Adafruit BNO085, which will be installed on the Pi 5 next, so I will have to test all the settings and reliability again, since the Pi 5 seems to handle I²C slightly differently. Is there a particular reason why you went with that specific chip? I was looking at the BNO ones because I thought that if the sensor does the sensor fusion for me, there is less room for user error in the future, and it offloads some of that workload from the Pi.
Building my own IMU is out of my skill set. One good thing about having to deal with a faulty IMU is that I had to make my ROS2 IMU node pretty resilient (not sure if efficient), so I learned to handle different types of errors: retries, restarts, reconfigures, etc., to keep the node alive regardless of hiccups with the sensor, and to approximate data for the little gaps in the stream while the sensor was recovering. I will soon make the full repo available so people can take a look at the nodes (at their own peril).
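A minimal sketch of that retry-and-gap-fill pattern, stripped of any ROS2 plumbing (the class name, retry count, and backoff are my own illustration, not the actual node):

```python
import time

class ResilientImuReader:
    """Wrap a flaky sensor read with retries, bus reinit, and gap filling.

    read_fn returns one sample or raises on a bus error;
    reset_fn reinitializes the sensor/bus.
    """

    def __init__(self, read_fn, reset_fn, max_retries=3):
        self.read_fn = read_fn
        self.reset_fn = reset_fn
        self.max_retries = max_retries
        self.last_sample = None

    def read(self):
        for attempt in range(self.max_retries):
            try:
                self.last_sample = self.read_fn()
                return self.last_sample
            except OSError:
                # Bus hiccup: reinitialize and retry with a short backoff.
                self.reset_fn()
                time.sleep(0.01 * (attempt + 1))
        # All retries failed: bridge the gap by returning the last known
        # sample so downstream consumers keep a continuous stream.
        return self.last_sample
```

Inside a ROS2 node, `read()` would be called from the publish timer, so a sensor hiccup degrades the data briefly instead of killing the node.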

It’s great to read that I should be fine with I²C for now. I do have the IMU on its own bus (the digital one for now), since I wanted the data stream to be as clean and unaffected as possible, considering that I want to build the dead-reckoning package later.

I have looked around at optical flow and thought of mixing it with dead reckoning for localization, but I have not looked at or experimented with any type of SLAM yet. I tend to think that optical approaches rely on good underwater conditions for tracking, which may not always be available, and adding optical processing to an already saturated SBC might not be the best use of the processing budget. So I’m not sure how practical it is in the field, in opportunity-cost terms. I have yet to discuss specific use cases with marine scientists and conservationists to see whether it makes sense for their missions.

I still don’t know if there are any conventions, best practices, etc. that I should know about before just opening the full repo to the public so the ROS community can take a look, but I’m planning to do so soon (as soon as I go back home to finish installing the new IMU and the new Pi 5 in the robot). But if people are willing to look at what I have, or are interested at all, I guess I could just do that now.

Thank you for stopping by and commenting!

Is there a particular reason why you went with that specific chip?

I use the ICM-20948 mainly because I have the most experience with its driver. The BNOs are perfectly fine, granted you can find a decent ROS2 wrapper around the driver.

I was looking at the bno ones because i thought if the sensor does the sensor fusion for me, well, less user error mistakes in the future and offloading some of that workload from the pi.

I usually do not rely on the onboard IMU fusion: I am not even sure it is compatible with existing ROS2 drivers at the moment (at least not for the ICM-20948). For IMU fusion in ROS2, you can take a look at the IMU tools repo. You can run the Madgwick filter and then verify the pose in RViz2.
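For intuition, here is a stripped-down complementary-filter step in the same spirit as the Madgwick filter mentioned above: gyro integration for short-term accuracy, corrected by the accelerometer’s gravity estimate for long-term stability. The real `imu_filter_madgwick` node does this properly on quaternions with magnetometer support; this toy version works on roll/pitch only and is my own illustration.

```python
import math

def complementary_update(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One step of a tilt complementary filter.

    roll, pitch: current angle estimates (rad)
    gyro: (gx, gy) angular rates (rad/s); accel: (ax, ay, az) in m/s^2
    alpha: trust in gyro integration vs. accelerometer tilt (0..1)
    """
    gx, gy = gyro
    ax, ay, az = accel
    # Integrate gyro rates: accurate short-term, but drifts over time.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt
    # Derive tilt from the gravity direction: noisy, but drift-free.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    # Blend: gyro dominates each step, accel slowly pulls drift back.
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    return roll, pitch
```

Running this at a couple hundred hertz on the Pi costs essentially nothing, which is one reason doing fusion host-side (instead of trusting the BNO’s onboard fusion) is a reasonable trade.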

After this, you can check out the robot_localization package. Granted, that repo was really geared toward ground robots that have either a GPS or wheel encoders. Optical flow could work, but it might run into the same murky-water issues as any camera sensor. But yes, optical flow is much less taxing on compute than SLAM, you are correct. If you can move up to the Jetson Orin boards you could leverage SLAM to the fullest extent and even do mapping, but there are also less taxing variants that focus mainly on localization and could work on the Pi 5.

If any of this doesn’t make sense, please feel free to DM me on here.


Thanks a lot Andrew, I’ll look into those repos. It definitely looks like I don’t need to reinvent the wheel in most cases.
I’ll send a DM here. Thanks for your help.

Hello @_bernardo! Really impressive work! Thanks for sharing it.

I had a few questions, mostly about depth and reliability:

  • When you say “pretty deep”, what would you personally consider a success: tens of meters, hundreds, or more?

  • How is the pressure compensation physically implemented (bladder, piston, or something else)?

  • How do you ensure the system is truly air-free when filling it? Do you use vacuum filling, or a simpler bleed/purge method?

  • As you go deeper, which parts do you expect to be the first weak points (likely to leak or fail): cable exits, moving seals, etc.?

  • Are the batteries also oil-filled and pressure-compensated, or kept in a separate enclosure?

One extra thought: a lot of deep-ocean research targets the hadal range (up to ~7,000–11,000 m). The tricky part usually isn’t the housing, but the interfaces: seals, joints, cable penetrations, and bonded assemblies. If you end up pushing depth testing, I’d be very curious how your design choices evolve around those interfaces.

Thanks again for posting this!


Hey there @AdeMBCH !

  • When you say “pretty deep”, what would you personally consider a success: tens of meters, hundreds, or more?

If I take similar approaches as a reference (Nic Bingham’s Deep Sea Challenger work, or Brennan T. Phillips’ Deep-I camera), then in a perfect world I would expect my approach to go as deep as the Deep-I camera (5.5 km). But I know things will not go as planned and there must be mistakes in my process, so when it’s ready I will progressively cycle-test the robot in the ocean. I can’t afford proper methodological testing at the moment.
I think the weakest part of my design is the differing contraction rates of the electronic components on the PCB. That can strain the connections, which could eventually be fatal. That’s part of the reason I modify my electronic components to make them more pressure-tolerant, but without testing, my estimations are just guesses.

  • How is the pressure compensation physically implemented (bladder, piston, or something else)?

I have a pretty good understanding of human physiology, so my approach was biomimetic. I designed compliant “organoids” that provide the pressure-compensating medium. The organoids would probably resemble bladders more than any of the other methods you mentioned; they are basically a mix of semi-rigid and flexible materials, designed with the goal of being able to shrink by about 35%.

  • How do you ensure the system is truly air-free when filling it? Do you use vacuum filling, or a simpler bleed/purge method?

Since the organoids should be able to shrink by 35%, I assume the loss of internal volume due to hydrostatic pressure and thermal contraction should not be an issue as long as the air pockets are not obscene. But I’m working on an air-bleeding mechanism in the organoids, since what I foresee being a bigger headache is the buoyancy difference from the surface down to depth… That’s still a work in progress.

  • As you go deeper, which parts do you expect to be the first weak points (likely to leak or fail): cable exits, moving seals, etc.?

I’m trying to design the whole system around pressure compensation, so seals, cable exits, even the moving seals should not be fighting hydrostatic pressure; they all get a version of the pressure-compensating medium. My concern right now is the modifications to the electronic components (swapping parts) and the choice of some potting materials: some are somewhat compliant, while others should be capable of substantial structural rigidity. But I can’t know their performance for sure until I start testing under actual hydrostatic pressure.

  • Are the batteries also oil-filled and pressure-compensated, or kept in a separate enclosure?

The batteries are packed in their own organoid, which makes them easier to swap and manage. In the future I will add some smarts to the packs to make management even easier.

One extra thought: a lot of deep-ocean research targets the hadal range (up to ~7,000–11,000 m). The tricky part usually isn’t the housing, but the interfaces: seals, joints, cable penetrations, and bonded assemblies. If you end up pushing depth testing, I’d be very curious how your design choices evolve around those interfaces.

I do not think my design would survive those depths :sweat_smile: (greater than ~6 km), not without some critical design changes (and more R&D cost). My goal is to first reach 1 km. Even if the robot can go deeper than that, the payloads’ depth rating becomes the weakest link in the chain. And since my goal is to help science and conservation teams that don’t have a lot of cash, designing today for super deep might end up creating a robot that can’t “sense” anything at those depths: the sensor payloads would still price those teams out of the missions, since any of those sensors cost many times what my robot would. My goal right now is to have the robot, CTD, and camera capable of reaching 1000 m, and to iterate progressively deeper from there.

Thank you for stopping by and asking these questions! It seems like you know the space. I don’t really get to nerd out about these things anywhere :sweat_smile: