Energy-Efficient Autonomous Navigation Benchmarking

March 14

My current idea:

  • Get raw data from sensors (RealSense, LiDAR) and publish them to ROS topics.
  • Run a policy node that uses the camera image, depth map, IMU data, and LiDAR scan to auto-navigate.
  • Attach a power-measuring device to the battery.
  • Sweep CPU frequency settings, experiment with offloading compute to an edge machine, and log the power data for each configuration.
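To turn the power logging into a benchmark number, I'm thinking of integrating the logged samples into energy per run. A minimal sketch of that, assuming the logger emits (timestamp, watts) pairs (the function name and sample format are my own, not a finished logger):

```python
# Sketch: integrate (t_seconds, power_watts) samples from the power
# monitor into total energy via trapezoidal integration.

def energy_joules(samples):
    """samples: list of (t_seconds, power_watts), sorted by time."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoid area per interval
    return total

# Example: a constant 10 W draw over 2 s is 20 J.
log = [(0.0, 10.0), (1.0, 10.0), (2.0, 10.0)]
print(energy_joules(log))  # 20.0
```

Dividing that by distance traveled would give joules per meter, which feels like the right headline metric for this project.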

What I did today: I spent a while untangling mixed dependencies between my Python virtual environment and the system-installed ROS 2 Jazzy. I also struggled to see my running ROS nodes from the CLI, only to realize I just needed to run ros2 daemon stop: the daemon restarts with a fresh discovery cache on the next ros2 command, and the nodes showed up.
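For future reference, the shape of the fix (the venv path is just an example, and the ros2 lines are shown commented out since they need a ROS install to run):

```shell
# Sketch, assuming the usual apt-based ROS 2 install: create the venv
# with --system-site-packages so the system's ROS Python packages
# (rclpy, sensor_msgs, ...) stay importable inside it.
python3 -m venv --system-site-packages /tmp/ros_venv
. /tmp/ros_venv/bin/activate
python -c "import sys; print(sys.prefix)"  # now points at the venv

# Discovery fix: the CLI daemon restarts with a fresh cache on the
# next ros2 command, so stopping it is enough.
# ros2 daemon stop
# ros2 node list
```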

After sorting out those dependency headaches, I successfully got the RealSense color image and depth map streaming.
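One detail worth noting for later: the RealSense depth topic publishes 16-bit values in millimeters (the usual D400-family depth scale is 0.001 m per unit), so downstream code needs a conversion to meters. A small sketch with a synthetic frame, since I haven't written the real subscriber callback yet:

```python
import numpy as np

# Sketch: convert a RealSense-style 16UC1 depth frame (millimeters)
# to float meters, masking zeros, which the camera uses for "no data".
DEPTH_SCALE = 0.001  # meters per unit; 0.001 is the usual D400 default

def depth_to_meters(depth_u16):
    depth_m = depth_u16.astype(np.float32) * DEPTH_SCALE
    depth_m[depth_u16 == 0] = np.nan  # mark invalid pixels
    return depth_m

# Synthetic 2x2 frame: 500 mm, 1500 mm, no-data, 3000 mm.
frame = np.array([[500, 1500], [0, 3000]], dtype=np.uint16)
print(depth_to_meters(frame))
```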

Armed with a better understanding of the setup, I got the RPLIDAR A2M8 running fairly quickly.
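The first thing the policy will probably want from the scans is the nearest obstacle. A sketch against the standard sensor_msgs/LaserScan layout (a ranges array plus angle_min/angle_increment), written with plain lists so it runs without ROS; the numbers are made up, with range_max matching the A2M8's roughly 12 m rating:

```python
import math

# Sketch: find the closest valid return in a LaserScan-like message.
# Field names follow sensor_msgs/LaserScan; the values are invented.

def nearest_obstacle(ranges, angle_min, angle_increment, range_min, range_max):
    """Return (distance_m, bearing_rad) of the closest valid return, or None."""
    best = None
    for i, r in enumerate(ranges):
        if not (range_min <= r <= range_max):
            continue  # skip invalid / out-of-range returns (inf, 0, ...)
        if best is None or r < best[0]:
            best = (r, angle_min + i * angle_increment)
    return best

# Four beams across the front; closest valid return is 0.8 m on beam 1.
hit = nearest_obstacle([2.5, 0.8, float('inf'), 1.2],
                       angle_min=-math.pi / 4,
                       angle_increment=math.pi / 6,
                       range_min=0.15, range_max=12.0)
print(hit)
```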


So right now, I’ve got the raw sensor data successfully publishing via ROS topics!
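With all four streams publishing, the policy node from the plan above mostly needs a way to act on the latest message from each topic. A minimal sketch of that cache, written without rclpy so it's just the logic (in the real node, each store() call would be a subscription callback, and the staleness threshold is a guess to be tuned):

```python
import time

# Sketch: a latest-value cache the policy node's subscription callbacks
# would fill; snapshot() only returns once every sensor is fresh.
SENSORS = ("color", "depth", "imu", "scan")
STALE_AFTER = 0.5  # seconds; an assumption, to be tuned

class SensorCache:
    def __init__(self):
        self._latest = {}  # name -> (timestamp, message)

    def store(self, name, msg, now=None):
        self._latest[name] = (now if now is not None else time.monotonic(), msg)

    def snapshot(self, now=None):
        """Return {name: msg} if every sensor is fresh, else None."""
        now = now if now is not None else time.monotonic()
        out = {}
        for name in SENSORS:
            entry = self._latest.get(name)
            if entry is None or now - entry[0] > STALE_AFTER:
                return None  # missing or stale -> don't act on it
            out[name] = entry[1]
        return out

cache = SensorCache()
for name in SENSORS:
    cache.store(name, f"{name}-frame", now=100.0)
print(cache.snapshot(now=100.1))  # all fresh -> dict of four messages
print(cache.snapshot(now=101.0))  # everything stale -> None
```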

Tomorrow’s Plan: dig into the hardware setup, specifically the power supply for the RC chassis.

Stay tuned!
