3we: AI-First Python API for Mobile Robot Navigation (Open Source, $300 BOM)

Hi everyone,

I’d like to share **3we** — an open-source platform I’ve been building that provides an AI-First Python API on top of ROS2/Nav2, targeting Embodied AI researchers who want to focus on algorithms rather than ROS2 infrastructure.

## The Problem

AI researchers (especially those working with VLMs/VLAs) often want to deploy models on real robots but face:

- Steep ROS2 learning curve (launch files, topics, services, actions)

- No clean path from simulation to hardware

- Existing platforms are either too expensive (TurtleBot 4: $1,200+) or simulation-only (Habitat, Isaac Lab)

## What 3we Does

```python
from threewe import Robot

async with Robot(backend="gazebo") as robot:
    image = robot.get_camera_image()     # (H,W,3) uint8
    scan = robot.get_lidar_scan()        # LaserScan
    await robot.move_to(x=5.0, y=3.0)    # Nav2 under the hood
```

Change `backend="gazebo"` to `backend="real"` and the same code runs on physical hardware; the ROS2/Nav2 stack is fully abstracted away.

**Four backends with identical API:**

- `mock` — zero-dependency 2D kinematics (no ROS2 needed, runs anywhere)

- `gazebo` — Gazebo Harmonic with full physics

- `isaac_sim` — NVIDIA Isaac Sim for GPU-accelerated RL training

- `real` — Physical hardware via ROS2 topics
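For intuition about what the `mock` backend provides, here is a self-contained sketch of zero-dependency 2D kinematics for a holonomic base. This is purely illustrative (the class and method names are assumptions, not the actual mock-backend code):

```python
import math

# Illustrative zero-dependency 2D kinematics for a holonomic (mecanum) base,
# in the spirit of the `mock` backend. Names and parameters are hypothetical.
class MockMecanumBase:
    def __init__(self, x=0.0, y=0.0, theta=0.0):
        self.x, self.y, self.theta = x, y, theta

    def step(self, vx, vy, wz, dt):
        """Integrate a body-frame velocity command (m/s, rad/s) for dt seconds."""
        # Rotate the body-frame velocity into the world frame, then integrate.
        c, s = math.cos(self.theta), math.sin(self.theta)
        self.x += (vx * c - vy * s) * dt
        self.y += (vx * s + vy * c) * dt
        self.theta = (self.theta + wz * dt) % (2 * math.pi)
        return self.x, self.y, self.theta

base = MockMecanumBase()
base.step(vx=1.0, vy=0.0, wz=0.0, dt=1.0)   # drive forward 1 m
base.step(vx=0.0, vy=0.5, wz=0.0, dt=1.0)   # strafe left 0.5 m (mecanum can)
```

Because a mecanum base is holonomic, sideways velocity integrates just like forward velocity, which is why a backend this small can still exercise navigation code paths.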

## Architecture

```
┌─────────────────────────────────────────┐
│ AI-First Python API (user layer)        │ ← Researchers write code here
├─────────────────────────────────────────┤
│ 3we-core (middleware)                   │ ← Backend dispatch, sensor fusion
├─────────────────────────────────────────┤
│ ROS2 / micro-ROS (infrastructure)       │ ← Transparent to users
│ Nav2, slam_toolbox, ESP32 drivers       │
└─────────────────────────────────────────┘
```

This is NOT a replacement for ROS2 — it’s a layer on top that makes ROS2 accessible to ML researchers while preserving full ROS2 compatibility for roboticists who want low-level access.
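To make the layering concrete, here is a minimal sketch of the kind of backend-dispatch pattern the middleware layer implies. The class and function names are illustrative assumptions, not the actual `3we-core` internals:

```python
from abc import ABC, abstractmethod

# Hypothetical backend-dispatch pattern: one abstract interface, several
# interchangeable implementations, selected by name at construction time.
class Backend(ABC):
    @abstractmethod
    def move_to(self, x: float, y: float) -> bool: ...

class MockBackend(Backend):
    """Teleports instantly; stands in for a zero-dependency 2D backend."""
    def __init__(self):
        self.pose = (0.0, 0.0)

    def move_to(self, x, y):
        self.pose = (x, y)
        return True

BACKENDS = {"mock": MockBackend}   # gazebo/isaac_sim/real would register here

def make_backend(name: str) -> Backend:
    # Single dispatch point: a user-facing Robot class delegates to this.
    return BACKENDS[name]()

robot = make_backend("mock")
ok = robot.move_to(5.0, 3.0)
```

The point of the pattern is that user code only ever touches the abstract interface, so swapping `"mock"` for a simulator or hardware backend changes nothing above this line.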

## VLM-Controlled Navigation

The killer feature for AI researchers: GPT-4o (or any OpenAI-compatible VLM) can directly control the robot through natural language:

```python
async with Robot(backend="gazebo") as robot:
    result = await robot.execute_instruction(
        "find the red bottle and stop near it"
    )
    print(f"Success: {result.success}")
```

Internally this runs a perception-action loop: capture image → send to VLM → parse JSON action → execute → repeat until done. Works with GPT-4o, Qwen-VL, or local LLaVA.
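The loop described above can be sketched as follows. Here `query_vlm`, `FakeRobot`, and the JSON action schema are stand-ins I'm assuming for illustration, not the actual implementation:

```python
import json

def query_vlm(image, instruction):
    """Stand-in for a real call to an OpenAI-compatible VLM endpoint.
    A real implementation would send the image plus instruction and
    receive a JSON action back."""
    return json.dumps({"action": "stop", "done": True})

class FakeRobot:
    """Minimal stand-in for the Robot API used by the loop."""
    def get_camera_image(self):
        return None              # a real robot returns an (H,W,3) array
    def apply(self, action):
        pass                     # a real robot would execute the action

def execute_instruction(robot, instruction, max_steps=20):
    # Perception-action loop: capture image -> VLM -> parse JSON -> execute.
    for _ in range(max_steps):
        image = robot.get_camera_image()
        action = json.loads(query_vlm(image, instruction))
        robot.apply(action)
        if action.get("done"):
            return True          # VLM reports the task is complete
    return False

success = execute_instruction(FakeRobot(), "find the red bottle and stop near it")
```

A `max_steps` cap of this kind matters in practice: a VLM that never emits a done signal would otherwise loop forever.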

## Hardware ($300 BOM)

Fully open reference hardware under CERN-OHL-P v2:

| Component | Selection |
|-----------|-----------|
| Compute | Raspberry Pi 5 (8GB) |
| AI Accelerator | Hailo-8L (13 TOPS) |
| MCU | ESP32-S3 + micro-ROS |
| LiDAR | LD06 (360°, 2D) |
| IMU | BNO055 (9-axis) |
| Drive | 4× N20 motors + Mecanum wheels + DRV8833 |
| Safety | Dual-channel relay (ISO 13850 E-stop) |

KiCad 8 PCB files, DXF mechanical drawings, and assembly docs all included.

## ROS2 Integration Details

For ROS2 developers who want to know what’s under the hood:

- **Navigation**: Nav2 with DWB planner, parameters tuned for mecanum kinematics

- **SLAM**: slam_toolbox (online async) with LD06

- **MCU bridge**: micro-ROS on ESP32-S3 via USB-C serial transport

- **Sensor fusion**: robot_localization EKF (IMU + wheel odometry)

- **Launch**: Composable nodes, configurable via YAML profiles

You can always drop down to raw ROS2 topics/services if needed — the Python API doesn’t hide or lock you out.
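As an illustration of what a YAML profile for this stack could contain (the key names below are assumptions for illustration, not the shipped schema):

```yaml
# Hypothetical profile; keys are illustrative, not the actual schema.
profile: indoor_mecanum
navigation:
  planner: dwb              # Nav2 DWB local planner
  holonomic: true           # mecanum base can strafe
slam:
  backend: slam_toolbox
  mode: online_async
fusion:
  ekf: robot_localization   # fuses IMU + wheel odometry
mcu:
  transport: serial         # micro-ROS over USB-C
```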

## Benchmark Suite

7 standardized scenes with reproducible baselines:

```bash
threewe benchmark run --task pointnav --scene office_v2 --episodes 100
```

Gymnasium-compatible environments for RL:

```python
import gymnasium as gym

env = gym.make("3we/Navigation-v1", scene="office_v2", backend="mock")
```
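Typical RL-style usage would follow the standard Gymnasium reset/step loop. Since `threewe` may not be installed, this self-contained sketch uses a stub environment that mimics the Gymnasium step/reset contract (the stub's observations and rewards are invented for illustration):

```python
import random

class StubNavEnv:
    """Minimal stand-in following the Gymnasium reset/step contract.
    With threewe installed you would use gym.make("3we/Navigation-v1", ...)."""
    def reset(self, seed=None):
        random.seed(seed)
        return {"pose": (0.0, 0.0)}, {}              # observation, info
    def step(self, action):
        obs = {"pose": (random.random(), random.random())}
        reward, terminated, truncated = -0.01, False, False
        return obs, reward, terminated, truncated, {}

env = StubNavEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(10):                                  # fixed-horizon rollout
    action = random.choice([0, 1, 2, 3])             # e.g. forward/left/right/stop
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
```

The `terminated`/`truncated` split follows the current Gymnasium API, so a policy trained against this loop shape should drop into the real environments unchanged.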

## Demo

![Navigation Demo](https://img.xuexiao.eu.org/1778836342754-19b737g.gif)

Autonomous point-to-point navigation in office_v2 scene with 360° LiDAR visualization.

## Links

- **GitHub**: https://github.com/telleroutlook/3we-robot-platform

- **Documentation**: https://3we.org

- **Paper**: *3we*

- **PyPI**: `pip install threewe`

Feedback welcome — especially from Nav2 users on whether the API abstraction makes sense, and from AI researchers on what’s missing for their workflows.

Software: Apache 2.0 | Hardware: CERN-OHL-P v2 | Docs: CC-BY-SA 4.0

Why do people keep posting unformatted markdown (copy and paste from AI it seems) and not bother fixing it?


I don’t know, but it is getting really annoying. I can tell they are AI generated because they all have some title like, “[Release] Some Title.” In this case I removed the unnecessary bracketed text. I don’t know why AI always inserts a bracketed tag in every title. We have built-in tags, please use them.
