AgenticROS: Connects ROS with OpenClaw, Claude (Code, Desktop, Dispatch), and Google Gemini


I’ve been working on a new open-source project called AgenticROS. It adds ROS 2 support to multiple AI agents: OpenClaw, NemoClaw, Claude Code, Claude Desktop & Dispatch, Google Gemini, and others.

AgenticROS supports four deployment modes depending on where your AI agent runs relative to the robot.

Mode A - Same Machine

Your AI agent runs on the robot’s computer. The plugin talks to ROS 2 via local DDS — no network transport. Best for: single-robot setups, embedded deployments, edge devices.

Mode B - Local Network

Your AI agent (e.g. OpenClaw, NemoClaw, Google Gemini, Claude Code, or Claude Desktop / Dispatch) runs on a separate machine (laptop, server). The adapter connects to rosbridge_server on the robot via WebSocket over the LAN. Best for: development, testing, multi-robot labs.
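As an illustration of Mode B, here is a sketch of the rosbridge v2 protocol frames such an adapter could send over the WebSocket. The `advertise` and `publish` ops and the `geometry_msgs/msg/Twist` type string are standard rosbridge/ROS 2 conventions; the 0.2 m/s speed is purely illustrative, not an AgenticROS default.

```typescript
// Sketch of the rosbridge protocol frames a Mode B adapter would send.
type Twist = {
  linear: { x: number; y: number; z: number };
  angular: { x: number; y: number; z: number };
};

// Advertise /cmd_vel before publishing (rosbridge "advertise" op).
const advertise = JSON.stringify({
  op: "advertise",
  topic: "/cmd_vel",
  type: "geometry_msgs/msg/Twist",
});

// Publish a forward velocity command (rosbridge "publish" op).
const msg: Twist = {
  linear: { x: 0.2, y: 0, z: 0 }, // 0.2 m/s forward (illustrative)
  angular: { x: 0, y: 0, z: 0 },
};
const publish = JSON.stringify({ op: "publish", topic: "/cmd_vel", msg });

// In a real adapter these strings go over a WebSocket to the robot,
// e.g. new WebSocket("ws://<robot-ip>:9090") for a default rosbridge_server.
console.log(advertise);
console.log(publish);
```

The same frame format works unchanged in Mode A (over localhost) since rosbridge does not care where the client runs.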

Mode C - Cloud / Remote

Your AI agent runs on a cloud server while the robot sits behind NAT. WebRTC with STUN/TURN handles NAT traversal, and an AgenticROS Agent Node runs on the robot. Best for: production, remote operations, fleet management.

Mode D - Zenoh

The robot uses ROS 2 with the Zenoh RMW. The plugin connects via zenoh-ts (WebSocket to a Zenoh router); no rosbridge required. Best for: robots already on Zenoh, low-latency pub/sub.

SKILLS

AgenticROS also has an open plugin Skills architecture. The AgenticROS plugin loads optional skill packages at startup. Each skill registers its own tools (e.g. follow_robot, follow_me_see) and reads config from config.skills.<skillId>. You install skills by adding them to skillPackages (npm names) or skillPaths (local directories) in the plugin config, then restarting the gateway.
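As a minimal sketch, a plugin config enabling a skill could look like this. The `skillPackages`, `skillPaths`, and `skills` keys mirror the description above, and the package name comes from the Follow Me skill below; the nested option names and values are invented placeholders, not real AgenticROS settings.

```json
{
  "skillPackages": ["agenticros-skill-followme"],
  "skillPaths": ["./skills/my-local-skill"],
  "skills": {
    "followme": {
      "maxSpeed": 0.5,
      "useVlm": false
    }
  }
}
```

After editing the config, restart the gateway so the plugin reloads its skill packages.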

A curated list of AgenticROS skills helps robotics developers discover available skills and add new ones via pull request. The Follow Me skill (agenticros-skill-followme) is the reference implementation: depth-based person following with optional Ollama/VLM support, turn-to-follow, and search-when-lost behaviors. Use it as a template to build and share your own skills.
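To make the skill architecture concrete, here is a hypothetical skeleton of a skill package modeled on the description above. The actual AgenticROS plugin interface may differ; names like `register`, `ToolSpec`, and `SkillContext` are assumptions for illustration, not the real API.

```typescript
// Hypothetical shape of an AgenticROS skill package (names are assumptions).
interface ToolSpec {
  name: string;        // tool id the AI agent can call, e.g. "follow_robot"
  description: string; // shown to the agent so it knows when to use the tool
  handler: (args: Record<string, unknown>) => Promise<string>;
}

interface SkillContext {
  config: Record<string, unknown>; // loaded from config.skills.<skillId>
}

// Each skill exposes an id plus a register() that returns its tools;
// the plugin would call register() for every loaded skill at startup.
const skill = {
  id: "hello",
  register(ctx: SkillContext): ToolSpec[] {
    return [
      {
        name: "say_hello",
        description: "Greet a person the robot sees",
        handler: async (args) => `Hello, ${String(args.name ?? "world")}!`,
      },
    ];
  },
};
```

A package like this would then be listed under skillPackages (if published to npm) or skillPaths (if kept locally) so the gateway picks it up on restart.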

Try It

Send a message to your robot (via OpenClaw, NemoClaw, Claude Code, Claude Desktop / Dispatch, Google Gemini, or any supported agent):

“Move forward 1 meter” — publishes velocity to /cmd_vel

“Follow Me” — skill to follow a person (depth + optional VLM)

“Navigate to the kitchen” — sends a Nav2 goal

“What do you see?” — captures a camera frame

“Check the battery” — reads /battery_state

/estop — emergency stop (bypasses AI)
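As a sketch of how a command like “Move forward 1 meter” could be grounded in a /cmd_vel message: pick a nominal speed, derive the duration, publish the Twist for that long, then stop. The 0.2 m/s speed and the `planForward` helper are illustrative assumptions, not AgenticROS internals.

```typescript
// Turn a distance request into a timed velocity plan (illustrative only).
function planForward(distanceM: number, speedMps = 0.2) {
  const durationS = distanceM / speedMps; // time needed to cover the distance
  const twist = {
    linear: { x: speedMps, y: 0, z: 0 },  // geometry_msgs/msg/Twist fields
    angular: { x: 0, y: 0, z: 0 },
  };
  return { twist, durationS };
}

// "Move forward 1 meter": publish twist to /cmd_vel for durationS seconds,
// then publish a zero twist to stop. 1 m at 0.2 m/s → 5 s.
const plan = planForward(1.0);
console.log(plan.durationS);
```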

DEMO VIDEO


This is amazing Chris. Can we connect? I am working on something similar for my research.


Interested in this as well, will give it a try


:folded_hands: Yes, love to connect!

chris.matthieu@realsenseai.com


Awesome! Please send feedback. :heart::robot: