I’ve been exploring how AI agents are being developed and used to interact with robotic systems, especially within ROS and ROS 2 environments. There are some exciting projects in the community — from NASA JPL’s open‑source ROSA agent that lets you query and command ROS systems with natural language, to community efforts building AI agents for TurtleSim and TurtleBot3 using LangChain and other agent frameworks.
I’d love to start a discussion around AI agent design, implementation, and real‑world use in robotics:
- **Which AI agent frameworks have you experimented with for robotics?** For example, have you used ROSA, RAI, LangChain‑based agents, or custom solutions? What worked well, and what limitations did you encounter?
- **How do you handle multi‑modal inputs and outputs in your agents?** (e.g., combining natural language, sensor data, and robot commands)
- **What strategies do you use for planning and action execution with your agent?** Do you integrate RL policies, behavior trees, skill libraries, or other reasoning approaches?
- **What tooling or libraries do you recommend for scalable agent performance?** Have you found certain profiling tools, API integrations, or frameworks particularly helpful?
- **What are the biggest challenges you’ve faced when deploying your AI agent on real robots?** (e.g., latency, safety, unexpected robot behavior, or integration issues)
- **Are there any resources, examples, or papers that helped you with agent development?** I’m keen to share references and compare experiences.
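To make the "skill libraries" question concrete, here's a minimal Python sketch of the pattern I have in mind: a registry of named skills with a validating dispatcher that an LLM tool-calling agent sits on top of. All names here are hypothetical for illustration; this isn't the API of ROSA or any particular framework.

```python
from dataclasses import dataclass


@dataclass
class RobotCommand:
    """A structured command the execution layer can turn into ROS messages."""
    skill: str
    args: dict


# Each skill is a plain function; the agent's job is only to pick one
# and fill in its arguments from the user's natural-language request.
SKILLS = {
    "move_forward": lambda distance_m=1.0: RobotCommand("move_forward", {"distance_m": distance_m}),
    "rotate": lambda angle_deg=90.0: RobotCommand("rotate", {"angle_deg": angle_deg}),
    "stop": lambda: RobotCommand("stop", {}),
}


def dispatch(skill_name: str, **kwargs) -> RobotCommand:
    """Validate the agent's chosen skill before anything touches the robot."""
    if skill_name not in SKILLS:
        raise ValueError(f"Unknown skill: {skill_name}")
    return SKILLS[skill_name](**kwargs)
```

The appeal of this shape, for me, is that the LLM never emits raw robot commands: it can only select from a whitelisted skill set, which keeps a safety boundary between language-level reasoning and actuation. Curious whether others structure it differently.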
Let’s share our experiences and recommendations — whether you’re just starting to explore AI agents or you’ve already built something that interacts with real robotic systems!