Artifex: AI-generated URDFs with visual error checking - seeking feedback

I’ve been building a desktop tool for robot description generation and wanted to get feedback from people who actually work with URDFs regularly.

## What it does

1. Describe a robot in natural language

2. The AI generates the URDF via structured output validated against Zod schemas (schema sketch after this list)

3. A 3D viewport renders the robot using React Three Fiber (viewport sketch below)

4. The AI takes a screenshot of the render via an MCP tool call (tool sketch below)

5. The AI analyzes the image for errors (joint axes, scale, orientation issues)

6. The AI fixes the errors and iterates

7. Export to a colcon-ready ROS 2 package
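
To make step 2 concrete, here's a trimmed-down sketch of the kind of Zod schema this relies on. Field names are simplified for illustration, not the app's full schema:

```typescript
import { z } from "zod";

// Simplified illustration of schema-validated structured output.
// Field names are illustrative, not the app's actual schema.
const Vec3 = z.tuple([z.number(), z.number(), z.number()]);

const Link = z.object({
  name: z.string().min(1),
  geometry: z.discriminatedUnion("type", [
    z.object({ type: z.literal("box"), size: Vec3 }),
    z.object({
      type: z.literal("cylinder"),
      radius: z.number().positive(),
      length: z.number().positive(),
    }),
    z.object({ type: z.literal("sphere"), radius: z.number().positive() }),
  ]),
});

const Joint = z.object({
  name: z.string().min(1),
  type: z.enum(["revolute", "continuous", "prismatic", "fixed"]),
  parent: z.string(),
  child: z.string(),
  origin: z.object({ xyz: Vec3, rpy: Vec3 }),
  axis: Vec3.optional(), // meaningful for every type except "fixed"
});

export const Robot = z.object({
  name: z.string().min(1),
  links: z.array(Link).min(1),
  joints: z.array(Joint),
});

// A malformed model fails here, before any URDF XML is emitted:
// Robot.parse(modelFromLlm);
```

Structural problems (a missing axis, a negative radius, an unknown joint type) get rejected before any XML exists, so the visual loop only has to catch geometric and semantic mistakes.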
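
For step 3, a minimal sketch of how such a viewport can be built; I'm using the open-source urdf-loader package here as a stand-in, with mesh resolution omitted so only primitive geometry renders:

```tsx
import { Canvas } from "@react-three/fiber";
import { useMemo } from "react";
import URDFLoader from "urdf-loader";

// Parse the generated URDF text and hand the resulting Object3D to the scene.
function Robot({ urdfText }: { urdfText: string }) {
  const robot = useMemo(() => new URDFLoader().parse(urdfText), [urdfText]);
  return <primitive object={robot} />;
}

export function Viewport({ urdfText }: { urdfText: string }) {
  return (
    <Canvas camera={{ position: [2, 2, 2] }}>
      <ambientLight intensity={0.5} />
      <directionalLight position={[5, 5, 5]} />
      <Robot urdfText={urdfText} />
    </Canvas>
  );
}
```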
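
And for step 4, a sketch of the screenshot tool using the TypeScript MCP SDK, assuming the Electron main process hosts the MCP server and the first window owns the viewport (server and tool names are placeholders):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { BrowserWindow } from "electron";

const server = new McpServer({ name: "artifex-viewport", version: "0.1.0" });

// Expose the current render to the model as a PNG. The window lookup
// assumes a single-window app; the real code is more careful.
server.tool(
  "screenshot_viewport",
  "Capture the current 3D render for visual inspection",
  async () => {
    const win = BrowserWindow.getAllWindows()[0];
    const image = await win.webContents.capturePage(); // Electron NativeImage
    return {
      content: [
        {
          type: "image" as const,
          data: image.toPNG().toString("base64"),
          mimeType: "image/png",
        },
      ],
    };
  },
);
```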

## The part I’m unsure about

The visual verification loop: having the AI “look” at the render to catch mistakes. In my testing it catches things like cameras mounted upside-down or wheel axes pointing the wrong way. But I’m not sure if this is genuinely useful or just a gimmick.
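
Concretely, the loop itself is tiny; this sketch uses declared placeholders for the real implementations:

```typescript
// Placeholder signatures; the real implementations live elsewhere.
declare function generateUrdf(prompt: string): Promise<string>;
declare function renderAndScreenshot(urdf: string): Promise<Buffer>;
declare function critiqueRender(png: Buffer, urdf: string): Promise<string[]>;
declare function applyFixes(urdf: string, issues: string[]): Promise<string>;

// Generate, render, let the model critique its own render, patch, repeat.
async function buildRobot(prompt: string, maxIterations = 3): Promise<string> {
  let urdf = await generateUrdf(prompt);
  for (let i = 0; i < maxIterations; i++) {
    const png = await renderAndScreenshot(urdf);    // steps 3-4
    const issues = await critiqueRender(png, urdf); // step 5: axes, scale, orientation
    if (issues.length === 0) break;                 // render looks right; stop
    urdf = await applyFixes(urdf, issues);          // step 6: patch and retry
  }
  return urdf;
}
```

The open question is whether the critique step is reliable enough to justify the extra model round trips.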

## Questions

- What URDF errors do you actually encounter that are annoying to debug?

- Is the “AI checking its own work visually” concept useful or overkill?

- Any workflow gaps that would make this more practical?

## Links

- **Download:** Artifex - AI-Powered Robot Design IDE | Kindly Robotics (macOS, Windows, Linux; free)

- Built with Electron and React Three Fiber, with multi-LLM support (OpenAI, Anthropic, Google)

**Disclosure:** I’m the developer. This is a commercial project but the tool is free to download. Would appreciate honest feedback, including “this doesn’t solve a real problem.”
