Creating an Agentic Skill Library for Robotics, Computer Vision, and Physical AI

Physical AI is exploding

At Telekinesis we strongly believe that VLAs, RL policies, and robotics foundation models will keep improving, but they will also become commoditized over time, much like LLMs today. On their own, they won’t make Physical / Embodied AI easy to build or scale.

We recently published an article sharing the perspective that emerged from more than two decades of research and from building real robotic systems in recent years. Our conclusion is that the real leverage won’t come from any single model, but from how capabilities are structured, composed, and reused.

That’s why we’re focusing on building an Agentic Skill Library: reusable, composable Skills for perception, motion, and control, with Physical AI Agents that reason over these Skills. The goal is to reduce the amount of integration glue between modules and AI reasoning, and make robotic systems easier to build, debug, scale, and evolve.

Article:

https://medium.com/@telekinesis-ai/creating-an-agentic-skill-library-for-robotics-computer-vision-and-physical-ai-1f3ecfbbbca0

We’ve also just opened early access to our first module, Vitreous, focused on 3D point cloud processing, and we’re looking for a small number of beta users willing to stress-test the approach and give honest feedback before we go further.

Automated relay soldering powered by Physical AI


If you’d like to explore:

Docs:

https://docs.telekinesis.ai/

Examples:

https://github.com/telekinesis-ai/telekinesis-examples

Join our Discord community:

https://discord.gg/cxTMdkMs

We’re trying to build this together with a community of engineers, researchers, and builders who care about making Physical AI practical and reusable. If you’re working in robotics, computer vision, or embodied AI and want to experiment or contribute a Skill, we’d love to collaborate.

About us:

Telekinesis grew out of the Intelligent Autonomous Systems Lab at TU Darmstadt, led by Prof. Jan Peters. Together with a small team of researchers and industry veterans—including former leaders from KUKA and Universal Robots—we’re working on a skill-based SDK for Physical AI.


Exciting update for those of you who need to create datasets for model training: we will soon be releasing Illusion, our Synthetic Data Generation Skills, with documentation at https://docs.telekinesis.ai/. We’re actively working on it!

Beyond Vision Language Action (VLA) Models: Moving Toward Agentic Skills for Zero-Error Physical AI: https://medium.com/@telekinesis-ai/beyond-vision-language-action-vla-models-moving-toward-agentic-skills-for-zero-error-physical-ai-45390f32c9d1

VLAs struggle with long-horizon tasks and out-of-distribution (OOD) edge cases. We discuss how Agentic Skills can address these for real-world manufacturing.

Mastering 3D Point Cloud Processing in Python with Vitreous: Filtering, 3D Detection, and 6D Pose Estimation: https://medium.com/@telekinesis-ai/mastering-3d-point-cloud-processing-in-python-with-vitreous-filtering-3d-detection-and-6d-pose-801e6a7271cd

Sharing an overview of Vitreous, a Telekinesis module for 3D point cloud processing in Python. The post covers filtering, clustering, segmentation, registration, 3D detection, and 6D pose estimation, with practical code examples for building perception pipelines in robotics applications such as pick and place, bin picking, and navigation.
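For readers new to these pipelines, here is a minimal NumPy sketch of one of the filtering steps mentioned above, voxel-grid downsampling. This is a generic illustration of the technique, not the Vitreous API; the function name and example points are ours:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a point cloud by averaging all points that fall into the same voxel."""
    # Map each point to an integer voxel index on a regular 3D grid.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel: 'inverse' maps each point to its voxel's row.
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True
    )
    # Sum the points in each voxel, then divide by the count to get centroids.
    sums = np.zeros((counts.shape[0], 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Four points; the first two share a voxel at 0.5 m resolution.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],
                [1.0, 1.0, 1.0],
                [2.0, 0.0, 0.0]])
down = voxel_downsample(pts, voxel_size=0.5)
print(down.shape)  # (3, 3): the two nearby points collapse to one centroid
```

In a real perception pipeline, a step like this typically runs before clustering or registration to cut the point count while preserving the cloud's geometry.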