The First Large-Scale Agentic Skill Library for Robotics, Computer Vision, and Physical AI

Physical AI is exploding

We are building the first large-scale Agentic Skill Library for Robotics, Computer Vision, and Physical AI.

Instead of betting on a single massive end-to-end policy, we believe the future of Physical AI will be built from reusable Skills, often known as Robotic Movement Primitives. To this end, the Skill Library provides a large collection of atomic skills for robotics and computer vision, covering:

  • 3D perception including detection, registration, filtering, and clustering
  • 2D perception including image processing, detection, and segmentation
  • Synthetic data generation
  • Model training tools
  • Motion planning, kinematics, and control
  • Physical AI agents (LLM agents and VLM agents)
  • Vision-Language Action Models (VLAs)
  • RL policies
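To make "atomic skill" concrete: each entry above is a small, composable operation with a clear input and output. The Telekinesis API itself is not shown here; as a minimal illustration of the shape of a 3D perception skill, here is a self-contained voxel-grid downsampling filter in NumPy (the function name `voxel_downsample` and its signature are ours, not the library's):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce an (N, 3) point cloud by averaging all points in the same voxel."""
    # Quantize each point to its integer voxel index.
    indices = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse, counts = np.unique(indices, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# 1000 random points in a unit cube, downsampled with 0.25 m voxels:
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
down = voxel_downsample(cloud, 0.25)
```

A skill like this has no hidden state and no framework dependencies, which is what makes it reusable across pipelines.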

On top of this low-level Skill Library sits a Physical AI Agentic layer for long-horizon reasoning. These Physical AI Agents, typically powered by Large Language Models (LLMs) or Vision Language Models (VLMs), are responsible for interpreting high-level text and image prompts and turning them into real-time task plans.
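The division of labor can be sketched in a few lines: a planner (in production, an LLM or VLM; stubbed out below) maps a prompt to an ordered list of skill names, and the agent executes them from a registry. Everything here — the registry, the skill names, the `plan`/`run` functions — is a hypothetical illustration of the architecture, not the Telekinesis API:

```python
from typing import Callable

# Hypothetical skill registry: names and callables are illustrative only.
SKILLS: dict[str, Callable[[str], str]] = {
    "detect_object": lambda name: f"detected {name}",
    "plan_grasp": lambda name: f"grasp planned for {name}",
    "execute_motion": lambda name: f"moved to {name}",
}

def plan(prompt: str) -> list[str]:
    """Stand-in for the LLM/VLM planner: maps a prompt to an ordered skill sequence.
    A real agent would query a language model here instead of keyword matching."""
    if "pick" in prompt.lower():
        return ["detect_object", "plan_grasp", "execute_motion"]
    return []

def run(prompt: str, target: str) -> list[str]:
    """Execute the planned skills in order and collect their results."""
    return [SKILLS[name](target) for name in plan(prompt)]

log = run("Pick up the red cup", "red cup")
```

The point of the pattern: the agent reasons over skill *names and contracts*, so new skills extend the agent's capabilities without retraining a monolithic policy.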

The library is designed so that roboticists, computer vision engineers, and hobbyists exploring Physical AI can access these capabilities without spending time integrating fragmented libraries, so they can focus on building solutions instead of patching together incompatible components.

To learn more (non-technical blog): https://medium.com/@telekinesis-ai/creating-an-agentic-skill-library-for-robotics-computer-vision-and-physical-ai-1f3ecfbbbca0

Resources

We’ve also just opened early access to our first modules and we’re looking for a small number of beta users who are willing to stress-test the approach and give honest feedback before we go further.

If you’d like to explore:

  1. Docs: https://docs.telekinesis.ai/
  2. Code examples: https://github.com/telekinesis-ai/telekinesis-examples

Join the Telekinesis Discord community

We’re trying to build this together with a community of engineers, researchers, and builders who care about making Physical AI practical and reusable. If you’re working in robotics, computer vision, or embodied AI and want to experiment or contribute a Skill, we’d love to collaborate.

About Telekinesis

Telekinesis grew out of the Intelligent Autonomous Systems Lab at TU Darmstadt, led by Prof. Jan Peters. Together with a small team of researchers and industry veterans, including former leaders from KUKA and Universal Robots, we’re working on a skill-based SDK for Physical AI.

Examples

Synthetic Data Generation Skills
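As a hedged, self-contained sketch of what a synthetic data generation skill produces (this is plain NumPy, not the library's API): render randomized foreground shapes onto a noisy background, and get the pixel-perfect ground-truth mask for free — the core appeal of synthetic data.

```python
import numpy as np

def synth_sample(size: int = 64, n_boxes: int = 3, seed: int = 0):
    """Generate one synthetic grayscale image and its ground-truth label mask.
    Each box gets a distinct integer label in the mask -- free annotations."""
    rng = np.random.default_rng(seed)
    image = rng.normal(0.1, 0.02, (size, size))          # noisy dark background
    mask = np.zeros((size, size), dtype=np.int32)
    for label in range(1, n_boxes + 1):
        x, y = rng.integers(0, size - 16, size=2)        # top-left corner
        w, h = rng.integers(8, 16, size=2)               # box dimensions
        image[y:y + h, x:x + w] = rng.uniform(0.5, 1.0)  # bright foreground box
        mask[y:y + h, x:x + w] = label                   # matching annotation
    return image, mask

image, mask = synth_sample()
```

A real skill would add domain randomization (textures, lighting, camera pose), but the contract is the same: sample in, (image, annotation) pair out.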

Segmentation Skills
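To illustrate the segmentation category with something runnable (again, an assumption-laden sketch in plain NumPy, not the library's API): threshold an image, then label 4-connected foreground regions with a breadth-first flood fill.

```python
import numpy as np
from collections import deque

def segment(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold the image, then label each 4-connected foreground region."""
    fg = image > threshold
    labels = np.zeros(image.shape, dtype=np.int32)
    current = 0
    for start in zip(*np.nonzero(fg)):
        if labels[start]:
            continue  # already part of an earlier region
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:  # BFS flood fill over 4-neighbours
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and fg[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels

# Two bright squares on a dark background:
img = np.zeros((32, 32))
img[2:8, 2:8] = 1.0
img[20:26, 20:26] = 1.0
seg = segment(img)
```

Learned segmentation skills replace the threshold with a model, but expose the same contract: image in, label map out.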

Robotic Skills
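For the robotic skill category, a classic kinematics primitive makes a good stand-in (link lengths and function names are illustrative, not the library's API): closed-form forward and inverse kinematics for a 2-link planar arm, the kind of atomic building block a motion-planning skill composes.

```python
import math

L1, L2 = 0.4, 0.3  # link lengths in metres (illustrative values)

def forward(theta1: float, theta2: float) -> tuple[float, float]:
    """End-effector position of a 2-link planar arm from joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x: float, y: float) -> tuple[float, float]:
    """Closed-form IK (elbow-up solution); raises if the target is unreachable."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Round trip: IK to a target, then FK back to verify.
t1, t2 = inverse(0.5, 0.2)
x, y = forward(t1, t2)
```

The round trip is the natural unit test for any kinematics skill: `forward(inverse(p))` should reproduce `p` to numerical precision.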