Install
openclaw skills install embodied-os

Unified operating system for controlling embodied intelligent robots with AI agents - the control hub bridging AI agents and the physical world.

This skill lets you control physical robots through AI agents using natural language commands. It transforms how AI interacts with physical reality: a unified operating system for embodied intelligent robots.
Activate this skill when the user wants to connect to, perceive with, or command a physical or simulated robot in natural language.
✅ Unified Robot Control - Single API for controlling any robot platform
✅ AI Agent Integration - Natural language control, like talking to ChatGPT
✅ Multi-Modal Perception - Vision, audio, and tactile sensing
✅ High-Level Actions - Navigation, manipulation, and interaction primitives
✅ Task Planning - AI-powered task decomposition and execution
✅ Safety System - Multi-layer safety guarantees for physical robots
clawhub install embodied-os
Option A: Python (PyPI)
pip install openclaw-embodied-os
Option B: Node.js (npm)
npm install openclaw-embodied-os
Option C: From Source
git clone https://github.com/ZhenRobotics/openclaw-embodied-os.git
cd openclaw-embodied-os
pip install -e .
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
Or create a .env file:
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
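If you prefer not to add a dependency, a `.env` file of this shape can be parsed with a few lines of Python - a minimal sketch; libraries such as python-dotenv handle quoting and edge cases more robustly:

```python
import os

def load_env(text):
    """Parse KEY=VALUE lines; blank lines and # comments are ignored."""
    loaded = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        # setdefault: real environment variables win over .env values
        os.environ.setdefault(key.strip(), value.strip())
    return loaded

env = load_env("ANTHROPIC_API_KEY=sk-ant-test\n# comment\nOPENAI_API_KEY=sk-test")
```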
from embodied_os import EmbodiedOS
# Initialize the OS
os = EmbodiedOS()
# Connect to a robot
robot = os.connect_robot(
    platform="simulated",
    model="test_robot"
)
# Control the robot
robot.actions.move_to(x=0.5, y=0.3, z=0.2)
# Detect objects
objects = robot.perception.detect_objects()
# Pick and place
if objects:
    robot.actions.pick(object_id=objects[0].id)
    robot.actions.place(position=(0.7, 0.4, 0.1))
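The detect-then-pick pattern above benefits from a small guard. Here is a hedged sketch, assuming `detect_objects()` returns objects with `.id` and `.label` attributes (the `label` attribute and the stand-in robot are illustrative assumptions, not part of the embodied_os API):

```python
from types import SimpleNamespace

def pick_first(robot, label=None):
    """Detect objects, optionally filter by label, and pick the first match."""
    objects = robot.perception.detect_objects()
    if label is not None:
        objects = [o for o in objects if getattr(o, "label", None) == label]
    if not objects:
        return None  # nothing matched; the caller decides what to do
    robot.actions.pick(object_id=objects[0].id)
    return objects[0]

# Minimal stand-in robot for illustration only
picked = []
fake = SimpleNamespace(
    perception=SimpleNamespace(
        detect_objects=lambda: [SimpleNamespace(id=1, label="cube")]
    ),
    actions=SimpleNamespace(pick=lambda object_id: picked.append(object_id)),
)
obj = pick_first(fake, label="cube")
```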
from embodied_os import AgentInterface
# Create AI agent
agent = AgentInterface(robot=robot, model="claude-sonnet-4")
# Natural language control
agent.execute("Pick up the red cube and place it in the box")
# Conversation
response = agent.chat("What do you see?")
print(response)
warehouse_robot = os.connect_robot(platform="mobile_manipulator")
agent.execute("""
Go to aisle 5, shelf B.
Pick up all items marked with red tags.
Transport them to the packing station.
Report the quantity and item IDs.
""")
care_robot = os.connect_robot(platform="service_robot")
agent.monitor_and_assist("""
Watch for the person calling for help.
If they ask for water, bring them a glass.
If they drop something, pick it up.
""")
lab_robot = os.connect_robot(platform="dual_arm_robot")
agent.execute("""
Set up the chemistry experiment:
1. Measure 50ml of solution A
2. Heat to 60 degrees
3. Add catalyst
4. Stir for 2 minutes
""")
# Navigation
robot.actions.navigate_to(x=2.0, y=1.5, theta=0)
# Manipulation
robot.actions.pick(object="cup")
robot.actions.place(location="table")
# Interaction
robot.actions.press_button(target="elevator")
robot.actions.open_door(handle_position=[1.0, 0.5, 1.0])
# High-level task
task = "Prepare coffee for the user"
# Automatic decomposition and execution
plan = robot.planner.create_plan(task)
robot.planner.execute(plan, monitor=True)
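The planner's internals aren't spelled out here. One plausible shape for a plan is an ordered list of primitive steps that an executor dispatches, stopping early when monitoring detects a failed step - a sketch of the idea, not the package's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # name of a primitive, e.g. "navigate_to", "pick"
    params: dict  # keyword arguments for that primitive

def execute_plan(steps, dispatch, monitor=True):
    """Run steps in order; stop early if a monitored step reports failure."""
    completed = []
    for step in steps:
        ok = dispatch[step.action](**step.params)
        completed.append(step.action)
        if monitor and ok is False:
            break
    return completed

# Hypothetical decomposition of "Prepare coffee for the user"
plan = [
    Step("navigate_to", {"x": 1.0, "y": 0.0}),
    Step("pick", {"object": "mug"}),
    Step("place", {"location": "coffee_machine"}),
]

log = []
def make_primitive(name):
    def run(**kwargs):
        log.append((name, kwargs))  # record the call instead of moving hardware
        return True
    return run

dispatch = {n: make_primitive(n) for n in ("navigate_to", "pick", "place")}
done = execute_plan(plan, dispatch)
```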
# Define safety constraints
robot.safety.set_workspace_bounds(
    x_min=0, x_max=2.0,
    y_min=-1.0, y_max=1.0,
    z_min=0, z_max=1.5
)
# Force limits
robot.safety.set_max_force(50.0)  # N
# Collision avoidance
robot.safety.enable_collision_avoidance()
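Conceptually, a workspace-bounds check rejects motion targets before they ever reach the robot. A minimal sketch of that idea, using the bounds from the example above (`within_workspace` and `safe_move` are hypothetical helpers, not embodied_os functions):

```python
def within_workspace(pos, bounds):
    """Check an (x, y, z) target against per-axis (min, max) bounds."""
    return all(lo <= p <= hi for p, (lo, hi) in zip(pos, bounds))

# Matches the workspace bounds set in the example above
BOUNDS = [(0.0, 2.0), (-1.0, 1.0), (0.0, 1.5)]

def safe_move(move_fn, x, y, z, bounds=BOUNDS):
    # Refuse out-of-bounds targets instead of sending them to the robot.
    if not within_workspace((x, y, z), bounds):
        raise ValueError(f"target {(x, y, z)} outside workspace bounds")
    return move_fn(x=x, y=y, z=z)
```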
┌─────────────────────────────────────────────────┐
│ AI Agent Layer │
│ (Claude, GPT, Custom Agents) │
└────────────────┬────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Embodied-OS Core │
│ ┌─────────┐ ┌─────────┐ ┌────────────┐ │
│ │ Natural │ │ Task │ │ Safety │ │
│ │Language │ │ Planner │ │ Validator │ │
│ └─────────┘ └─────────┘ └────────────┘ │
│ ┌─────────┐ ┌─────────┐ ┌────────────┐ │
│ │Perception│ │ Action │ │ State │ │
│ │ Module │ │Executor │ │ Manager │ │
│ └─────────┘ └─────────┘ └────────────┘ │
└────────────────┬────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Robot Abstraction Layer (RAL) │
│ Unified interface for all robot types │
└────────────────┬────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Physical Robots │
│ Manipulators | Mobile | Humanoids | Drones │
└─────────────────────────────────────────────────┘
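The Robot Abstraction Layer in the diagram implies a per-platform driver behind the unified interface. A minimal sketch of what such a driver contract could look like (hypothetical class names; the package's real RAL may differ):

```python
from abc import ABC, abstractmethod

class RobotDriver(ABC):
    """Hypothetical per-platform driver behind the unified interface."""

    @abstractmethod
    def move_to(self, x: float, y: float, z: float) -> bool:
        """Move the end effector to a Cartesian target; return success."""

    @abstractmethod
    def get_state(self) -> dict:
        """Return the robot's current state as a plain dict."""

class SimulatedDriver(RobotDriver):
    """Trivial in-memory implementation, useful for tests."""

    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)

    def move_to(self, x, y, z):
        self.pose = (x, y, z)
        return True

    def get_state(self):
        return {"pose": self.pose}
```

Registering one such driver per platform is what would let `os.connect_robot(platform=...)` hand back the same `robot.actions` / `robot.perception` surface regardless of hardware.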
Create a config.yaml file:
robot:
  platform: universal_robot
  model: UR5e
  endpoint: 192.168.1.100

perception:
  cameras:
    - name: head_camera
      type: realsense_d435
      resolution: [1280, 720]
      fps: 30

safety:
  workspace:
    x: [0, 2.0]
    y: [-1.0, 1.0]
    z: [0, 1.5]
  max_velocity: 0.5  # m/s
  max_force: 50      # N

agent:
  model: claude-sonnet-4
  api_key: ${ANTHROPIC_API_KEY}
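The `${ANTHROPIC_API_KEY}` placeholder implies environment-variable expansion at config load time. A hedged sketch of that substitution over a parsed YAML tree (the package may implement this differently):

```python
import os
import re

_ENV_VAR = re.compile(r"\$\{(\w+)\}")

def expand_env(value):
    """Replace ${NAME} placeholders in a string with os.environ values."""
    if not isinstance(value, str):
        return value
    return _ENV_VAR.sub(lambda m: os.environ.get(m.group(1), ""), value)

def expand_config(node):
    # Walk a parsed YAML tree (dicts, lists, scalars) and expand all strings.
    if isinstance(node, dict):
        return {k: expand_config(v) for k, v in node.items()}
    if isinstance(node, list):
        return [expand_config(v) for v in node]
    return expand_env(node)
```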
EmbodiedOS - Main interface to the system.
os = EmbodiedOS(config_path="config.yaml")
robot = os.connect_robot(platform, model, endpoint)
os.disconnect_all()
Robot - Represents a connected robot.
robot.actions.move_to(x, y, z)
robot.perception.get_image()
robot.state.get_joint_positions()
robot.safety.emergency_stop()
AgentInterface - AI agent control interface.
agent = AgentInterface(model="claude-sonnet-4", robot=robot)
agent.execute(task_description)
agent.chat(message)
See the examples/ directory:
basic_control.py - Basic robot control
agent_control.py - AI agent interaction

Run examples:
python examples/basic_control.py
python examples/agent_control.py
MIT License - see LICENSE file for details.
Embodied-OS - Making robots as easy to control as talking to a friend.