VectorClaw MCP

MCP tools for Anki Vector: speech, motion, camera, sensors, and automation workflows.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 226 · 0 current installs · 0 all-time installs
by robodan (@danmartinez78)
MIT-0
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name, description, and runtime instructions align: controlling a Vector robot via MCP reasonably requires python3, a VECTOR_SERIAL, an SDK config, and launching a Python MCP server. The listed tools (speak, drive, camera, sensors) match that purpose.
Instruction Scope
SKILL.md only instructs robot-related actions: installing the package, configuring the Vector SDK (~/.anki_vector/sdk_config.ini), setting VECTOR_SERIAL, and adding an MCP server entry. It does not ask to read unrelated system files or exfiltrate data.
Install Mechanism
Installation is via pip (vectorclaw-mcp). Using PyPI is expected for a Python MCP package but carries the normal risk that arbitrary code will be installed; the skill bundle itself contains no code to inspect, so the package should be audited (or installed in an isolated environment) before use.
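One way to do that audit, sketched here under two assumptions: a POSIX shell, and that the PyPI project name matches the one given in SKILL.md. The commands fetch the source into a throwaway virtualenv without installing it, so the code can be read first:

```shell
# Create an isolated environment so nothing touches the system Python.
python3 -m venv /tmp/vectorclaw-audit
. /tmp/vectorclaw-audit/bin/activate

# Download only (no install); the || keeps the shell usable if the
# project name is wrong or the network is unavailable.
pip download vectorclaw-mcp --no-deps --no-binary :all: -d /tmp/vectorclaw-audit/src \
  || echo "download failed - check the project name on PyPI"

# Read whatever arrived before running any 'pip install'.
ls /tmp/vectorclaw-audit/src 2>/dev/null || echo "nothing downloaded yet"
```

If the source looks sound, installing inside this same venv (rather than globally) keeps the blast radius small.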
Credentials
The only environment variable required is VECTOR_SERIAL, which is appropriate for addressing a particular robot. The SDK config path is expected for Vector SDK usage. No unrelated credentials or broad secrets are requested.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It relies on launching its own MCP server process (normal for this use). Autonomous invocation is allowed by default but not combined with other concerning flags.
Assessment
This skill appears to do what it claims (control a Vector robot) and requests only the robot serial and an SDK config. However, the skill bundle does not include the actual Python package (vectorclaw-mcp); SKILL.md instructs you to pip-install it. Before installing or running the MCP server:

  1. Review the package source (the linked GitHub repo or the PyPI project) to inspect the code.
  2. Install in an isolated environment (virtualenv, container, or VM) if you don't want arbitrary code on your system.
  3. Treat the SDK config (~/.anki_vector/sdk_config.ini) as sensitive; it likely contains your robot's auth info.
  4. Only use this if you own/trust the Vector hardware and Wire-Pod setup.

These checks are why confidence is medium rather than high.

Like a lobster shell, security has layers: review code before you run it.

Current version: v1.0.4
Tags: hardware · latest · mcp · robotics · vector

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

VectorClaw MCP

VectorClaw connects OpenClaw to an Anki / Digital Dream Labs Vector robot through MCP. It provides practical robot control primitives for speech, movement, camera capture, and status/sensor reads.

What you can do

  • Speak text with vector_say
  • Move and position with vector_drive, vector_head, vector_lift
  • Capture camera images with vector_look and vector_capture_image
  • Read robot state with vector_status, vector_pose, vector_proximity_status, vector_touch_status
  • Build look → reason → act workflows

Vision requirement for look → reason → act

For look → reason → act workflows, the agent must either be vision-capable itself (e.g., a VLM) or have access to a separate vision model or image-interpretation tool that can analyze camera images before actions are chosen.
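As an illustration, such a loop might look like the following. The `mcp_call` stub and its canned readings are purely hypothetical (this skill ships no shell client) and exist only so the sketch runs without a robot; the tool names come from the list above:

```shell
# Stub standing in for a real MCP client; replace with your agent's tool calls.
mcp_call() {
  case "$1" in
    vector_proximity_status) echo 120 ;;  # canned distance reading, in mm
    *) echo ok ;;                         # canned success for every other tool
  esac
}

# look: how far away is the nearest obstacle?
distance=$(mcp_call vector_proximity_status)

# reason + act: a trivial rule for illustration; a real agent would also run
# vector_capture_image output through a vision model before deciding.
if [ "$distance" -lt 200 ]; then
  result=$(mcp_call vector_say "Obstacle ahead, stopping.")
else
  result=$(mcp_call vector_drive 100)
fi
echo "action result: $result"   # prints: action result: ok
```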

Requirements

  • Vector robot configured and reachable
  • Wire-Pod running
  • SDK configured at ~/.anki_vector/sdk_config.ini
  • VECTOR_SERIAL environment variable set

Quick setup

  1. Install package: pip install vectorclaw-mcp
  2. Configure SDK: python3 -m anki_vector.configure
  3. Export robot serial: export VECTOR_SERIAL=your-serial
  4. Add MCP server:
{
  "mcpServers": {
    "vectorclaw": {
      "command": "python3",
      "args": ["-m", "vectorclaw_mcp.server"],
      "env": { "VECTOR_SERIAL": "${VECTOR_SERIAL}" }
    }
  }
}
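Before wiring this entry into a client, a quick smoke-test confirms step 1 actually put the module on the import path (the module name `vectorclaw_mcp` is taken from the args above); it prints `missing` until the package is installed:

```shell
# Check whether the server module is importable without actually importing it.
status=$(python3 -c "import importlib.util as u; print('found' if u.find_spec('vectorclaw_mcp') else 'missing')")
echo "vectorclaw_mcp: $status"
```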

Tool coverage

Hardware-verified core tools: vector_say, vector_drive_off_charger, vector_drive, vector_emergency_stop, vector_head, vector_lift, vector_look, vector_capture_image, vector_face, vector_scan, vector_vision_reset, vector_pose, vector_status, vector_charger_status, vector_touch_status, vector_proximity_status

Experimental tools: vector_animate, vector_drive_on_charger, vector_find_faces, vector_list_visible_faces, vector_face_detection, vector_list_visible_objects, vector_cube

Current limitations

  • Charger return (vector_drive_on_charger) is currently unreliable
  • Face/object detection is currently inconsistent
  • Visual interpretation requires the vision capability described above

Documentation

Files

2 total
