{"skill":{"slug":"vagus-mcp","displayName":"VAGUS MCP","summary":"Connect to the user's Android phone via the VAGUS MCP server. Read phone sensors (motion, location, environment), device state (battery, connectivity, screen...","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":419,"installsAllTime":0,"installsCurrent":0,"stars":0,"versions":1},"createdAt":1772241250788,"updatedAt":1777525454937},"latestVersion":{"version":"1.0.0","createdAt":1772241250788,"changelog":"Give your agent a nervous system — continuous sensory coupling to the physical world through the phone in your pocket.\n\nThis doesn't need to be a docx — it's a description for a skill listing. Let me write this directly as markdown content.\nHere's the VAGUS Openclaw Skill description for ClawHub, Vicky. I've written a few variants depending on the tone you want to strike:\n\nShort tagline (for the one-liner):\n\nGive your agent a nervous system — continuous sensory coupling to the physical world through the phone in your pocket.\n\n\nFull description:\n\nEvery other skill on ClawHub teaches your agent to do something new with data. VAGUS teaches it to perceive.\nVAGUS is the first MCP-compatible embodiment runtime on mobile. It transforms an Android phone into a sensory endpoint for your OpenClaw agent — not as a remote control, but as a body. Raw sensor data flows up (accelerometer, GPS, barometer, ambient light), an on-device inference layer adds meaning (activity recognition, attention availability, sleep likelihood, notification timing), and I/O tools let the agent act back into the physical world through haptics, speech, notifications, SMS, calendar events, and more.\nThis isn't another API integration. It's a category shift. Your agent stops asking \"what are you doing?\" and starts knowing — because it feels your motion, infers you're outdoors, and can reach back through a tap on your wrist. Three layers working together: sense, infer, act.\nWhat this skill does:\nConnects your OpenClaw agent to VAGUS Core (Android app) via relay pairing. Once paired, the agent discovers available capabilities through standard MCP negotiation and gains access to:\nSensors — motion (raw IMU), location, battery, connectivity, screen state, notifications, clipboard\nInference — activity recognition, environment context (indoor/outdoor/vehicle), attention availability, indoor confidence, sleep likelihood, optimal notification timing\nI/O — haptic pulse and patterns, text-to-speech, push notifications, clipboard write, SMS, open URL, calendar events, agent identity\nGovernance built in. Every capability has per-tool toggles, time-of-day windows, rate limits, approval prompts, and full access logs. One-tap kill switch on the device. The physical phone in your hand is always the final authority. Your agent can already send texts and create events through direct integrations — VAGUS makes it safer by putting a governed layer between intent and action.\nSetup in three minutes: Install the VAGUS APK → tap Pair → give your agent the 6-character code. No port forwarding, no network config. The relay handles connection. Your agent has a body.\nWhy this matters beyond utility:\nMost of the AI stack is building better brains. VAGUS builds the missing body. When an agent is continuously coupled to a physical substrate — not querying it on demand, but living in its signal stream — something qualitatively different emerges. The agent doesn't just know facts about you. It participates in your situation. 
Why this matters beyond utility:\nMost of the AI stack is building better brains. VAGUS builds the missing body. When an agent is continuously coupled to a physical substrate, not querying it on demand but living in its signal stream, something qualitatively different emerges. The agent doesn't just know facts about you; it participates in your situation. That's a fundamentally different relationship between intelligence and the world, and it opens design spaces that pure language models can't reach.\n\nOpen source. Self-hostable relay. Works with any MCP-compatible agent.\n\nwithvagus.com · github.com/embodiedsystems-org/VAGUS-MCP","license":null},"metadata":{"os":null,"systems":null},"owner":{"handle":"embodiedsystems-org","userId":"publishers:embodiedsystems-org","displayName":"Embodied Systems","image":"https://avatars.githubusercontent.com/u/155387246?v=4"},"moderation":null}