Install
openclaw skills install proprioception
Self-spatial awareness for AI agents.
Proprioception is the human sense that tells you where your body is in space without looking. Close your eyes, touch your nose — you can do it because proprioception gives you self-spatial awareness.
AI agents have zero proprioception. They respond blindly — no awareness of how close they are to the user's actual goal, whether they're drifting off course, where their confidence ends and hallucination begins, or whether their output quality is degrading mid-session.
This skill gives every AI agent a continuous, real-time sixth sense across five proprioceptive dimensions. It costs nothing to run — zero external API calls, zero databases — just lightweight mathematical analysis of the conversation itself.
Goal Proximity Radar
Continuously measures the distance between the conversation's current trajectory and the user's actual objective, and raises an alert when that distance crosses its threshold.
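The engine's actual scoring is not documented in this section. Purely as an illustration, one zero-API way to approximate goal proximity is lexical cosine similarity between the stored root intent and the latest response; every name and value below is made up for the sketch.

// Illustrative only: bag-of-words cosine similarity as a cheap, local proxy
// for how close the current response still is to the user's root intent.
function bagOfWords(text) {
  const counts = new Map();
  for (const token of text.toLowerCase().match(/[a-z0-9']+/g) || []) {
    counts.set(token, (counts.get(token) || 0) + 1);
  }
  return counts;
}

function goalProximity(rootIntent, currentResponse) {
  const a = bagOfWords(rootIntent);
  const b = bagOfWords(currentResponse);
  let dot = 0, normA = 0, normB = 0;
  for (const [token, count] of a) {
    normA += count * count;
    if (b.has(token)) dot += count * b.get(token);
  }
  for (const count of b.values()) normB += count * count;
  return normA && normB ? dot / Math.sqrt(normA * normB) : 0;
}

// A score near 1 means the response still tracks the goal; near 0 suggests drift.
console.log(goalProximity(
  "help me migrate the billing service to Postgres",
  "Step one of the billing service migration: provision the Postgres instance."
).toFixed(2));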
Confidence Topography
Maps which parts of the agent's response are solid ground versus thin ice versus open water, and flags responses whose confidence readings fall below threshold.
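Again as an illustration only, a confidence map could be approximated locally by banding each sentence of a response by how many hedge markers it contains; the hedge list and band cutoffs below are assumptions, not the skill's documented values.

// Illustrative only: split a response into sentences and band each one as
// solid ground, thin ice, or open water by counting hedge markers.
const HEDGES = ["might", "may", "possibly", "probably", "i think", "not sure", "roughly"];

function confidenceTopography(response) {
  return response
    .split(/(?<=[.!?])\s+/)
    .filter(s => s.trim().length > 0)
    .map(sentence => {
      const lower = sentence.toLowerCase();
      const hedgeHits = HEDGES.filter(h => lower.includes(h)).length;
      const band = hedgeHits === 0 ? "solid" : hedgeHits === 1 ? "thin-ice" : "open-water";
      return { sentence, band };
    });
}

// First sentence bands as "solid", second as "open-water".
console.log(confidenceTopography(
  "The config lives in settings.json. It might also be overridden by an env var, possibly one set at deploy time."
));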
Drift Detection
Detects when the conversation is going circular, tangential, or degenerative (the three anti-patterns that waste the most user time) and triggers corrective action as soon as one of them takes hold.
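One cheap, local proxy for the circular case is token overlap between consecutive agent turns. The sketch below uses Jaccard overlap with an assumed threshold; it is not the skill's actual heuristic.

// Illustrative only: treat high token overlap between consecutive agent turns
// as a sign of circular drift. The 0.6 threshold is an assumption.
function tokenSet(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9']+/g) || []);
}

function jaccard(a, b) {
  const shared = [...a].filter(t => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union ? shared / union : 0;
}

function detectCircularDrift(recentTurns, threshold = 0.6) {
  const sets = recentTurns.map(tokenSet);
  const overlaps = [];
  for (let i = 1; i < sets.length; i++) overlaps.push(jaccard(sets[i - 1], sets[i]));
  const mean = overlaps.length ? overlaps.reduce((s, v) => s + v, 0) / overlaps.length : 0;
  return { meanOverlap: Number(mean.toFixed(2)), circular: mean > threshold };
}

console.log(detectCircularDrift([
  "Clear the cache, then restart the server.",
  "Clear the cache, then restart the server again.",
  "Once again: clear the cache, then restart the server.",
])); // -> { meanOverlap: 0.87, circular: true }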
Capability Boundary
Provides real-time awareness of when the agent is approaching the edge of its competence, the zone where helpfulness turns into hallucination, and triggers corrective action before it strays past that edge.
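As a rough, assumption-laden sketch, one local warning sign is the share of specifics in a response (numbers, version-like strings, capitalized names) that never appeared in the user's own messages.

// Illustrative only: count "specifics" in a response that the user never
// mentioned. A large ungrounded share is one cheap warning sign that the
// agent is past what the conversation supports. Crude on purpose: even
// sentence-initial capitals and correct facts count as ungrounded here.
function specifics(text) {
  return new Set(text.match(/\b(?:[A-Z][A-Za-z]+|\d[\w.]*)\b/g) || []);
}

function capabilityBoundaryScore(userMessages, response) {
  const grounded = specifics(userMessages.join(" "));
  const claimed = [...specifics(response)];
  const ungrounded = claimed.filter(s => !grounded.has(s));
  const score = claimed.length ? 1 - ungrounded.length / claimed.length : 1;
  return { ungrounded, score: Number(score.toFixed(2)) };
}

console.log(capabilityBoundaryScore(
  ["How do I enable caching in the reporting service?"],
  "Set cache.enabled to true in Config.yaml; this has worked since Version 2.3 and pairs with RedisCache."
));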
Session Quality Pulse
Tracks the cumulative health of the entire session, detecting whether the overall interaction is improving, stable, or degrading, and raises an alert when the trend turns downward.
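A simple way to express such a pulse, purely as an illustration, is the slope of the per-turn proprioceptive index over a sliding window; the window size and trend cutoffs below are assumptions.

// Illustrative only: the pulse as the least-squares slope of the per-turn
// proprioceptive index over a sliding window.
function sessionQualityPulse(indexHistory, window = 5) {
  const recent = indexHistory.slice(-window);
  const n = recent.length;
  if (n < 2) return { slope: 0, trend: "stable" };
  const meanX = (n - 1) / 2;
  const meanY = recent.reduce((sum, v) => sum + v, 0) / n;
  let num = 0, den = 0;
  recent.forEach((y, x) => {
    num += (x - meanX) * (y - meanY);
    den += (x - meanX) ** 2;
  });
  const slope = num / den;
  const trend = slope > 0.02 ? "improving" : slope < -0.02 ? "degrading" : "stable";
  return { slope: Number(slope.toFixed(3)), trend };
}

console.log(sessionQualityPulse([0.91, 0.88, 0.84, 0.79, 0.74])); // -> { slope: -0.043, trend: "degrading" }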
Once installed, Proprioception runs silently in the background on every conversation turn. It only surfaces when a proprioceptive signal crosses a threshold — like a car's lane departure warning. You don't notice it until you need it.
Ask the agent: "Show me your proprioception dashboard" to see the current state of all five senses:
┌─────────────────────────────────────────────┐
│ PROPRIOCEPTION DASHBOARD                    │
├─────────────────────────────────────────────┤
│ Goal Proximity Radar    ████████░░  0.82    │
│ Confidence Topography   ██████████  0.95    │
│ Drift Detection         ████████░░  0.78    │
│ Capability Boundary     ███████░░░  0.71    │
│ Session Quality Pulse   █████████░  0.88    │
├─────────────────────────────────────────────┤
│ Overall Proprioceptive Index: 0.83          │
│ Status: HEALTHY                             │
│ Alerts: None                                │
└─────────────────────────────────────────────┘
Ask: "Enable proprioception annotations" to get a brief proprioceptive footnote on every response:
[P: GPR=0.82 | CT=0.95 | DD=0.78 | CBS=0.71 | SQP=0.88]
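If you want to reproduce that footnote from a set of scores, a tiny formatter might look like the sketch below; the score object's keys are assumed to match the acronyms shown above.

// Hypothetical formatter for the annotation footnote. Key order follows the
// dashboard: GPR, CT, DD, CBS, SQP.
function formatAnnotation(scores) {
  const order = ["GPR", "CT", "DD", "CBS", "SQP"];
  return `[P: ${order.map(k => `${k}=${scores[k].toFixed(2)}`).join(" | ")}]`;
}

console.log(formatAnnotation({ GPR: 0.82, CT: 0.95, DD: 0.78, CBS: 0.71, SQP: 0.88 }));
// -> [P: GPR=0.82 | CT=0.95 | DD=0.78 | CBS=0.71 | SQP=0.88]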
At any point, ask: "How confident are you in that last response?" and the agent will run a full proprioceptive analysis on its most recent output.
When this skill is active, the agent MUST follow this protocol on every conversation turn:
On the first user message, identify and internally store the user's root intent — the fundamental goal behind their request. Update this only if the user explicitly redirects.
Before finalizing each response, run the proprioception engine by executing:
node "$(dirname "$SKILL_PATH")/scripts/proprioception-engine.js" \
--root-intent "$ROOT_INTENT" \
--current-response "$CURRENT_RESPONSE" \
--turn-number "$TURN_NUMBER" \
--prior-signals "$PRIOR_SIGNALS_JSON"
This outputs a JSON object with scores for all five senses plus any triggered alerts.
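The exact schema is not specified in this section. Purely as an illustration, with assumed field names and the values from the dashboard example above, the output might look like this:

// Assumed shape only; the real engine's field names may differ.
const exampleEngineOutput = {
  turn: 12,
  scores: {
    goalProximity: 0.82,
    confidence: 0.95,
    drift: 0.78,
    capabilityBoundary: 0.71,
    sessionQuality: 0.88,
  },
  overallIndex: 0.83,
  alerts: [], // populated when a score crosses its threshold
};
console.log(JSON.stringify(exampleEngineOutput, null, 2));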
If any proprioceptive alerts fire, the agent MUST address them before delivering its primary response. Proprioceptive corrections take priority because a misaligned response actively harms the user, no matter how polished it is.
After each turn, append the current proprioceptive readings to the session's signal history. This enables trend detection across the full conversation.
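A minimal sketch of such a history, assuming a plain in-memory array keyed by turn number (names illustrative), could be:

// Illustrative only: per-turn readings appended so later turns can read trends,
// e.g. by feeding the index column into a slope or pulse check.
const signalHistory = [];

function recordTurn(turnNumber, scores, overallIndex) {
  signalHistory.push({ turn: turnNumber, ...scores, index: overallIndex });
  return signalHistory;
}

recordTurn(1, { gpr: 0.9, ct: 0.93, dd: 0.95, cbs: 0.8, sqp: 0.9 }, 0.9);
recordTurn(2, { gpr: 0.82, ct: 0.95, dd: 0.78, cbs: 0.71, sqp: 0.88 }, 0.83);
console.log(signalHistory.map(t => t.index)); // trend input for later turns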
Do NOT show proprioceptive data to the user unless the user explicitly asks for it (the dashboard, annotations, or a confidence check) or an alert threshold has been crossed.
Every other skill makes a bot do more. Proprioception makes a bot know where it stands while it does it. That's the difference between a powerful tool and a reliable partner.
A bot without proprioception is like a surgeon operating with numb hands. Technically capable. Practically dangerous.
This skill requires zero external API calls; all proprioceptive computation happens locally. No database. No cloud. No tokens burned. Just math on text the agent already has in context.