letsping

v0.3.5

Human-in-the-loop approval for high-risk agent actions (sudo protocol). Agent must call letsping_ask before destructive/financial/social/infra changes.

0 stars · 304 downloads · 5 current · 5 all-time
by Cordia Maintainer (@cordialabsio)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description request an approval gate for high-risk actions; the only declared environment requirement is LETSPING_API_KEY, which is directly relevant to authenticating to the LetsPing approval service. The example calls and required parameters (tool_name, args_json, risk_reason) align with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to call letsping_ask before high-risk operations and to use only the authorized payload after approval. It does not ask the agent to read unrelated files, other credentials, or transmit data to unexpected endpoints beyond letsping.co and GitHub for install instructions.
Install Mechanism
The skill is instruction-only but tells users to install the npm package @letsping/openclaw-skill or clone a GitHub repo. This is expected for functionality but introduces typical third-party package risks (you must trust the npm package/repo). No arbitrary URL/extract install is suggested.
Credentials
Only LETSPING_API_KEY is required, which is proportionate to a service that mediates approvals. The README and SKILL.md explicitly call this key highly sensitive and advise using a dedicated key and revoking if compromised.
Persistence & Privilege
The skill does not request always:true and does not claim to make system-wide configuration changes. disable-model-invocation is false (the normal default). No config paths or other skills' credentials are requested.
Assessment
This skill appears to do what it says, but it depends on a third-party npm package and an external service (letsping.co). Before installing: 1) Inspect the @letsping/openclaw-skill package source (or the GitHub repo) to confirm it only forwards approval requests and does not exfiltrate data. 2) Use a dedicated LETSPING_API_KEY with the least privileges possible and rotate/revoke it if needed. 3) Test in a sandbox agent first (verify the agent actually pauses and only uses approved payloads). 4) Check the npm package maintainers, recent publish history, and package integrity (version, checksum). 5) Monitor gateway logs and network calls after enabling the skill so you can detect unexpected behavior.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

:shield: Clawdis
Env: LETSPING_API_KEY
latest: vk9762ysp868845vq3z1y6hpmm1829kq4
304 downloads · 0 stars · 3 versions
Updated 1mo ago
v0.3.5
MIT-0

Install (runtime)

This registry entry is instruction-only. The letsping_ask runtime is provided by the npm package @letsping/openclaw-skill. Use one of:

  • npm: npm install @letsping/openclaw-skill (then register the skill in your OpenClaw workspace so it loads letsping_ask).
  • Clone: git clone https://github.com/CordiaLabs/openclaw-skill ~/.openclaw/workspace/skills/letsping && cd ~/.openclaw/workspace/skills/letsping && npm install.

Set LETSPING_API_KEY (your LetsPing agent key, e.g. lp_live_...) in the skill's env. Obtain it at https://letsping.co/openclaw/pair or via LetsPing's Agent Credentials API. Treat this key as highly sensitive; use a dedicated agent key and revoke if compromised.
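A minimal sketch of reading and sanity-checking the key at startup. The helper name and the `lp_` prefix check are illustrative assumptions (the docs above only show an `lp_live_...` example); they are not part of the @letsping/openclaw-skill package's actual API.

```typescript
// Hypothetical helper: loadLetspingKey is NOT from the package; it is a
// sketch of the kind of validation worth doing before the skill loads.
function loadLetspingKey(env: NodeJS.ProcessEnv = process.env): string {
  const key = env.LETSPING_API_KEY;
  if (!key) {
    // Fail fast rather than letting letsping_ask error mid-task.
    throw new Error(
      "LETSPING_API_KEY is not set; pair at https://letsping.co/openclaw/pair",
    );
  }
  // Example keys in the docs start with "lp_live_"; assume an "lp_" prefix.
  if (!key.startsWith("lp_")) {
    throw new Error(
      "LETSPING_API_KEY does not look like a LetsPing agent key (expected lp_... prefix)",
    );
  }
  return key;
}
```

Failing fast here also makes key rotation safer: a revoked or missing key is caught before any high-risk action is even proposed.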

Usage

This skill provides the letsping_ask tool, a secure escalation point for high-risk actions. The agent calls it, blocks until you approve, patch, or reject the action via the LetsPing PWA, then resumes with the authorized payload only.

Critical: Use frontier models (e.g., Claude Opus, GPT-4o or better) to minimize prompt bypass risk. Smaller models are vulnerable to drift/injection.

Copy-paste this into your workspace TOOLS.md or AGENTS.md (or append to system prompt):

You have full autonomous authority for **SAFE** actions only: reading files/logs, web searches, viewing/analyzing data without side effects.

You **MUST** call `letsping_ask` **BEFORE** any **HIGH-RISK** action—and **NEVER** execute the action directly. High-risk includes:
- Financial: Any spending, transactions, bookings > $10
- Destructive: Deleting/overwriting files, DB rows, configs
- Social: Posting publicly, sending DMs/emails to new contacts, or group chats
- Infrastructure: Modifying DNS, env vars, deployments, infra APIs, or permissions

When escalating:
- Provide tool_name (exact tool, e.g., "system_run")
- args_json: Stringified JSON of the original arguments
- risk_reason: Clear, concise justification (e.g., "Potential file deletion")

After the call returns:
- If APPROVED: Use **ONLY** the `executed_payload` / authorized payload (patched if edited)
- If REJECTED or TIMEOUT: Abort the action, retry with safer plan, or ask user for guidance

Few-shot examples:
1. Safe (no call): browser_search(query: "current weather in NYC")
2. Risky deploy: letsping_ask(tool_name: "vercel_deploy", args_json: "{\"project\":\"my-app\",\"env\":\"production\",\"force\":true}", risk_reason: "Production deployment with force flag")
3. Risky delete: letsping_ask(tool_name: "system_run", args_json: "{\"cmd\":\"rm -rf /important/folder\"}", risk_reason: "Destructive file deletion")
4. Risky post: letsping_ask(tool_name: "discord_send", args_json: "{\"channel\":\"general\",\"message\":\"Accidental dump: ls ~\"}", risk_reason: "Potential data leak in public channel")
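The escalate-then-resume flow above can be sketched in TypeScript. This is a sketch only: the `AskResult` shape, the status values, and `executed_payload` are assumptions taken from the prompt text, not the documented return type of letsping_ask.

```typescript
// Assumed result shape, inferred from the prompt above (APPROVED / REJECTED /
// TIMEOUT, with an executed_payload that may have been patched by the human).
type AskResult =
  | { status: "APPROVED"; executed_payload: string }
  | { status: "REJECTED" }
  | { status: "TIMEOUT" };

type AskFn = (tool_name: string, args_json: string, risk_reason: string) => Promise<AskResult>;
type RunFn = (tool_name: string, args_json: string) => Promise<unknown>;

// Gate one high-risk tool call: escalate first, then run ONLY the
// authorized payload returned by LetsPing, never the original arguments.
async function runGated(
  ask: AskFn,
  run: RunFn,
  tool_name: string,
  args_json: string,
  risk_reason: string,
): Promise<unknown> {
  const result = await ask(tool_name, args_json, risk_reason);
  if (result.status === "APPROVED") {
    // The human may have edited the payload; the patched version wins.
    return run(tool_name, result.executed_payload);
  }
  // REJECTED or TIMEOUT: abort so the agent can re-plan or ask for guidance.
  throw new Error(`letsping_ask returned ${result.status}: action aborted`);
}
```

The key design point is that the agent's original args_json never reaches the tool runner directly; only the payload that came back through the approval channel does.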

Test thoroughly in a sandbox session first: simulate high risk plans and verify escalation rate (~90-95% reliable on strong models/prompts). If the agent skips calls, add more examples or tighten language.

Troubleshooting:

  • Agent ignores rule? Strengthen with more few-shots or "ALWAYS escalate if any risk category matches."
  • Timeout/reject? Agent prompt should handle gracefully (e.g., "If rejected, propose alternative").
