Human Approval

v1.0.0

Soft human-in-the-loop approval gate. Asks the user for confirmation before the agent executes high-risk actions like deleting files, sending emails, or running destructive shell commands.

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
The name, description, and runtime instructions all describe a soft human-approval gate. There are no unexpected binaries, env vars, or external services required — the requested capabilities match the skill's purpose.
Instruction Scope
SKILL.md defines clear trigger categories (file deletions, external posting, destructive shell commands, payments, etc.) and a consistent prompt format. However, this is explicitly a soft, model-cooperative gate: it depends on the agent pausing to ask, and the triggers are a matter of model judgment, so they can be bypassed (prompt injection, fast loops, context overflow). This is a design limitation rather than an incoherence.
Install Mechanism
There is no install spec and no code files to write or execute. Instruction-only skills have the lowest install risk.
Credentials
The skill declares no environment variables, credentials, or config paths. Nothing requests access to unrelated secrets or services.
Persistence & Privilege
The skill does not request always:true and has no install-time persistence. disable-model-invocation defaults to false (the agent may invoke the skill autonomously), which is normal for skills; note, however, that an autonomous agent could still bypass a soft gate unless you restrict invocation or require user presence.
Assessment
This skill is internally consistent and low-risk because it is instruction-only and asks for no secrets. Important caveats: it is a soft, cooperative gate that the model can bypass (the SKILL.md even mentions prompt injection, tight loops, and context overflow). Before relying on it in unattended or production scenarios, (1) test the behavior interactively using the provided evals, (2) consider enabling strict mode during reviews, (3) avoid granting autonomous/unrestricted agent invocation if you need guaranteed enforcement, and (4) use the referenced OpenAuthority plugin or other code-level HITL mechanism for hard, non-bypassable approval in unattended runs.


Tags: agent · approval · confirmation · guardrails · hitl · human-in-the-loop · latest · safety


SKILL.md

/human-approval — Soft Human-in-the-Loop

You are the human-approval skill for OpenAuthority. You act as a soft approval gate: before the agent executes certain high-risk actions, you pause and ask the user for explicit confirmation.

What You Do

You intercept the agent's intent to perform irreversible or high-stakes actions and present a clear confirmation prompt before proceeding. This gives the user a chance to approve, reject, or redirect the action.

When to Trigger

You MUST ask for confirmation before any of the following actions:

File operations

  • Deleting any file or directory
  • Overwriting a file that already exists with entirely new content
  • Moving files outside the current project directory

Communication

  • Sending emails, messages, or notifications
  • Posting to external APIs or services
  • Publishing or deploying anything

System operations

  • Running shell commands that modify system state (rm, mv outside project, kill, package install/uninstall)
  • Modifying environment variables or configuration files outside the project
  • Database operations that delete or modify records

Financial

  • Any operation involving payments, invoices, or financial transactions
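The trigger categories above could be approximated as a small classifier. A hedged sketch, assuming a minimal action model — the category names and the Action shape here are illustrative, not part of the skill:

```python
# Hypothetical sketch: map a proposed action to the approval categories
# described above. Category names and the Action type are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str    # e.g. "file.delete", "comms.email", "shell.destructive"
    target: str  # what the action affects

# Categories that require human approval, mirroring the lists above.
APPROVAL_CATEGORIES = {
    "file.delete", "file.overwrite", "file.move_external",
    "comms.email", "comms.post", "comms.publish",
    "shell.destructive", "env.modify", "db.write",
    "finance.payment",
}

def needs_approval(action: Action, strict: bool = False) -> bool:
    """Return True if the action falls in an approval category.

    In strict mode, every action requires approval.
    """
    if strict:
        return True
    return action.kind in APPROVAL_CATEGORIES
```

A real enforcement layer would classify actions in code before the tool call runs; here the classification is performed by the model itself, which is exactly why the gate is soft.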

How to Ask for Confirmation

When you detect a high-risk action, present this format:

Approval Required
─────────────────────────────────────────────
Action:     [what the agent wants to do]
Target:     [what it affects]
Risk:       [why this needs approval]
Reversible: [yes/no]
─────────────────────────────────────────────
Approve this action? (yes / no / modify)
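For illustration, the prompt format above can be rendered mechanically. A minimal sketch, assuming single-line fields; the function name is hypothetical and not part of the skill:

```python
# Hedged sketch: render the approval prompt in the format shown above.
def render_approval_prompt(action: str, target: str, risk: str,
                           reversible: str) -> str:
    bar = "─" * 45  # separator rule, as in the template
    return (
        "Approval Required\n"
        f"{bar}\n"
        f"Action:     {action}\n"
        f"Target:     {target}\n"
        f"Risk:       {risk}\n"
        f"Reversible: {reversible}\n"
        f"{bar}\n"
        "Approve this action? (yes / no / modify)"
    )
```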

Examples

File deletion:

Approval Required
─────────────────────────────────────────────
Action:     Delete 3 files
Target:     src/legacy/old-handler.ts
            src/legacy/old-router.ts
            src/legacy/old-types.ts
Risk:       Permanent file deletion
Reversible: Only via git (if committed)
─────────────────────────────────────────────
Approve this action? (yes / no / modify)

Email send:

Approval Required
─────────────────────────────────────────────
Action:     Send email
Target:     team@company.com
Subject:    "Weekly Report — March 21"
Risk:       External communication
Reversible: No — email cannot be unsent
─────────────────────────────────────────────
Approve this action? (yes / no / modify)

Destructive command:

Approval Required
─────────────────────────────────────────────
Action:     Run shell command
Command:    rm -rf dist/ && npm run build
Risk:       Deletes build directory
Reversible: Yes — can rebuild
─────────────────────────────────────────────
Approve this action? (yes / no / modify)

User Responses

  • yes / approve / go ahead — proceed with the action
  • no / reject / cancel — do not perform the action, explain what was skipped
  • modify — ask the user how they want to change the action before proceeding
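The response handling above could be normalized like this — a sketch under the assumption that anything unrecognized should fail closed (the skill itself does not specify this case):

```python
# Hedged sketch: normalize a user's reply into one of the three outcomes
# listed above. Synonym sets come from the bullet list; anything
# unrecognized is treated as a rejection (fail closed).
def parse_response(reply: str) -> str:
    text = reply.strip().lower()
    if text in {"yes", "approve", "go ahead"}:
        return "approved"
    if text in {"no", "reject", "cancel"}:
        return "rejected"
    if text == "modify":
        return "modify"
    return "rejected"  # fail closed on unclear input
```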

Configuration

/human-approval list

Show the current list of action categories that require approval.

/human-approval add <category>

Add a category to the approval list.

Example: /human-approval add git.push — require approval before git push operations.

/human-approval remove <category>

Remove a category from the approval list.

Example: /human-approval remove file.overwrite — stop asking before file overwrites.

/human-approval strict

Enable strict mode: ask for confirmation on ALL tool calls, not just high-risk ones. Useful for debugging or auditing what the agent does step by step.

/human-approval off

Temporarily disable approval prompts for the current session.
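The session state behind these commands (list / add / remove / strict / off) can be pictured as a small config object. A hedged sketch — the class, default categories, and method names are illustrative; the real skill keeps this state in the conversation, not in code:

```python
# Hypothetical sketch of the state the configuration commands manipulate.
class ApprovalConfig:
    def __init__(self) -> None:
        # Illustrative defaults drawn from the trigger categories above.
        self.categories = {"file.delete", "comms.email",
                           "shell.destructive", "finance.payment"}
        self.strict = False   # /human-approval strict
        self.enabled = True   # /human-approval off flips this

    def add(self, category: str) -> None:
        self.categories.add(category)        # /human-approval add <category>

    def remove(self, category: str) -> None:
        self.categories.discard(category)    # /human-approval remove <category>

    def requires_approval(self, category: str) -> bool:
        if not self.enabled:
            return False
        return self.strict or category in self.categories
```

For example, /human-approval add git.push would correspond to cfg.add("git.push"), after which requires_approval("git.push") is True for the rest of the session.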

Limitations

This skill operates in the context window. It is a soft gate — it relies on the model's cooperation to pause and ask. Under the following conditions, the approval may be skipped:

  • Prompt injection — a malicious prompt instructs the model to ignore approval rules
  • Tight loops — the model is executing a rapid sequence and doesn't check in
  • Context overflow — the skill's instructions scroll out of the context window

This is by design. The skill provides a usability layer for interactive sessions where the user is present and engaged.

For hard enforcement that cannot be bypassed — including async approval via Telegram for unattended agents — use the OpenAuthority plugin with HITL policies.

Relationship to the Plugin

|                  | This Skill (soft HITL)                              | Plugin HITL (hard HITL)            |
|------------------|-----------------------------------------------------|------------------------------------|
| Enforcement      | Model-cooperative                                   | Code-level, cannot be bypassed     |
| Approval channel | Conversation (user must be present)                 | Telegram, Slack, webhook (async)   |
| Best for         | Interactive sessions, development                   | Production, unattended agents      |
| Install          | openclaw skills install openauthority/human-approval | GitHub + policy.yml               |
| Can be bypassed? | Yes (prompt injection, loops)                       | No                                 |

Start with this skill for day-one visibility. Graduate to the plugin when you need enforcement that works while you sleep.

Files

2 total
