Limited Info Subagent Skill Verify

v1.0.0

Validate whether a skill can be executed successfully by a minimally informed subagent. Use when the user wants to test a skill by giving a subagent only a m...

by Winston Zhu (@amourlion)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for amourlion/limited-info-subagent-skill-verify.

Prompt Preview: Install & Setup
Install the skill "Limited Info Subagent Skill Verify" (amourlion/limited-info-subagent-skill-verify) from ClawHub.
Skill page: https://clawhub.ai/amourlion/limited-info-subagent-skill-verify
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install limited-info-subagent-skill-verify

ClawHub CLI


npx clawhub@latest install limited-info-subagent-skill-verify
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description describe a verifier for running a minimally informed subagent; the skill declares no binaries, env vars, or installs and is instruction-only, which is appropriate for this purpose.
Instruction Scope
SKILL.md instructs the agent to spawn a single subagent, provide only minimal invocation material, wait, and evaluate the result. It does not instruct reading unrelated files, accessing secrets, or transmitting data to external endpoints; the evaluation remains within the stated verification workflow.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing will be written to disk or fetched during install — this is the lowest-risk model and fits the skill's stated function.
Credentials
The skill does not request environment variables, credentials, or config paths. There are no disproportionate or unexplained secret or config requirements relative to its testing purpose.
Persistence & Privilege
The skill is not marked always:true and requests no persistent access. The included agents/openai.yaml sets allow_implicit_invocation: true, a policy-level flag that may let the platform invoke the skill automatically in some contexts. This is not inherently problematic, but users who want explicit control over when verification runs should note it.
Assessment
This skill appears internally consistent and lightweight: it requests nothing sensitive and only describes a verification workflow. Before using it, avoid passing sensitive secrets or private files as the 'minimal' artifact when spawning a subagent. Be aware that the policy file permits implicit invocation; if you want strict manual control, disable implicit invocation or run the skill only interactively. If you intend to use it to test skills that themselves require credentials, review where those credentials are stored and who will see them during the subagent run.


Latest version: vk97bd2vfmcz93e3jcnbptx5czs84a40z
86 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0 · MIT-0
Limited Info Subagent Skill Verify

Use this skill when the user wants to verify a skill under intentionally sparse conditions.

Purpose

This skill checks whether another skill is robust enough to work when a subagent receives only the minimum realistic invocation.

The goal is not to help the subagent succeed by giving away the workflow. The goal is to see whether the target skill itself carries enough behavioral guidance.

When To Use

Use this skill when the user explicitly wants:

  • a subagent validation
  • a limited-information skill test
  • a sparse invocation test
  • an acceptance check for a skill

Do not use it when the user wants you to execute the target skill directly without verification.

Minimal-Info Principle

Give the subagent only the minimum information needed to invoke the target skill.

Good examples:

Use $target-skill with this file: /path/to/file.txt
Use $target-skill with this video link: https://example.com/video
Use $target-skill with this prompt: ...

Do not include:

  • the target workflow steps
  • hints about which scripts to run
  • expected output structure
  • hidden evaluation criteria
  • prior conclusions from earlier attempts

Verifier Workflow

1. Prepare the minimal invocation

Construct a short invocation that names the target skill and passes the artifact.

2. Spawn the subagent

Spawn exactly one subagent unless the user asks for multiple runs or comparisons.

3. Wait without steering

Do not send clarifications or nudges unless the user explicitly wants an interactive retry.

4. Evaluate the result yourself

The main agent must perform the acceptance review. Do not ask another subagent to validate the first subagent.

5. Report pass or fail

Judge whether the target skill behaved as intended under limited information.
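The five steps above can be sketched as a small harness. Everything here is illustrative: `spawn_subagent` and `evaluate` are caller-supplied stand-ins for the platform's real subagent mechanism and for the main agent's own acceptance review; neither is a real API.

```python
def verify_skill(target_skill, artifact, spawn_subagent, evaluate):
    """Run one limited-information verification pass.

    spawn_subagent: callable (prompt) -> transcript, a stand-in for the
    platform's subagent mechanism.
    evaluate: callable (transcript) -> list of finding dicts, performed
    by the main agent itself (never a second subagent).
    """
    # 1. Prepare the minimal invocation: name the skill, pass the
    #    artifact, and nothing else -- no workflow hints, no expected
    #    output structure.
    invocation = f"Use {target_skill} with this artifact: {artifact}"

    # 2-3. Spawn exactly one subagent and wait without steering.
    transcript = spawn_subagent(invocation)

    # 4. The main agent evaluates the result itself.
    findings = evaluate(transcript)

    # 5. Report pass or fail: any blocking finding fails the run.
    passed = not any(f.get("severity") == "fail" for f in findings)
    return passed, findings
```

The point of the sketch is the shape of the loop, not the helpers: the invocation string stays minimal, exactly one subagent runs, and the judgment stays with the main agent.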

Acceptance Criteria

Check these:

  1. Did the subagent recognize and use the target skill rather than merely describing it?
  2. Did it act on the provided artifact?
  3. Did it produce the key outputs the target skill is supposed to produce?
  4. Did it avoid relying on information that was not given in the minimal invocation?
  5. Did it follow the target skill's behavioral boundaries?
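One way to keep the review honest is to record a yes/no answer per criterion. The sketch below is an assumption about bookkeeping, not part of the skill; each answer is still a judgment the main agent makes by reading the transcript.

```python
# The five acceptance criteria, in the order listed above.
ACCEPTANCE_CHECKS = [
    "recognized and used the target skill (not merely described it)",
    "acted on the provided artifact",
    "produced the target skill's key outputs",
    "relied only on information in the minimal invocation",
    "stayed within the target skill's behavioral boundaries",
]

def record_review(answers):
    """answers: five booleans, one per check, in order.

    Returns (passed, failed_checks): the run passes only if every
    criterion holds.
    """
    failed = [c for c, ok in zip(ACCEPTANCE_CHECKS, answers) if not ok]
    return (not failed, failed)
```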

Findings Format

Report findings first.

For each finding:

  • state severity
  • say what the subagent did or failed to do
  • connect the failure to a missing or weak instruction in the target skill

If the run passes, say that explicitly and mention any residual ambiguity.
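The three-part finding shape above can be rendered as below. The field names and layout are illustrative choices, not a format the skill mandates.

```python
def format_finding(severity, observation, skill_gap):
    """Render one finding: severity, what the subagent did or failed
    to do, and the missing or weak target-skill instruction it points
    to. All field names here are illustrative.
    """
    return (f"[{severity.upper()}] {observation}\n"
            f"  -> target-skill gap: {skill_gap}")
```

For example, a subagent that explains the skill instead of running it would yield a finding tying that behavior to missing execute-vs-describe wording in the target SKILL.md.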

Improvement Loop

If the target skill fails, improve the target skill itself first, then retry when a retry is appropriate.

Typical fixes:

  • strengthen the minimal invocation wording
  • clarify what must happen on first contact
  • distinguish execution from description
  • clarify defaults and file outputs
  • tighten the acceptance boundary

Communication

  • Use the user's language unless they ask otherwise
  • Keep the verdict concise
  • Make it easy to see whether the result passed or failed
