autoagent
Pass. Audited by ClawScan on May 10, 2026.
Overview
Autoagent is a disclosed prompt-optimization helper, but it creates a sandbox at a user-chosen path and runs recurring cron/subagent tests, so users should monitor where it writes and how long it runs.
Use this skill only when you want recurring prompt or agent-guidance optimization. Pick a fresh, non-sensitive sandbox folder; avoid putting secrets or production data in fixtures or copied scripts; review the scoring criteria and cron schedule before approving; and stop the cron job when you are done.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If the user chooses an existing or sensitive directory, the agent may create or modify sandbox files there.
The skill writes sandbox files at a user-specified path, including absolute paths. This is expected for the sandbox workflow, but users must choose the path carefully.
Absolute path: `/some/other/path/optimize/` → exact path ... Create folder at user-specified path
Use a new, disposable sandbox directory and verify the resolved path before allowing setup to continue.
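A minimal sketch of that mitigation: create a disposable sandbox under a dedicated parent directory and verify the resolved path before letting setup continue. `ALLOWED_BASE` and the `run-*` naming are illustrative assumptions, not part of the skill.

```shell
# Assumption: sandboxes live under one dedicated parent directory.
ALLOWED_BASE="${ALLOWED_BASE:-$(mktemp -d)}"
ALLOWED_BASE=$(cd "$ALLOWED_BASE" && pwd -P)   # resolve symlinks in the base too
SANDBOX="$ALLOWED_BASE/run-$(date +%Y%m%d-%H%M%S)"

mkdir -p "$SANDBOX"
RESOLVED=$(cd "$SANDBOX" && pwd -P)            # resolve symlinks and relative segments

# Refuse any path that resolves outside the allowed base.
case "$RESOLVED" in
  "$ALLOWED_BASE"/*) echo "sandbox ok: $RESOLVED" ;;
  *) echo "refusing: $RESOLVED escapes $ALLOWED_BASE" >&2; exit 1 ;;
esac
```

Resolving with `pwd -P` before the check matters because a symlinked or relative path can point somewhere other than where it appears to.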
Private prompts, expected outputs, or test data placed in the sandbox may remain there and continue steering future iterations.
The cron loop persists and reuses guidance, scoring history, and fixtures across iterations. This is central to the skill, but those files can influence future model runs and may retain sensitive task details.
Read these files from the sandbox: `current-guidance.md` ... `scores.md` ... `scoring.md` ... `fixtures/test-cases.json`
Keep sandbox contents non-sensitive where possible, restrict access to the sandbox folder, and review or delete stored files when finished.
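One way to act on that advice, sketched below: restrict access to the sandbox while it exists, list which of the persisted files (the names come from this finding's evidence) are still present, and delete the folder when finished. The `SANDBOX` default is an assumption; substitute your actual path.

```shell
SANDBOX="${SANDBOX:-./optimize}"   # assumption: your chosen sandbox path

chmod 700 "$SANDBOX" 2>/dev/null   # restrict access while the sandbox exists
for f in current-guidance.md scores.md scoring.md fixtures/test-cases.json; do
  if [ -e "$SANDBOX/$f" ]; then
    echo "retained: $SANDBOX/$f"   # these files steer future iterations
  fi
done
# rm -rf "$SANDBOX"                # delete everything once you have reviewed it
```

The `rm` is left commented so the listing can be reviewed before anything is destroyed.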
Guidance and fixture data may be exposed to the subagent during testing, and unsafe guidance could affect the subagent’s behavior if tool permissions are broad.
The skill passes the optimized guidance and test inputs to a subagent. This is disclosed and purpose-aligned, but it is still an inter-agent data flow.
Each iteration spawns a subagent to:
- Execute the task with guidance
- Return output for scoring
- Isolate test runs from the main agent
Use test fixtures rather than secrets or production data, and confirm subagent/tool isolation settings if the platform exposes them.
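A crude pre-flight check along those lines, as a sketch only: scan the fixtures for secret-shaped strings before the subagent sees them. The patterns and the `SANDBOX` default are illustrative assumptions and will not catch every secret.

```shell
SANDBOX="${SANDBOX:-./optimize}"   # assumption: your chosen sandbox path

# Illustrative patterns: AWS-style key IDs, PEM private keys, api_key literals.
if grep -rEn 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY|api[_-]?key' \
    "$SANDBOX/fixtures" 2>/dev/null; then
  echo "possible secrets above: scrub fixtures before running" >&2
fi
```

A scan like this is a cheap tripwire, not a guarantee; the stronger control remains keeping real credentials out of the sandbox entirely.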
The optimization loop may continue using model time and modifying sandbox files until it is paused, stopped, or reaches the plateau condition.
The skill intentionally creates recurring background automation. It is disclosed and tied to the optimization purpose, but it persists beyond the initial invocation.
The skill creates a cron job that runs every 5 minutes ... Invokes iteration-prompt.md with sandbox path
Choose an appropriate schedule, monitor `scores.md`, and know how to stop or remove the cron job when the optimization is complete.
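A sketch of the stop step: dump the crontab to a file, filter out the autoagent entry, review the diff, then apply. Matching on `iteration-prompt.md` follows this finding's evidence and may need adjusting to the real entry.

```shell
crontab -l > cron.bak 2>/dev/null || touch cron.bak   # empty file if no crontab
grep -v 'iteration-prompt.md' cron.bak > cron.new || true
diff cron.bak cron.new || true   # the diff should show only the removed entry
# crontab cron.new               # apply once the diff looks right
```

Filtering a dumped copy instead of editing the live crontab means nothing changes until you have confirmed exactly which line disappears.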
