Apprentice
Warn. Audited by ClawScan on May 10, 2026.
Overview
Apprentice is a coherent workflow-learning skill, but it needs review because it stores broad observations as permanent workflows and can run learned bash scripts with broad local access and weak path/credential boundaries.
Review this skill carefully before installing. Do not teach it workflows containing secrets, production credentials, destructive commands, or sensitive account actions unless you have audited the generated files. Inspect generated SKILL.md and run.sh files before running them, prefer preview/dry-run mode, and verify the installed paths are contained under the apprentice skill directory.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A crafted workflow name or compromised workflow directory could cause the agent to run a shell script outside the intended learned-workflow library with the user's local permissions.
The workflow name is user-controlled and is joined into a filesystem path without slug validation, absolute-path rejection, path traversal checks, or containment under WORKFLOWS_DIR before executing run.sh with bash.
parser.add_argument("workflow", nargs="?", help="Workflow name to run")
...
workflow_dir = WORKFLOWS_DIR / workflow_name
...
run_script = workflow_dir / "run.sh"
...
subprocess.run(["bash", str(run_script)], env=env, capture_output=False)

Validate workflow names as safe slugs, resolve the target path, require it to remain inside the workflows directory, and require preview/approval before executing any learned script.
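A minimal sketch of the recommended containment check, assuming Python 3.9+ (`Path.is_relative_to`); the function name and slug pattern are illustrative, not taken from the skill:

```python
import re
from pathlib import Path

# Hypothetical root; the skill derives this as SKILL_DIR / "workflows"
WORKFLOWS_DIR = Path("workflows").resolve()

# Lowercase slug: alphanumeric start, then alphanumerics, hyphens, underscores
SLUG_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{0,63}$")

def resolve_workflow_script(workflow_name: str) -> Path:
    """Validate the name as a slug and confirm the resolved path stays inside WORKFLOWS_DIR."""
    if not SLUG_RE.match(workflow_name):
        raise ValueError(f"invalid workflow name: {workflow_name!r}")
    run_script = (WORKFLOWS_DIR / workflow_name / "run.sh").resolve()
    # resolve() collapses any ../ components; the containment check then rejects escapes
    if not run_script.is_relative_to(WORKFLOWS_DIR):
        raise ValueError("workflow path escapes the workflows directory")
    return run_script
```

The slug check alone rejects separators and traversal sequences; the `is_relative_to` check is a second layer in case the directory itself is a symlink or the rules loosen later.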
A learned or path-traversed script could read environment secrets available to the agent even though the skill declares no credential requirement.
Every workflow script inherits the full agent process environment, not just declared workflow variables, which may include tokens or service credentials.
env = os.environ.copy()
if variables:
    for k, v in variables.items():
        env[k.upper()] = str(v)
...
subprocess.run(["bash", str(run_script)], env=env, capture_output=False)

Pass a minimal allowlist of required variables to workflow scripts, document any credential use, and avoid inheriting the full process environment by default.
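A sketch of the allowlist approach, replacing the `os.environ.copy()` shown above; the helper names and the specific allowlisted variables are assumptions, not the skill's API:

```python
import os
import subprocess

# Only variables shell scripts genuinely need; extend deliberately, not by default.
BASE_ENV_ALLOWLIST = ("PATH", "HOME", "LANG")

def build_workflow_env(variables=None):
    """Build a minimal environment: a small allowlist plus declared workflow variables."""
    env = {k: os.environ[k] for k in BASE_ENV_ALLOWLIST if k in os.environ}
    for k, v in (variables or {}).items():
        env[k.upper()] = str(v)
    return env

def run_workflow(run_script, variables=None):
    """Run a learned workflow without inheriting the agent's full process environment."""
    return subprocess.run(["bash", str(run_script)], env=build_workflow_env(variables),
                          capture_output=False)
```

With this shape, a learned script never sees ambient tokens such as cloud or API credentials unless they are explicitly declared as workflow variables.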
Observation state and generated workflow skills may be created outside the expected apprentice folder, making them harder to audit, remove, or isolate from other skills.
The provided file is at path observe.py, while the README/metadata describe scripts/observe.py. If installed as provided, parent.parent points outside the apprentice skill directory, so logs and workflows are written to a broader parent location.
SKILL_DIR = Path(__file__).parent.parent
WORKFLOWS_DIR = SKILL_DIR / "workflows"
ACTIVE_SESSION_FILE = SKILL_DIR / ".observation_active.json"
Make the package layout match the code, or compute paths from the actual skill root. Declare the generated workflow and state paths explicitly.
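One way to compute paths from the actual skill root rather than a fixed `parent.parent` nesting depth is to walk upward until the directory containing SKILL.md is found; this helper is a hypothetical sketch, not code from the skill:

```python
from pathlib import Path

def find_skill_root(start: Path) -> Path:
    """Walk upward from a starting directory until a directory containing SKILL.md is found."""
    for candidate in [start, *start.parents]:
        if (candidate / "SKILL.md").is_file():
            return candidate
    raise FileNotFoundError(f"SKILL.md not found above {start}")
```

Anchoring state this way keeps logs and generated workflows inside the skill directory whether `observe.py` is installed at the top level or under `scripts/`.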
Private commands, project details, or secrets mentioned during teaching could be stored and reused by future agent sessions.
The skill is designed to capture broad user activity and convert it into persistent agent-readable workflow files, with no clear redaction, retention, or secret-exclusion mechanism in the artifacts.
It listens and records your actions.
- Commands you run
- Files you create, edit, or delete
- Decisions you make and why
...
Approved workflows live in `apprentice/workflows/` as named SKILL.md files.
Add explicit secret redaction guidance, retention/deletion controls, review gates for raw observations and generated skills, and warnings not to teach workflows containing credentials or sensitive account actions.
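A redaction pass over raw observations before they are persisted could look like the following; the patterns shown are illustrative examples of common credential shapes, and a real implementation would need broader coverage:

```python
import re

# Illustrative patterns only; extend for the credential formats your environment uses.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token shape
]

def redact(text: str) -> str:
    """Replace likely credentials with a placeholder before persisting observations."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Pattern-based redaction is a mitigation, not a guarantee, so it belongs alongside the review gates and the warning not to teach credential-bearing workflows, not in place of them.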
Users may teach sensitive workflows believing the data is strictly local, even though the observed content may be included in model context for synthesis.
The privacy statement is internally conflicting: it says observations are sent to the LLM session while also claiming nothing leaves the machine. That may be false if the user's LLM session is remote.
| None | Fully local | Nothing leaves your machine |
...
The only thing it sends to the LLM is your described observation + synthesis request, using your existing session. Nothing leaves your machine.
Replace absolute 'nothing leaves your machine' claims with precise model-session data handling language, and clearly warn when observations may be sent to a cloud LLM.
