Senior Prompt Engineer
Pass. Audited by ClawScan on May 1, 2026.
Overview
This appears to be a purpose-aligned prompt-engineering toolkit with three minor points to note: it includes local Python helper scripts, carries limited provenance metadata, and contains agent tool-use examples that should be adapted with safeguards.
This skill looks reasonable for prompt engineering and LLM workflow work. Before installing, be aware that it includes Python scripts you may run locally; review them, run them only on files you choose, and add proper safety controls if you reuse the agent-orchestration examples in a real system.
Findings (3)
This is an artifact-based, informational review of SKILL.md, the registry metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
You have limited external context for trusting the bundled scripts, even though their documented purpose is coherent.
The skill includes runnable helper scripts, but the registry metadata gives limited provenance and no homepage or install specification.
Source: unknown; Homepage: none; Install spec: none (instruction-only skill); Code file presence: 3 code file(s)
Review the bundled scripts before running them and prefer installing skills from publishers or repositories you trust.
Running these commands executes local code and processes whatever files you pass to them.
The skill instructs users to run local Python scripts. This is disclosed and central to the stated prompt/RAG/agent-design purpose.
python scripts/prompt_optimizer.py prompts/my_prompt.txt --analyze
python scripts/rag_evaluator.py --contexts contexts.json --questions questions.json
python scripts/agent_orchestrator.py agent_config.yaml --visualize
Run the scripts only from the reviewed skill directory and only against files you intend to analyze or transform.
If copied directly into a real agent, the pattern could allow repeated tool calls without a clear iteration cap or human approval for high-impact actions.
The reference documentation shows an automatic tool-calling loop. It is presented as agent-design pseudocode, not as hidden runtime behavior, but it needs guardrails if reused.
while True:
    ...
    response = llm.chat(..., tool_choice="auto")
    ...
    registry.execute(call.function.name, json.loads(call.function.arguments))
When adapting the example, add max-iteration limits, tool allowlists, input validation, logging, and human approval for mutating or sensitive tools.
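A minimal sketch of those guardrails, assuming a hypothetical `llm.chat` client and tool `registry` with the same shapes the pseudocode above implies; the allowlist names, `approve` callback, and message format are illustrative, not part of the skill:

```python
import json

MAX_ITERATIONS = 10
ALLOWED_TOOLS = {"search_docs", "summarize"}    # read-only tools (illustrative names)
NEEDS_APPROVAL = {"delete_file", "send_email"}  # mutating/sensitive tools (illustrative)

def run_agent(llm, registry, messages, approve=lambda call: False):
    """Guarded version of the reference `while True` tool loop (sketch)."""
    for iteration in range(MAX_ITERATIONS):      # hard iteration cap instead of while True
        response = llm.chat(messages, tool_choice="auto")
        if not response.tool_calls:
            return response                       # model finished without requesting tools
        for call in response.tool_calls:
            name = call.function.name
            if name not in ALLOWED_TOOLS | NEEDS_APPROVAL:
                raise PermissionError(f"tool {name!r} is not on the allowlist")
            if name in NEEDS_APPROVAL and not approve(call):
                raise PermissionError(f"tool {name!r} requires human approval")
            args = json.loads(call.function.arguments)  # parse/validate before executing
            result = registry.execute(name, args)
            print(f"[iter {iteration}] {name} -> {result}")  # log every tool call
            messages.append({"role": "tool", "name": name, "content": str(result)})
    raise RuntimeError("agent exceeded MAX_ITERATIONS without finishing")
```

The cap converts a potentially unbounded loop into a bounded one, and the two-tier allowlist lets read-only tools run automatically while routing mutating tools through the `approve` callback.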
