Senior Prompt Engineer
Advisory
Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
There is less external context for trusting the bundled scripts, even though their documented purpose is coherent.
The skill includes runnable helper scripts, but the registry metadata gives limited provenance and no homepage or install specification.
Source: unknown
Homepage: none
Install specification: none (the metadata labels this an instruction-only skill)
Code file presence: 3 code file(s)
Review the bundled scripts before running them and prefer installing skills from publishers or repositories you trust.
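As a concrete starting point for that review, the sketch below scans the bundled .py files for patterns that commonly warrant a closer look. This is an illustrative aid, not part of the scan: the pattern list is incomplete by design, a match only means "read this file by hand", and a clean result is not a safety guarantee.

```python
import re
from pathlib import Path

# Illustrative patterns only; a hit is a prompt for manual review, not proof
# of malice, and the absence of hits does not prove a script is safe.
RISKY_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "subprocess / shell use": re.compile(r"\b(subprocess|os\.system|os\.popen)\b"),
    "network access": re.compile(r"\b(socket|requests|urllib|http\.client)\b"),
    "obfuscation helpers": re.compile(r"\b(base64\.b64decode|compile)\s*\("),
}

def review_scripts(skill_dir):
    """Return {file path: [matched pattern labels]} for .py files under skill_dir."""
    findings = {}
    for path in sorted(Path(skill_dir).rglob("*.py")):
        text = path.read_text(errors="replace")
        hits = [label for label, rx in RISKY_PATTERNS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

Running it over the skill directory before first use gives you a short list of files and reasons to read them, rather than a binary verdict.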
Running the commands executes local code and processes the files you pass to them.
The skill instructs users to run local Python scripts. This is disclosed and central to the stated prompt/RAG/agent-design purpose.
python scripts/prompt_optimizer.py prompts/my_prompt.txt --analyze ...
python scripts/rag_evaluator.py --contexts contexts.json --questions questions.json ...
python scripts/agent_orchestrator.py agent_config.yaml --visualize
Run the scripts only from the reviewed skill directory and only against files you intend to analyze or transform.
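One way to enforce that constraint mechanically is a small wrapper that refuses to run anything whose path resolves outside the reviewed skill directory. This is a minimal sketch under stated assumptions: the function name, the 60-second timeout, and the error handling are illustrative choices, not behavior documented by the skill.

```python
import subprocess
import sys
from pathlib import Path

def run_skill_script(skill_dir, script, *args):
    """Invoke a bundled script, but only if it lives inside skill_dir.

    Rejects paths (e.g. "../x.py" or absolute paths) that resolve outside
    the reviewed directory, and runs with cwd pinned to that directory.
    """
    skill_dir = Path(skill_dir).resolve()
    script_path = (skill_dir / script).resolve()
    if skill_dir not in script_path.parents:
        raise ValueError(f"{script} resolves outside {skill_dir}")
    return subprocess.run(
        [sys.executable, str(script_path), *map(str, args)],
        cwd=skill_dir,
        capture_output=True,
        text=True,
        timeout=60,  # illustrative cap; tune for the scripts you actually run
    )
```

Input files you intend to analyze are then passed explicitly as `args`, which keeps the "only against files you intend" rule visible at each call site.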
If copied directly into a real agent, the pattern could allow repeated tool calls without a clear iteration cap or human approval for high-impact actions.
The reference documentation shows an automatic tool-calling loop. It is presented as agent-design pseudocode, not as hidden runtime behavior, but it needs guardrails if reused.
while True:
    ...
    response = llm.chat(..., tool_choice="auto")
    ...
    registry.execute(call.function.name, json.loads(call.function.arguments))
When adapting the example, add max-iteration limits, tool allowlists, input validation, logging, and human approval for mutating or sensitive tools.
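A minimal sketch of those guardrails, reusing the `llm.chat` / `registry.execute` shape from the pseudocode. The tool names, the `approve` hook, and the cap of 10 iterations are illustrative assumptions, not part of the skill; real input validation would go beyond the JSON parse shown here.

```python
import json

MAX_ITERATIONS = 10                           # hard cap instead of `while True`
ALLOWED_TOOLS = {"search_docs", "summarize"}  # hypothetical read-only tools
SENSITIVE_TOOLS = {"delete_file"}             # hypothetical mutating tools

def run_agent(llm, registry, messages, approve=lambda call: False):
    """Guarded variant of the skill's auto tool-calling loop.

    Adds: an iteration cap, a tool allowlist, JSON validation of arguments
    before dispatch, and a human-approval hook for sensitive tools.
    """
    for _ in range(MAX_ITERATIONS):
        response = llm.chat(messages, tool_choice="auto")
        calls = getattr(response, "tool_calls", None)
        if not calls:
            return response  # model finished without requesting a tool
        for call in calls:
            name = call.function.name
            if name not in ALLOWED_TOOLS | SENSITIVE_TOOLS:
                raise PermissionError(f"tool {name!r} is not allowlisted")
            if name in SENSITIVE_TOOLS and not approve(call):
                raise PermissionError(f"tool {name!r} requires human approval")
            args = json.loads(call.function.arguments)  # reject malformed args early
            result = registry.execute(name, args)
            messages.append({"role": "tool", "name": name, "content": str(result)})
    raise RuntimeError(f"agent exceeded {MAX_ITERATIONS} iterations")
```

Logging each dispatched call (name plus parsed arguments) is a one-line addition inside the inner loop and is worth keeping even in prototypes.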
