Senior Prompt Engineer

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: senior-prompt-engineer
Version: 2.1.1

The 'senior-prompt-engineer' skill bundle is a comprehensive toolkit for prompt optimization and LLM evaluation. The included Python scripts (agent_orchestrator.py, prompt_optimizer.py, and rag_evaluator.py) perform static analysis, token estimation, and heuristic scoring of RAG systems without using dangerous functions such as eval(), subprocess, or network calls. The documentation and SKILL.md instructions are strictly technical, aligned with the stated purpose, and show no signs of malicious intent or prompt injection attacks.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

With limited provenance, you have less external context for trusting the bundled scripts, even though their documented purpose is coherent.

Why it was flagged

The skill includes runnable helper scripts, but the registry metadata gives limited provenance and no homepage or install specification.

Skill content
Source: unknown
Homepage: none
Install specification: none (listed as an instruction-only skill)
Code file presence: 3 code file(s)
Recommendation

Review the bundled scripts before running them and prefer installing skills from publishers or repositories you trust.
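One way to start the recommended pre-run review is a quick static triage of the bundled scripts. The sketch below is an assumption-laden helper, not part of the skill: it assumes the scripts/ layout shown in the skill's own commands, and its pattern list simply mirrors the functions the Overview says the scripts avoid (eval(), subprocess, network calls). A hit is a reason to read the file closely, not proof of malice.

```python
import re
from pathlib import Path

# Patterns mirroring what the Overview claims the scripts avoid:
# eval()/exec(), subprocess, and common network modules.
RISKY = re.compile(r"\b(eval|exec)\s*\(|subprocess|socket|urllib|requests")

def triage(skill_dir):
    """Return {filename: [line numbers]} where a risky pattern appears
    in any Python file under the skill's scripts/ directory."""
    hits = {}
    for path in Path(skill_dir).glob("scripts/*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if RISKY.search(line):
                hits.setdefault(path.name, []).append(lineno)
    return hits
```

An empty result is consistent with the auditor's claim; any hit deserves a manual look before you run the script.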

What this means

Running these commands executes local code and processes whatever files you pass to them.

Why it was flagged

The skill instructs users to run local Python scripts. This is disclosed and central to the stated prompt/RAG/agent-design purpose.

Skill content
python scripts/prompt_optimizer.py prompts/my_prompt.txt --analyze
...
python scripts/rag_evaluator.py --contexts contexts.json --questions questions.json
...
python scripts/agent_orchestrator.py agent_config.yaml --visualize
Recommendation

Run the scripts only from the reviewed skill directory and only against files you intend to analyze or transform.

What this means

If copied directly into a real agent, the pattern could allow repeated tool calls without a clear iteration cap or human approval for high-impact actions.

Why it was flagged

The reference documentation shows an automatic tool-calling loop. It is presented as agent-design pseudocode, not as hidden runtime behavior, but it needs guardrails if reused.

Skill content
while True:
    ...
    response = llm.chat(..., tool_choice="auto")
    ...
    registry.execute(call.function.name, json.loads(call.function.arguments))
Recommendation

When adapting the example, add max-iteration limits, tool allowlists, input validation, logging, and human approval for mutating or sensitive tools.
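The guardrails above can be sketched around the skill's own pseudocode. The llm.chat(...) and registry.execute(name, args) interfaces below are assumptions carried over from that pseudocode, and the tool names are hypothetical; this is a minimal guarded variant, not a definitive implementation.

```python
import json
import logging

logger = logging.getLogger("agent")

MAX_ITERATIONS = 10                              # hard cap on the loop
TOOL_ALLOWLIST = {"search_docs", "summarize"}    # hypothetical tool names
SENSITIVE_TOOLS = {"write_file"}                 # require human approval

def run_agent(llm, registry, messages, approve=input):
    """Guarded variant of the skill's auto tool-calling loop.

    `llm` and `registry` follow the interfaces implied by the skill's
    pseudocode; both are assumptions, not a real library API.
    """
    for iteration in range(MAX_ITERATIONS):      # max-iteration limit
        response = llm.chat(messages=messages, tool_choice="auto")
        calls = getattr(response, "tool_calls", None)
        if not calls:
            return response                      # model produced a final answer
        for call in calls:
            name = call.function.name
            if name not in TOOL_ALLOWLIST | SENSITIVE_TOOLS:
                logger.warning("blocked tool: %s", name)   # allowlist
                continue
            try:
                args = json.loads(call.function.arguments)  # input validation
            except json.JSONDecodeError:
                logger.error("bad arguments for %s", name)
                continue
            if name in SENSITIVE_TOOLS:
                if approve(f"Run {name}? [y/N] ").lower() != "y":
                    continue                     # human declined
            logger.info("executing %s(%s)", name, args)     # logging
            messages.append(registry.execute(name, args))
    raise RuntimeError(f"no answer after {MAX_ITERATIONS} iterations")
```

Passing a custom `approve` callback lets tests or batch jobs substitute a policy for the interactive prompt.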