Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reflection Engine

v1.0.0

Enables AI agents to self-audit decision steps, identify reasoning bottlenecks, and generate improvement patches via chain-of-thought critique.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for albionaiinc-del/agent-reflection-engine.

Prompt Preview: Install & Setup
Install the skill "Agent Reflection Engine" (albionaiinc-del/agent-reflection-engine) from ClawHub.
Skill page: https://clawhub.ai/albionaiinc-del/agent-reflection-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-reflection-engine

ClawHub CLI

Package manager switcher

npx clawhub@latest install agent-reflection-engine
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the included code: both describe a reflection engine that reads a JSON trace and emits critiques. No unrelated credentials, binaries, or network access are requested. The inclusion of a small Python tool is proportional to the stated purpose.
Instruction Scope
The SKILL.md example calls 'python agent_reflection_engine.py', but the provided file is tool.py (the module's top comment also names agent_reflection_engine.py). The code operates only on a provided trace file (no exfiltration), but there are clear logic bugs: for example, the summary references 'report' inside the same dict literal that defines it, which raises a NameError at runtime. The reflection heuristics are also naive (misspelled substring checks like 'inconsisten' and 'efficien') and could produce unhelpful or misleading output. These inconsistencies mean the runtime behavior may differ from the documented usage and could crash.
Install Mechanism
No install spec — instruction-only plus a single Python file. No external downloads or package installs are requested, which minimizes supply-chain risk.
Credentials
No environment variables, credentials, or config paths are required. The tool operates on a local JSON trace file only, which is proportionate to the stated purpose.
Persistence & Privilege
The skill does not request persistent/always-on presence or privilege escalation. It is user-invocable and does not alter other skills or system-wide settings.
What to consider before installing
This skill is not obviously malicious, but it contains coherence and correctness problems you should fix before trusting it with real traces. Recommended steps:

1. Run it in a sandbox with non-sensitive demo traces to reproduce the behavior.
2. Update the invocation examples so SKILL.md matches the actual filename (tool.py), or rename the file.
3. Fix the runtime bug where the report's summary references 'report' while the dict is still being built; this currently causes a crash.
4. Review and improve the simplistic string checks; misspellings like 'inconsisten'/'efficien' make the heuristics unreliable.
5. Only run it on sensitive agent traces after the above fixes, and after verifying the output does not leak data externally.

If you need higher assurance, ask the author for a corrected release with unit tests, or perform a code review focused on correctness and privacy.
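The crash noted above comes from a dict literal that references its own name before the assignment completes. A minimal sketch of the corrected pattern, using hypothetical field names (`critiques`, `summary`) rather than the skill's actual schema:

```python
# Build the report first, then derive the summary from the finished dict.
# Referencing `report` inside its own literal raises NameError, because the
# name is only bound after the whole expression is evaluated.
def build_report(critiques):
    report = {"critiques": critiques}
    report["summary"] = {
        "total_steps": len(report["critiques"]),
        "flagged": sum(1 for c in report["critiques"] if c.get("issues")),
    }
    return report

report = build_report([
    {"step_id": 1, "issues": []},
    {"step_id": 2, "issues": ["inconsistent reasoning"]},
])
print(report["summary"])  # {'total_steps': 2, 'flagged': 1}
```

Splitting the construction into two statements lets the summary read from the completed critiques list instead of a name that is still unbound.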

Like a lobster shell, security has layers — review code before you run it.

latest: vk97er8cxkyskab6n0ydqgfgytx8501x2
67 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Agent Reflection Engine

A lightweight, pluggable reflection engine that enables AI agents to self-audit their decision traces, identify reasoning bottlenecks, and generate improvement patches using chain-of-thought critique—ideal for developers tuning autonomous agents.

Usage

# Run reflection on an agent trace
python agent_reflection_engine.py traces/demo_trace.json -o reports/reflection.json --verbose

# Example trace format (demo_trace.json):
# [
#   {
#     "step_id": 1,
#     "thoughts": "I should search for the nearest coffee shop.",
#     "action": "search_web",
#     "value": "coffee shop near me",
#     "observation": "Found 'Brew Haven' 0.3 miles away."
#   }
# ]

Integrate into agent loops by logging each step and running periodic reflection to generate improvement heuristics.
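The integration loop described above could be sketched as follows. Note that `run_reflection` is a hypothetical stand-in for invoking the skill's script, and the cadence and critique logic are assumptions, not the skill's actual behavior:

```python
# Sketch: log each agent step, then run a reflection pass every N steps.
# Trace entries follow the demo format from the usage example above.
REFLECT_EVERY = 5  # reflection cadence (an assumption, tune as needed)

def run_reflection(trace):
    # Placeholder critique: flag steps whose 'thoughts' field is empty.
    return [
        {"step_id": s["step_id"],
         "issues": [] if s["thoughts"] else ["empty thoughts"]}
        for s in trace
    ]

trace = []
for step_id in range(1, 7):
    trace.append({
        "step_id": step_id,
        "thoughts": "plan next action" if step_id != 3 else "",
        "action": "search_web",
        "value": "coffee shop near me",
        "observation": "Found 'Brew Haven' 0.3 miles away.",
    })
    if step_id % REFLECT_EVERY == 0:
        flagged = [c for c in run_reflection(trace) if c["issues"]]
        print(f"step {step_id}: {len(flagged)} flagged step(s)")
        # → step 5: 1 flagged step(s)
```

In a real deployment you would write `trace` to a JSON file and invoke the skill's script on it, rather than calling an in-process function.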

Price

$4.99
