AGENTIC AI GOLD STANDARD
Review audited by ClawScan on May 10, 2026.
Overview
The skill consists mainly of marketing copy and simulated examples, yet it promises always-on, self-improving agents and security controls without a clear implementation or defined boundaries.
Review carefully before installing. Do not rely on the claimed security gates or self-improvement features without seeing real implementation and tests. If you experiment, use an isolated environment, avoid sensitive data, pin dependencies, and do not grant API keys or background/persistent permissions until the behavior is auditable and user-approved.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill may encourage users to trust or enable background agent activity that keeps operating or changes behavior while they are not monitoring it.
This advertises autonomous, persistent behavior and self-improvement without clearly defining user approval, stopping conditions, rollback, or containment.
"Our 4-member Persistent Council runs 24/7... The skill gets better without your intervention."
Do not enable persistent or self-improving operation unless you can inspect the real implementation and enforce explicit approval, logging, rollback, and shutdown controls.
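The approval, logging, and shutdown controls recommended above can be sketched as a minimal launch wrapper. The file names, log format, and the commented-out agent command are illustrative assumptions, not part of the skill:

```shell
# Hypothetical approval gate: refuse to launch any persistent process
# unless the operator has explicitly opted in, and log every decision.
set -euo pipefail

APPROVAL_FILE="agent.approved"   # illustrative opt-in marker
LOG_FILE="agent-launch.log"      # illustrative audit log

launch_agent() {
    if [ ! -f "$APPROVAL_FILE" ]; then
        echo "$(date -u +%FT%TZ) DENIED: no operator approval" >> "$LOG_FILE"
        return 1
    fi
    echo "$(date -u +%FT%TZ) APPROVED: launching" >> "$LOG_FILE"
    # exec ./persistent_agent &   # placeholder for the real agent process
}

launch_agent || echo "launch blocked"
touch "$APPROVAL_FILE"           # operator opt-in, e.g. after code review
launch_agent && echo "launch permitted"
```

Removing the approval file (and killing the process it guards) is the corresponding shutdown control; the audit log gives a trail to review before re-enabling.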
A user could over-trust claimed security controls that are not demonstrated by the included artifacts.
The listing makes strong safety and production-readiness claims, but the supplied executable examples are simulations/printouts rather than evidence of actual gate enforcement.
"17 dharmic security gates protecting every action... Version 4.0 — Production Ready"
Treat the security and production-readiness claims as unverified marketing until actual source code, tests, and enforcement mechanisms are reviewed.
If implemented, user prompts, code, or other context could be retained and reused across tasks unless memory boundaries are configured.
Persistent memory is central to the advertised framework, but the artifacts do not specify what data is stored, retention, deletion, isolation, or poisoning controls.
"5-Layer Memory Architecture... Agents that remember how they learned, not just what."
Use only with non-sensitive data until storage location, retention, deletion, and memory-isolation behavior are documented and verified.
Running the installer could pull third-party code that changes between runs, and install failures may be silently hidden while the script continues.
The optional installer fetches multiple unpinned packages and suppresses install errors; dependency installation is purpose-aligned but has weak provenance/version control.
pip install -q langgraph openai-agents crewai pydantic-ai mem0 zep-python 2>/dev/null || true
Run in a virtual environment, pin and review dependency versions, and remove error suppression before relying on the installation.
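The fix suggested above can be sketched as a pinned requirements file installed without error suppression. The 0.0.0 version numbers are placeholders showing the shape of the file, not verified releases; the package names are taken from the skill's installer line:

```shell
# Fail fast instead of masking errors with "2>/dev/null || true".
set -euo pipefail

# Pin every dependency; the 0.0.0 versions below are placeholders only.
cat > requirements.txt <<'EOF'
langgraph==0.0.0
openai-agents==0.0.0
crewai==0.0.0
pydantic-ai==0.0.0
mem0==0.0.0
zep-python==0.0.0
EOF

# In a fresh virtual environment, install without suppression so any
# failure stops the script:
#   python3 -m venv .venv && . .venv/bin/activate
#   pip install -r requirements.txt
grep -c '==' requirements.txt   # prints 6: every package is pinned
```

With `set -e` in place and the `|| true` removed, a failed or tampered download aborts the install instead of being swallowed.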
Users may not realize the skill can involve a paid model-provider account or API usage costs.
The installer suggests an optional provider API key even though registry metadata declares no credentials or environment variables; this is expected for model-provider fallback but under-declared.
export OPENROUTER_API_KEY=your_key_here
Use a minimally scoped provider key, monitor usage, and confirm which code will access the key before setting it.
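One quick way to act on that recommendation is to audit which files actually read the key before ever setting it. The directory layout and file below are hypothetical stand-ins for the skill's real source:

```shell
set -euo pipefail

# Hypothetical checkout with one file that reads the provider key.
mkdir -p skill_src
printf 'import os\nkey = os.environ.get("OPENROUTER_API_KEY")\n' > skill_src/client.py

# List every file that touches the key before exporting anything:
grep -rl 'OPENROUTER_API_KEY' skill_src   # prints skill_src/client.py

# Then scope the key to a single command instead of the whole shell, e.g.:
#   OPENROUTER_API_KEY=your_key_here python skill_src/client.py
```

Per-command scoping keeps the key out of the shell profile and out of every unrelated process the skill might spawn.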
