Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Metaskill
v1.3.0 · Teaches AI agents how to learn better by enforcing deep correction, transfer learning, and proactive pattern recognition. Use when an error occurs and needs...
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to implement self-correction, transfer, and success-capture using LLMs, and the included scripts implement exactly that, so provider API keys and a local workspace are reasonable requirements. However, the registry metadata declares no required environment variables or primary credential, while the code clearly expects provider API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY) or a local Ollama instance. This metadata mismatch is significant and unexplained.
Instruction Scope
Runtime instructions and scripts operate on a workspace detected via git or OPENCLAW_WORKSPACE (or $HOME/.openclaw/workspace), read and append to LEARNINGS.md / WINS.md / ERRORS.md, and perform LLM calls for extraction/transfer. That behavior aligns with the stated purpose. The scripts will write into skills/self-improving-agent/.learnings/ if present — i.e., they may read/write another skill's learning files — which is related to purpose but expands the scope beyond a purely local, isolated skill.
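The workspace-detection order described above (git repository first, then OPENCLAW_WORKSPACE, then $HOME/.openclaw/workspace) can be sketched as follows. This is an illustration, not the skill's actual code: the function name `detect_workspace` is hypothetical, and the upward search for a `.git` directory stands in for whatever git invocation the scripts actually use.

```python
import os
from pathlib import Path

def detect_workspace(start=None, env=None):
    """Sketch of the described detection order: nearest enclosing git
    repository, then OPENCLAW_WORKSPACE, then ~/.openclaw/workspace."""
    start = Path.cwd() if start is None else Path(start)
    env = dict(os.environ) if env is None else env
    # Walk upward looking for a .git directory (stand-in for `git rev-parse`).
    for candidate in [start, *start.parents]:
        if (candidate / ".git").exists():
            return candidate
    if env.get("OPENCLAW_WORKSPACE"):
        return Path(env["OPENCLAW_WORKSPACE"])
    return Path.home() / ".openclaw" / "workspace"
```

Knowing this order matters for auditing: setting OPENCLAW_WORKSPACE has no effect when the scripts are run from inside a git checkout, since the repository root wins.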
Install Mechanism
No install spec; skill is instruction+scripts only (no downloads or external installers). That lowers install-time risk — nothing is fetched or executed during install. Runtime does invoke local Python and network libraries as needed.
Credentials
Metadata lists no required env vars, but scripts/llm_provider.py expects provider API keys in the environment (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY) or a local Ollama service. The skill also honors OPENCLAW_WORKSPACE if set. Requesting LLM keys is proportionate to LLM use, but their omission from the registry metadata is an inconsistency. Additionally, using remote providers means user content (error descriptions, learnings) may be sent to third-party LLM endpoints, a privacy/exfiltration consideration that the metadata does not warn about.
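A quick way to check which of these credentials are present before enabling the skill is a small helper like the one below. The function name `available_providers` and the ordering are illustrative assumptions; only the three environment variable names come from the scan itself.

```python
import os

# Env vars the bundled scripts reportedly look for.
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def available_providers(env=None):
    """Return the remote providers that have credentials set; an empty
    list means the scripts would need a local Ollama instance or
    manual/fallback mode."""
    env = dict(os.environ) if env is None else env
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]
```

Running this before installation tells you exactly which third-party endpoints the skill could reach from your shell.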
Persistence & Privilege
The skill does not request 'always: true' and is user-invocable only. At runtime it creates and writes .learnings/ files under the workspace and will write into skills/self-improving-agent/.learnings/ if present. It does not modify other skills' configuration files, but it does read/write another skill's data area when available — this cross-skill file access is meaningful and should be acceptable only if you trust the skill and backups of that data exist.
What to consider before installing
Key points to consider before installing or running Metaskill:
- Metadata mismatch: The registry says no env vars are required, but the code expects API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY) or a local Ollama server. Ask the publisher to correct the metadata or be prepared to set these env vars if you want full LLM functionality.
- Data sent to external LLMs: When the scripts run in LLM mode they send error descriptions and excerpts of your LEARNINGS.md to third-party services (Anthropic, OpenAI, Google generative API) or to a local Ollama instance. If those messages may contain sensitive information, prefer using a local Ollama model or run in manual/fallback mode.
- File writes and cross-skill access: The skill will create and append files under your OpenClaw workspace, including writing into skills/self-improving-agent/.learnings/ if that directory exists. Back up any important learnings before running; review file paths in the scripts if you want different locations.
- Audit and test: Review or run the scripts in a sandboxed test workspace first. If you want to be extra cautious, run with environment variables unset to force offline/manual behavior, or configure providers in config.yaml to use a local Ollama instance.
- Trust & provenance: The source/homepage are unknown and the owner is an unfamiliar ID. If you require strong provenance, request publisher information or prefer skills from known sources.
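The sandboxed-test advice above can be made concrete with a helper that strips the provider keys and pins the workspace to a throwaway directory. `sandboxed_env` is a hypothetical name, and the commented invocation of scripts/llm_provider.py is an assumed usage, not a documented entry point.

```python
import os

# Provider keys named in the scan; removing them forces the scripts
# into offline/manual mode instead of calling remote LLM endpoints.
SENSITIVE_KEYS = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY")

def sandboxed_env(workspace, base=None):
    """Return a copy of the environment with provider keys removed and
    OPENCLAW_WORKSPACE pinned to a throwaway directory."""
    base = dict(os.environ) if base is None else dict(base)
    env = {k: v for k, v in base.items() if k not in SENSITIVE_KEYS}
    env["OPENCLAW_WORKSPACE"] = str(workspace)
    return env

# Hypothetical test run of one of the skill's scripts:
# subprocess.run(["python", "scripts/llm_provider.py"],
#                env=sandboxed_env("/tmp/metaskill-test"))
```

Run the scripts this way outside any git checkout; otherwise the repository root may be detected as the workspace regardless of OPENCLAW_WORKSPACE.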
What would make this benign: updating the registry metadata to list optional/required env vars, and explicit documentation of data flows (what is sent to LLMs). If you cannot confirm provenance or cannot tolerate sending learning content to remote LLMs, do not enable LLM mode and/or run in an isolated workspace.
latest: vk972kd4cza47senkewnna8amfd8204y4
