Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Claw Compactor
v1.0.0 · Automation skill for Claw Compactor.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (medium confidence)

Purpose & Capability
The name and description (token compression + Engram LLM memory) align with the bundled code: a large compressor/engram implementation is present. However, the skill package metadata declares no required environment variables or credentials, while SKILL.md and the code clearly expect LLM API keys and may call an LLM proxy (e.g., OPENAI_API_KEY/ANTHROPIC_API_KEY/OPENAI_BASE_URL). The benchmark code also auto-loads a .env file from the project root, implying access to local secrets that was never declared.
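For reviewers unfamiliar with the pattern, auto-loading a .env file typically looks like the following. This is a hypothetical sketch of the behavior the scan describes, not code from the skill bundle; `load_dotenv_like` is an illustrative stand-in for a python-dotenv-style call.

```python
# Illustrative only: the kind of silent .env loading the scan flags.
# Secrets in a project-root .env end up in the process environment
# without any explicit user consent.
import os
from pathlib import Path

def load_dotenv_like(path=".env"):
    """Parse simple KEY=VALUE lines into os.environ, python-dotenv style."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: real env vars win over .env values
        os.environ.setdefault(key.strip(), value.strip())
```

Spotting a call like this near a skill's entry point is a signal that local secrets may be read as a side effect of simply running the tool.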
Instruction Scope
SKILL.md instructs the agent to run scripts that read and compress an entire workspace (examples use ~/.openclaw/workspace), start daemons, ingest arbitrary files, and 'print injectable context' for system prompts. The code actively loads a .env file and supports ingesting arbitrary filesystem paths. Those behaviors go beyond a narrow helper and could expose secrets or large amounts of user data if run automatically.
Install Mechanism
There is no external install spec (no downloads), which lowers supply-chain risk. However the skill bundle includes 100+ source files that will be present on disk when the skill is installed; that means code will run from the skill directory and should be audited. No external archives or remote fetches were declared in the manifest.
Credentials
The registry metadata declares no required environment variables, yet SKILL.md explicitly references ANTHROPIC_API_KEY, OPENAI_API_KEY, and OPENAI_BASE_URL, and the code loads a .env file automatically. Requesting LLM API keys while declaring none is inconsistent and increases the risk that secrets could be read from the workspace or .env without the user's explicit consent.
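One way to surface this mismatch mechanically is to diff the environment variables the code references against those the metadata declares. A minimal sketch, assuming you can read the bundle's source as text; the regex is heuristic and the `manifest_env` argument stands in for whatever declared-variables field the registry metadata actually exposes:

```python
# Heuristic: find env-var names referenced via os.environ / process.env
# and report any that the manifest never declared.
import re

ENV_REF = re.compile(
    r"(?:os\.environ(?:\.get)?\(?\[?|process\.env\.)['\"]?([A-Z][A-Z0-9_]+)"
)

def undeclared_env_vars(source_text, manifest_env):
    """Return env vars referenced in code but absent from the manifest."""
    referenced = set(ENV_REF.findall(source_text))
    return sorted(referenced - set(manifest_env))
```

Run over the whole bundle, a non-empty result is exactly the incoherence described above: code that wants keys the user was never told about.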
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. Autonomous invocation is permitted (the default). That otherwise-normal behavior, combined with the instruction to 'run at session start' and the ability to process an entire workspace, increases the impact if the skill is allowed to run without careful controls.
Scan Findings in Context
[system-prompt-override] unexpected: The SKILL.md contains patterns the scanner labeled as 'system-prompt-override' (prompt-injection style). A compression/memory tool should not need to embed or override system prompts; this suggests the instructions or included templates could attempt to influence agent/system prompts or produce content intended for system prompt insertion.
[unicode-control-chars] unexpected: Unicode control characters were detected in the SKILL.md. These are sometimes used to obfuscate text or hide injection strings and are not expected in normal tool documentation.
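Hidden control and format characters (zero-width spaces, bidi overrides) are invisible in most editors, so a quick programmatic check is worthwhile before trusting the rendered SKILL.md. A minimal stdlib-only sketch:

```python
# Flag Unicode control (Cc) and format (Cf) characters in a document.
# These categories cover zero-width spaces, bidi overrides, and similar
# characters sometimes used to hide injection strings in plain text.
import unicodedata

def suspicious_chars(text):
    """Return (index, codepoint, category) for unexpected control/format chars."""
    hits = []
    for i, ch in enumerate(text):
        cat = unicodedata.category(ch)
        if cat in ("Cc", "Cf") and ch not in "\n\r\t":
            hits.append((i, f"U+{ord(ch):04X}", cat))
    return hits
```

An empty result does not prove the file is clean, but any hit in ordinary documentation deserves a close look at the surrounding text.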
What to consider before installing
- Do not run the skill automatically at session start. Disable any auto-run hooks and run only in a sandbox/VM first.
- Audit the SKILL.md and code for network calls (look for http(s) requests, default base_url values) and for any places that read files like ~/.env, .env, or arbitrary filesystem paths. The benchmark code explicitly loads a .env file from the project root — remove or sanitize this if it could contain secrets.
- The SKILL.md and scripts reference LLM API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY) but the skill metadata does not declare them. Treat LLM keys as sensitive: do not provide high-privilege keys to this skill until you confirm exactly which calls it will make and to which endpoints.
- Search the repository for any code that sends data to external endpoints (HTTP POST/PUT), for hard-coded URLs/IPs, and for code that constructs 'injectable context' strings. If the skill prints or packages workspace content into a system-prompt-ready string, that can leak secrets or system prompts; ensure such output is sanitized before use.
- If you must test it, run it on a non-sensitive dummy workspace and monitor outbound network traffic. Prefer ephemeral credentials and network isolation.
- If the author/owner is unknown (metadata says 'unknown'), be more conservative: require provenance, an upstream repo link, or third-party review before trusting.
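The grep-style searches in the checklist above can be combined into a single triage pass over the installed skill directory. A rough sketch, stdlib only; the regexes are heuristic, will produce false positives, and the pattern labels are my own naming, so treat hits as pointers for manual review rather than findings:

```python
# Triage pass: flag hard-coded URLs, .env access, shell execution, and
# dynamic code evaluation across a skill bundle's source files.
import re
from pathlib import Path

PATTERNS = {
    "url": re.compile(r"https?://[^\s\"')]+"),
    "dotenv": re.compile(r"\.env\b|load_dotenv|dotenv"),
    "shell-exec": re.compile(r"child_process|subprocess|os\.system"),
    "dynamic-eval": re.compile(r"\beval\(|\bexec\(|new Function"),
}

def triage(root):
    """Return (path, line_no, label, snippet) for each heuristic hit."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".mjs", ".js", ".md", ".sh"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, rx in PATTERNS.items():
                if rx.search(line):
                    hits.append((str(path), lineno, label, line.strip()[:80]))
    return hits
```

Running this against the bundle should reproduce the scanner's file:line findings listed below, plus any patterns the scanner's rules missed.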
What would change this assessment:
- If the skill metadata explicitly declared required env vars and the registry enforced least-privilege (scoped, limited API keys), and the SKILL.md removed prompt-injection strings and unicode obfuscation, I would raise confidence toward benign.
- If the owner is a known, reputable maintainer and there is an upstream GitHub repo with reproducible release artifacts and signed releases, that would reduce risk.

Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.
- proxy/compression-middleware.mjs:140
  Shell command execution detected (child_process).
- proxy/server.mjs:628
  Shell command execution detected (child_process).
- scripts/lib/fusion/nexus_model.py:144
  Dynamic code execution detected.
- tests/test_photon.py:63
  Potential obfuscated payload detected.
- references/compression-techniques.md:197
  Prompt-injection style instruction pattern detected.
