Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

claw-compactor

v6.0.0

Claw Compactor v6.0 — 50%+ savings through rule-based compression, dictionary encoding, session observation compression, and progressive context loading.

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (token/workspace compression, observation extraction) align with the included code: compression, dictionary encoding, RLE, observation extraction, tiered summaries, and tokenizer optimizers are all present. The file accesses (workspace markdown files and a memory/ folder) are expected for this purpose.
Instruction Scope
Runtime instructions and code read session transcripts from ~/.openclaw/sessions and write compressed observations to the workspace's memory/.observed-sessions.json and memory/observations/. The compressed_context module explicitly produces 'decompression instructions' intended to be prepended to a model/system prompt; recommending edits to system prompts is powerful and can be abused for prompt injection. SKILL.md and the source also contain LLM prompt templates (COMPRESS_PROMPT) and text that could be used to call LLMs if a user enables that path, so confirm that you do not inadvertently send sensitive session content to an external LLM. Overall, the file operations are coherent with the stated purpose, but the system-prompt modification guidance and the presence of LLM prompts raise safety concerns.
Install Mechanism
No remote download/install spec is present — the skill is provided as code files (pyproject, scripts). Nothing in the manifest points to fetching arbitrary remote binaries or archives. That reduces supply-chain risk compared to an external download.
Credentials
The skill requests no environment variables, external credentials, or special config paths in the registry metadata. Its runtime behavior reads local workspace files and the user's session directory (~/.openclaw/sessions), which is proportionate for a session-transcript compressor but still amounts to sensitive data access; the absence of credential requirements is appropriate.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It writes artifacts inside the workspace (memory/.observed-sessions.json, memory/observations/, memory/.codebook.json), which is expected. However, SKILL.md and the compressed_context code recommend prepending decompression instructions to system prompts, which would change model behavior if applied; do not modify system prompts automatically without review. Autonomous invocation (the default) is normal, but weigh it against the instruction-scope concerns above.
Scan Findings in Context
[system-prompt-override] expected: The compressed_context module explicitly returns 'decompression instructions' intended to be prepended to a system prompt so a model can expand compressed notation. This is functionally related to the stated purpose (making compressed context usable by LLMs) but is also the exact pattern that can be used for prompt-injection — treat as potentially dangerous and review carefully before applying to a live agent's system prompt.
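One way to keep this pattern reviewable is to refuse to prepend any instruction text whose exact content a human has not signed off on. The sketch below is hypothetical (the skill does not ship these helpers; the function names and the hash-based approval scheme are assumptions for illustration):

```python
import hashlib

# Hashes of instruction texts a human has explicitly reviewed and approved.
APPROVED_HASHES = set()

def approve(instructions: str) -> None:
    """Record that a human has reviewed this exact instruction text."""
    APPROVED_HASHES.add(hashlib.sha256(instructions.encode()).hexdigest())

def apply_decompression_instructions(system_prompt: str, instructions: str) -> str:
    """Prepend decompression instructions only if this exact text was approved."""
    digest = hashlib.sha256(instructions.encode()).hexdigest()
    if digest not in APPROVED_HASHES:
        raise PermissionError("Unreviewed decompression instructions; inspect before use.")
    return instructions + "\n\n" + system_prompt
```

Because the approval is tied to a hash of the exact text, any change to the generated instructions (including invisible characters) forces a fresh review.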
[unicode-control-chars] unexpected: Pre-scan flagged unicode-control characters in SKILL.md. The project uses emojis and non-ASCII characters legitimately (CJK handling, emojis in README), but control/unicode-steering characters can be used for hidden instruction injection. Inspect any README/SKILL.md and compress/decompress code for zero-width or control characters before using in an automated/system-prompt context.
What to consider before installing
- Coherence: The code implements what the skill claims (workspace compression, session-transcript observation extraction). That part is internally consistent.
- Sensitive file access: The skill reads session transcripts from ~/.openclaw/sessions and writes memory/.observed-sessions.json and memory/observations/ inside the target workspace. Session files can contain private data; back up and inspect them first.
- Prompt-injection risk: The compressed_context module and SKILL.md suggest prepending decompression instructions to model/system prompts. That is functionally necessary to make compressed notation readable by an LLM, but it is also a high-risk operation because it alters model steering. Do NOT automatically modify your agent's system prompt or global model configuration without manual review. If you must use decompression instructions, keep them minimal and review their exact text.
- LLM use paths: The code contains LLM prompt templates and both rule-based and LLM-driven extraction paths. By default many commands use rule-based functions, but some utilities can generate prompts for LLMs. Before enabling any LLM-based compression, verify there are no unintended network endpoints and confirm which prompts/data would be sent out.
- Run safely first: Use the --dry-run/benchmark modes and run on a copy of your workspace. Inspect the generated artifacts (memory/.observed-sessions.json, memory/.codebook.json, memory/observations/) and the code paths that would call an LLM (search for code that invokes network libraries or external APIs in the omitted files).
- Check for hidden characters: Scan README/SKILL.md and compressed text for zero-width or control characters that could hide instructions (tools like `cat -v`, hexdump, or editors that show invisibles can help).
- Review compressed_context behavior: Aggressive 'ultra' compression removes many function words and applies abbreviations; make sure the resulting compressed text plus decompression instructions preserves all the facts you care about, and that you trust the decompression guidance.

If you are not comfortable auditing the code paths that might call an external LLM or that will modify model prompts, run the tool in an isolated environment or container and use only the purely rule-based (no-LLM) modes.

Like a lobster shell, security has layers — review code before you run it.


License

MIT-0
Free to use, modify, and redistribute. No attribution required.
