Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Smart Compact
v1.0.0 · Smart context compaction for OpenClaw agents. 4-phase progressive strategy: Scan, Extract, Check, Compact. Before running /compact, this skill scans tool out...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name/description match the SKILL.md: the skill scans tool outputs, extracts important facts, generates a pre-compact checklist, and optionally triggers /compact. The required capabilities (reading conversation and tool outputs, writing daily memory files) align with the stated purpose. Minor mismatch: registry metadata lists 'source: unknown' while README contains explicit GitHub install URLs (wavmson/openclaw-skill-smart-compact).
Instruction Scope
Instructions tell the agent to review all tool call results (exec, read, web_fetch, web_search) and extract items including IPs, endpoints, file paths and authentication tokens. The SKILL.md and README claim sensitive items will be redacted, but elsewhere (information classification table) they list '认证令牌 / authentication tokens' under 'Must save' — a direct contradiction that could lead to persisting secrets. The skill also instructs appending to memory/YYYY-MM-DD.md (persistent storage) which widens the scope of data retained.
Install Mechanism
There is no install spec in the registry (instruction-only skill), which is lowest-risk from an automatic-install perspective. The README suggests optional manual install via ClawHub or by cloning/curling from GitHub/raw.githubusercontent.com. That is typical, but it does mean downloading files from an external repo, and the registry's 'source unknown' versus the README's GitHub link is an inconsistency worth verifying before running those commands.
Credentials
The skill declares no required environment variables or credentials (good), but its runtime behavior explicitly seeks out authentication tokens and other sensitive items in tool outputs. Because it both (a) states it will redact sensitive info and (b) elsewhere classifies authentication tokens as 'must save', it's unclear whether secrets will be redacted or persisted. Persisting credentials into daily memory files (and then possibly consolidated later by Memory‑Dream) is disproportionate without explicit safeguards and clear redaction rules.
Persistence & Privilege
The skill writes extracted data to memory/YYYY-MM-DD.md (append-only), which creates persistent artifacts. Append-only and user-confirmation-before-compact are good principles, but persistent storage of potentially sensitive items (due to the contradiction noted above) increases long-term exposure. The skill does not request always:true or other elevated platform privileges, and it does not claim to modify other skills, which limits system privilege concerns — but persistence of secrets is still a practical risk.
What to consider before installing or using this skill:
- Clarify the redaction policy with the author: SKILL.md/README both say 'sensitive info will be redacted' and also list authentication tokens under 'must save' — ask which is true and request explicit examples of how tokens are redacted.
- Inspect the source before installing: README points to a GitHub repo; verify that the repo and files match the published SKILL.md and that there is no hidden code that exfiltrates data.
- Use 'compact check' / read-only mode first: exercise the scan and checklist phases without performing writes, to observe what the skill identifies as important.
- Audit memory files: if you enable it, monitor memory/YYYY-MM-DD.md for accidental secrets and set tight filesystem permissions on the memory folder (restrict to the agent user only).
- Disable automatic downstream consolidation: if you also use Memory‑Dream or other consolidation skills, ensure they are configured not to pull in these daily logs until you’re confident no secrets are stored.
- Prefer testing in an isolated environment: run the skill with non-production data and simulated tool outputs to confirm behavior.
- If you must allow it in production, require an explicit policy that the skill never persists raw credentials and add detection (alerting) for credential-like patterns in memory files.
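The last point above, detection of credential-like patterns in memory files, can be sketched as a small audit script. The pattern list, the `memory/` layout, and the `scan_memory_dir` helper are assumptions for illustration, not part of the skill itself:

```python
import re
from pathlib import Path

# Credential-like patterns to look for in persisted memory files.
# These are illustrative assumptions; extend them for your environment.
CREDENTIAL_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),            # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key ID
    re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"), # HTTP bearer tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def scan_memory_dir(memory_dir: str) -> list[tuple[str, int, str]]:
    """Scan every *.md file in memory_dir and return a
    (filename, line number, matched pattern) tuple per credential-like hit."""
    hits = []
    for path in sorted(Path(memory_dir).glob("*.md")):
        text = path.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            for pattern in CREDENTIAL_PATTERNS:
                if pattern.search(line):
                    hits.append((path.name, lineno, pattern.pattern))
    return hits
```

A cron job that runs this over the memory folder and alerts on a non-empty result gives you the detection safeguard without trusting the skill's own redaction claims.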
Given the contradictory guidance about tokens, avoid enabling persistent writes until the author provides a clear, auditable redaction approach or you instrument and verify the memory outputs yourself.
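For reference, an auditable redaction approach might look like the following minimal sketch, applied before any text is appended to a memory file. The patterns, placeholders, and `redact` helper are illustrative assumptions, not the skill's actual policy:

```python
import re

# Map credential-like patterns to labeled placeholders so the fact that a
# token was seen survives in the memory file, but its value does not.
# Patterns here are illustrative assumptions; tune for your environment.
REDACTIONS = [
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED:github-token]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED:aws-access-key-id]"),
    (re.compile(r"(?i)(authorization:\s*bearer\s+)[a-z0-9._\-]+"), r"\1[REDACTED]"),
]

def redact(text: str) -> str:
    """Replace credential-like substrings with labeled placeholders."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Asking the author for something this concrete ("which patterns, replaced with what, at which phase?") is a quick way to resolve the 'redacted' vs 'must save' contradiction.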
