Context Optimizer

Pass. Audited by ClawScan on May 10, 2026.

Overview

The skill is consistent with its stated context-management purpose, but by default it persistently archives conversation content on disk and pulls in third-party npm and model dependencies that users should review before use.

Before installing, decide whether you want conversation history archived to disk. If you enable archiving, set a controlled archive path, delete the archive when it is no longer needed, disable chat and content logging for sensitive work, and review the npm and model dependencies during installation.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Conversation content archived on disk

What this means

Past conversation content may remain on disk and may later be retrieved into prompts, which can preserve sensitive details or influence future responses.

Why it was flagged

The skill intentionally stores compacted conversation content in a searchable local archive by default.

Skill content
enableArchive: true
archivePath: './context-archive'
archiveMaxSize: 100 * 1024 * 1024
Automatic Storage: Compacted content automatically stored in archive
Recommendation

Use a controlled archive path, disable archive storage for sensitive chats if not needed, periodically delete the archive, and review any retrieved archive snippets before relying on them.

Finding 2: Conversation snippets logged to console

What this means

Sensitive snippets could appear in local logs or any chat/log capture connected to the runtime.

Why it was flagged

Archive storage logs the beginning of archived content to console, which may include user conversation text.

Skill content
console.log(`[Archive] Content: "${content.substring(0, 80)}${content.length > 80 ? '...' : ''}"`);
Recommendation

Disable or reduce logging for sensitive conversations, remove content-snippet logging if integrating this code, and set chat logging to none when privacy is important.

Finding 3: Third-party npm and model dependencies

What this means

Installing or first running the skill may fetch external code/model assets, adding normal supply-chain and provenance considerations.

Why it was flagged

Full functionality depends on third-party npm packages and a downloaded embedding model. This is normal for semantic search, but it warrants review because the registry install spec does not declare these dependencies.

Skill content
Required Dependencies:
- `tiktoken` for accurate token counting
- `@xenova/transformers` for semantic embeddings and archive indexing
...
First run downloads embedding model (~80MB)
Recommendation

Install from a trusted source, review package versions, consider pinning dependencies with a lockfile, and verify the model download path before using it in sensitive environments.