Agent-Skills-for-Context-Engineering

v1.0.0

This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (context compression) align with SKILL.md and with the included evaluation framework and evaluator script. The code and docs implement probe-based compression evaluation and structured-summary guidance, which is coherent with the stated purpose.
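As a rough illustration of what probe-based compression evaluation means here, a minimal sketch (all names are hypothetical, not the skill's actual API): a probe pairs a question with a fact drawn from the original transcript, and a compressed context is scored by how many probed facts survive compression.

```python
# Hypothetical sketch of probe-based compression evaluation.
# A Probe records a fact that must survive compression; probe_recall
# reports the fraction of probes whose fact is still present verbatim.
from dataclasses import dataclass


@dataclass
class Probe:
    question: str
    expected: str  # fact that must survive compression


def probe_recall(compressed_context: str, probes: list[Probe]) -> float:
    """Fraction of probed facts still present in the compressed context."""
    if not probes:
        return 1.0
    hits = sum(p.expected in compressed_context for p in probes)
    return hits / len(probes)


probes = [
    Probe("Which file holds the config?", "src/config.yaml"),
    Probe("Which port was chosen?", "8080"),
]
summary = "Decisions: config lives in src/config.yaml; server port 8080."
print(probe_recall(summary, probes))  # 1.0
```

A production version would swap the verbatim substring check for an LLM judge that answers each probe question against the compressed context, which is exactly the call the packaged code leaves stubbed.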
Instruction Scope
SKILL.md stays focused on summarization/compression strategy and evaluation. It recommends tracking files and artifact indices but does not instruct the agent to read arbitrary system files or exfiltrate data. Note that the guidance expects agents to extract file paths and decisions from conversation history; operators who implement the production judge should ensure the agent's runtime does not gain unnecessary filesystem or network access.
Install Mechanism
No install spec is provided (instruction-only skill with code files bundled). That is low-risk: nothing is downloaded at install time and no archives or remote install URLs are present.
Credentials
The skill declares no required environment variables, no primary credential, and no config paths. The code mentions calling an LLM judge in production, but those calls are documented as 'stubbed'; the packaged skill itself does not request credentials.
Persistence & Privilege
The `always` flag is false, and there is no indication the skill modifies other skills or system-wide agent settings. The skill does not request permanent presence or elevated privileges.
Assessment
This package appears internally consistent and does not request credentials or perform network installs, but consider the following before enabling it in a production agent:
1) The evaluator code notes that LLM judge API calls are 'stubbed'. If those calls are implemented later, they will require API keys; provide such keys only with clearly limited access and audit logging.
2) The framework expects artifact/file tracking; do not grant the agent's runtime blanket filesystem or network access unless it is needed.
3) Regex-based extraction in the probe generator is brittle; expect false positives and negatives, and validate results in your environment.
4) If you adapt this for automated evaluation, review where compressed contexts and evaluation outputs are stored, and who can access them, to avoid exposing sensitive conversation content.
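To make the regex-brittleness concern concrete, a hypothetical sketch of path extraction from conversation history (the pattern is illustrative, not the skill's actual one). A naive path regex happily matches the path portion of a URL, producing exactly the kind of false positive the assessment warns about:

```python
# Illustrative (hypothetical) regex extraction of file paths from chat
# history, showing how a URL path slips through as a false positive.
import re

# Naive pattern: one or more "segment/" groups followed by "name.ext".
PATH_RE = re.compile(r'(?:[\w.-]+/)+[\w.-]+\.\w+')

history = (
    "Edited src/main.py and docs/notes.md. "
    "See https://example.com/path/to/page.html for details."
)
print(PATH_RE.findall(history))
# ['src/main.py', 'docs/notes.md', 'example.com/path/to/page.html']
```

The third hit is not a file at all, which is why extracted paths should be validated (e.g. checked against the workspace) before they drive file tracking or probe generation.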

