Agent-Skills-for-Context-Engineering

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: context-engineering
Version: 1.0.0

This skill bundle is designed for context compression and evaluation for AI agents. The `SKILL.md` and `references/evaluation-framework.md` files provide detailed documentation and rubrics for context compression strategies and evaluation, and contain no prompt injection attempts against the OpenClaw agent. The `scripts/compression_evaluator.py` script implements the described logic, primarily using string manipulation and regular expressions to extract information from conversation history. It explicitly stubs out external LLM API calls: the provided code performs no network requests and does not access sensitive file system locations. The bundle aligns with its stated purpose and exhibits no malicious or high-risk behaviors.
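The stubbed design described above might look like the following minimal sketch. The function names `extract_file_paths` and `call_llm`, and the regex pattern, are illustrative assumptions, not code taken from the actual script:

```python
import re

def extract_file_paths(history: str) -> list[str]:
    """Pull file-path-like tokens from conversation history with a regex.

    Illustrative pattern only; the real script's extraction rules may differ.
    """
    return re.findall(r"[\w./-]+\.(?:py|md|json|yaml)\b", history)

def call_llm(prompt: str) -> str:
    """Stubbed LLM call: returns a placeholder and makes no network request."""
    return "[stubbed LLM response]"
```

Stubbing the LLM boundary this way is what lets a static review conclude the code performs no network I/O regardless of how the extraction logic evolves.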

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Sensitive project details, file paths, decisions, or unfinished tasks may be preserved in summaries and reused later.

Why it was flagged

The skill intentionally preserves conversation state and artifact history across compression cycles. This is central to the skill's purpose, but persistent summaries can retain sensitive details or carry forward stale or incorrect context.

Skill content
Maintain structured, persistent summaries with explicit sections for session intent, file modifications, decisions, and next steps.
Recommendation

Keep summaries scoped to the current task, avoid including secrets, and verify important file paths, decisions, and next steps before acting on compressed context.
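A structured summary with the explicit sections named above (session intent, file modifications, decisions, next steps) could be rendered with a template like this sketch. The template text and `render_summary` helper are hypothetical, not part of the skill:

```python
# Hypothetical template mirroring the sections the skill says it maintains.
SUMMARY_TEMPLATE = """\
## Session intent
{intent}

## File modifications
{files}

## Decisions
{decisions}

## Next steps
{next_steps}
"""

def render_summary(intent: str, files: list[str],
                   decisions: list[str], next_steps: list[str]) -> str:
    """Fill the template, formatting each list as bullet lines."""
    return SUMMARY_TEMPLATE.format(
        intent=intent,
        files="\n".join(f"- {f}" for f in files),
        decisions="\n".join(f"- {d}" for d in decisions),
        next_steps="\n".join(f"- {s}" for s in next_steps),
    )
```

Keeping the sections explicit makes it easier to audit what a persisted summary carries forward, and to strip or verify a single section (such as file paths) before reuse.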

What this means

If the optional judge workflow is implemented with an external model, private conversation or codebase context could be shared outside the local session.

Why it was flagged

The evaluation workflow describes sending compacted context and model responses to an LLM judge. This is purpose-aligned for evaluation, but compacted context may contain private conversation or project details if the workflow is connected to an external provider.

Skill content
Feed probe question + model response + compressed context to judge
Recommendation

Use trusted model providers, review what compacted context is sent, and redact secrets or sensitive customer/project data before judge evaluation.
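A redaction pass before judge evaluation could be sketched as follows. The patterns and the `build_judge_prompt` assembly are assumptions for illustration; real deployments would need patterns tuned to their own secret formats:

```python
import re

# Hypothetical patterns for common secret shapes; extend for your environment.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                # API-key-like tokens
    re.compile(r"(?i)(password|token)\s*[:=]\s*\S+"),  # key = value leaks
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

def build_judge_prompt(probe: str, response: str, compressed: str) -> str:
    """Assemble judge input, redacting the compacted context before it leaves."""
    return (
        f"Probe question:\n{probe}\n\n"
        f"Model response:\n{response}\n\n"
        f"Compressed context:\n{redact(compressed)}\n"
    )
```

Running redaction at the point of prompt assembly ensures nothing bypasses it when the workflow is later connected to an external provider.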