Back to skill
Skillv0.1.0
ClawScan security
Azure Ai Evaluation Py · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
BenignFeb 11, 2026, 8:37 AM
- Verdict
- benign
- Confidence
- high
- Model
- gpt-5-mini
- Summary
- The skill's files, examples, and environment requirements align with an Azure-AI evaluation SDK — the requested credentials and behaviors are consistent with its stated purpose.
- Guidance
- This skill appears coherent with its stated purpose — it needs Azure/OpenAI endpoint and either an API key or DefaultAzureCredential, and optionally a Foundry connection string if you want safety logging. Before installing/using: 1) Confirm you trust the pip package 'azure-ai-evaluation' (review its upstream source) before pip installing. 2) Only run evaluations on datasets you control or have vetted, since data will be sent to your configured Azure OpenAI deployment. 3) Review any custom/prompt-based evaluators you add — they can send arbitrary text to the model, so avoid embedding secrets in evaluated data or prompts. 4) The documentation contains an example string used to demonstrate detecting prompt-injection — this is benign in context but be cautious when reusing prompts that include 'ignore previous instructions' patterns.
- Findings
[ignore-previous-instructions] expected: The SKILL.md and reference examples intentionally include a sample '[hidden: ignore previous instructions]' string to illustrate detection of indirect prompt-injection attacks (IndirectAttackEvaluator). This appears to be an example for a safety evaluator rather than malicious prompt injection.
Review Dimensions
- Purpose & Capability
- okThe name/description match the included docs and CLI script: evaluating generative AI with built-in and custom evaluators. The environment variables mentioned (AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT, AIPROJECT_CONNECTION_STRING) and imports (azure.ai.evaluation, azure.identity, azure.ai.projects) are appropriate and expected for the described functionality. No unrelated credentials, binaries, or config paths are requested.
- Instruction Scope
- okSKILL.md and scripts limit actions to building evaluator instances, calling evaluate(), and reading user-supplied data files (JSONL). Examples include prompt-based evaluators that send prompts to Azure OpenAI models (expected). There are no instructions to read arbitrary system files or post data to unexpected third-party endpoints. Note: examples include a sample that contains the phrase 'ignore previous instructions' as part of demonstrating an IndirectAttackEvaluator — this is a documentation/example of prompt-injection detection, not an instruction to ignore agent constraints.
- Install Mechanism
- noteThe skill is instruction-only and includes no platform install spec. SKILL.md recommends pip installing the 'azure-ai-evaluation' package and optional extras for remote evaluation; this is normal but means installation happens outside the platform. Verify the pip package provenance before installing to your environment.
- Credentials
- okRequested environment variables are limited to Azure/OpenAI and Foundry (AIPROJECT_CONNECTION_STRING). Those are proportional to evaluating models and logging to a Foundry project. No unrelated secrets or broad system credentials are requested.
- Persistence & Privilege
- okThe skill does not request persistent presence (always:false) and contains no code that modifies other skills or global agent configuration. It does not require elevated privileges beyond normal network calls to Azure services.
