AI Act Risk Check

Pass. Audited by ClawScan on May 1, 2026.

Overview

The skill is coherent for preliminary EU AI Act classification, but users should know it relies on an undeclared Gemini/LLM CLI and sends the provided system description to that model.

This appears safe for its stated purpose, but verify that the Gemini CLI dependency is intentional and do not provide confidential AI-system descriptions unless you are comfortable sending them to the configured LLM provider.

Findings (2)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Undeclared dependency on the Gemini CLI

What this means

The skill may fail outright, or it may silently use whatever Gemini CLI installation happens to be present on the machine; users should verify the binary and its account context before relying on the results.

Why it was flagged

The runtime depends on the Gemini CLI, while the provided requirements list no required binaries and SKILL.md states 'Dependencies: None'. This is a dependency declaration gap, although the LLM use is aligned with the skill's purpose.

Skill content
RESULT=$(gemini -p "$PROMPT")
Recommendation

Declare the Gemini CLI as a required dependency and document the expected setup so users can verify the binary and account context.
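A minimal preflight sketch of that recommendation, assuming the skill is a POSIX shell script and the binary name is `gemini` as shown in the audited line. Checking for the binary up front turns an implicit dependency on whatever Gemini CLI happens to be installed into an explicit, verifiable requirement:

```shell
# Hypothetical preflight check (not part of the audited skill).
require_binary() {
  # Succeed only if the named command resolves on PATH.
  command -v "$1" >/dev/null 2>&1
}

if require_binary gemini; then
  echo "gemini CLI found: $(command -v gemini)"
else
  echo "gemini CLI missing: declare and document it as a required dependency" >&2
fi
```

Pairing a check like this with a `Dependencies: gemini CLI` entry in SKILL.md would close the declaration gap the finding describes.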

Finding 2: User-provided descriptions are sent to an external LLM

What this means

If the AI-system description contains confidential business, legal, or product details, those details may be provided to the configured LLM service.

Why it was flagged

The skill discloses that user-provided descriptions are sent to an LLM; the script shows this is done through the Gemini CLI. This is purpose-aligned, but users should treat the submitted description as data shared with an external model/provider.

Skill content
Passes the user's description to an LLM for classification against the hard-coded Annex III criteria.
Recommendation

Avoid submitting confidential details unless the configured LLM provider and account terms are acceptable for that data.
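One way a skill author could surface this recommendation in the script itself is a confirmation gate before the description leaves the machine. This is a hypothetical sketch, not part of the audited skill; the `gemini -p` call in the usage comment is the one shown in the audit evidence:

```shell
# Hypothetical confirmation gate (sketch): make the data sharing explicit
# before the user's description is sent to the configured LLM provider.
confirm_send() {
  printf 'This description will be sent to the configured LLM provider. Continue? [y/N] '
  read -r answer
  case "$answer" in
    [Yy]*) return 0 ;;  # user accepted: proceed with the LLM call
    *) return 1 ;;      # anything else: treat as refusal
  esac
}

# Usage in the skill's script:
#   if confirm_send; then RESULT=$(gemini -p "$PROMPT"); fi
```

Defaulting to "No" keeps an accidental Enter from transmitting confidential business, legal, or product details.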