AI Safety Audit

v1.0.0

Performs a comprehensive AI safety audit mapping systems to EU AI Act risk tiers, assessing 30 controls across six domains, and generating a 90-day remediation roadmap.

MIT-0
Security Scan
VirusTotal
Benign
OpenClaw
Benign (medium confidence)
Purpose & Capability
Name, description, and SKILL.md all describe an AI safety audit, and the included controls, scoring, and outputs align with that purpose. There are no unrelated required binaries, environment variables, or install steps that contradict the stated function.
Instruction Scope
The SKILL.md is high-level and prescriptive about what to produce (inventory, classification, scorecard, roadmap) but does not define concrete data sources or safe boundaries. This gives the agent broad discretion to ask for or attempt to collect inventory and evidence; that is reasonable for an audit but creates a scope/privilege risk if the agent is allowed to autonomously access systems or credentials without constraints.
Install Mechanism
No install spec and no code files — instruction-only skill. This minimizes on-disk execution risk because nothing will be downloaded or installed by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportionate to an instruction-only audit template. Note: at runtime the agent may request credentials or access from the user to gather evidence; those requests are not part of the package and should be evaluated before granting.
Persistence & Privilege
`always: false` and no installable components — the skill does not request permanent presence or system-level changes. The agent may still be allowed to invoke the skill autonomously (platform default); that alone is not flagged, but users should be mindful of the agent's allowed actions while the skill is active.
Assessment
This instruction-only skill appears coherent for performing an AI safety audit, but its runtime instructions are intentionally high-level and will require the agent to gather evidence (model inventories, documentation, logs, etc.). Before using it:

1. Decide which data sources you permit the agent to access, and avoid handing it long-lived credentials; prefer scoped, read-only accounts or temporary credentials.
2. Be cautious if you allow autonomous invocation — the agent could repeatedly attempt to collect data.
3. Confirm whether you want the agent to contact any external links or services (the SKILL.md contains promotional links to paid packs).
4. Test the skill in a controlled environment (non-production data) first, and review any requested actions or outputs.

If you want a stronger assessment, ask the skill author to provide explicit runtime steps (which data sources are read, which evidence formats are expected) or to include code that enforces safe, read-only collection methods.
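The first recommendation above — deciding which data sources the agent may touch — can be sketched as a simple allowlist guard that an operator might wrap around the agent's file-access requests. This is a hypothetical sketch; the source names and the `is_permitted` function are illustrative and not part of the skill package:

```python
# Hypothetical guard: permit the agent to read only pre-approved,
# read-only evidence sources during the audit. All names below are
# illustrative assumptions, not paths defined by the skill.
ALLOWED_SOURCES = {
    "model_inventory.csv",   # static export, no credentials needed
    "docs/model_cards/",     # documentation the auditor already holds
    "logs/eval_results/",    # read-only evaluation logs
}

def is_permitted(requested_path: str) -> bool:
    """Return True only if the requested path matches an approved source
    exactly or falls under an approved directory prefix."""
    return any(
        requested_path == src or requested_path.startswith(src)
        for src in ALLOWED_SOURCES
    )
```

A deny-by-default check like this keeps evidence collection within the boundaries you chose up front, even if the agent asks for more.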

Like a lobster shell, security has layers — review code before you run it.

Tags: EU AI Act · NIST · alignment · audit · compliance · latest · safety

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
