AI Compliance Readiness Assessment

AI Compliance Readiness Assessment — evaluate how prepared an organization is for AI governance regulations (EU AI Act, NIST AI RMF, HHS mandates, state bar AI rules).

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 · 411 · 3 current installs · 3 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description (AI compliance readiness) match the SKILL.md: the skill asks for company/industry, AI systems in use, and jurisdictions, and outputs a scored report and action plan. No unrelated dependencies, credentials, or binaries are requested.
Instruction Scope
The runtime instructions ask the agent to collect user-supplied organizational inputs and produce a structured report. They do not instruct reading system files, environment variables, contacting third-party endpoints, or exfiltrating data beyond the user-provided inputs.
Install Mechanism
No install spec and no code files — instruction-only. Nothing will be written to disk or installed by the skill itself.
Credentials
The skill requests no environment variables, credentials, or configuration paths. Inputs are user-supplied organizational information, which is appropriate for the stated function.
Persistence & Privilege
`always` is false, and there are no install steps that alter agent or system configuration. The skill can be invoked by the model (the normal default), but it does not request persistent privileges or modify other skills' settings.
Assessment
This skill is an instruction-only assessment checklist and appears coherent for producing an AI compliance readiness report. Before using it:

  1. Verify the regulatory dates and jurisdictional applicability it lists: the SKILL.md includes specific deadlines that you should confirm against authoritative sources.
  2. Avoid pasting secrets or unnecessary sensitive data; only provide the organizational inputs requested (industry, systems, jurisdictions, governance posture).
  3. Note that the skill's source/homepage is unknown; treat outputs as advisory and validate recommendations with your legal/compliance team.
  4. Test the skill with sanitized or non-sensitive example data before running it on real internal information.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest: vk970m28wmtkeam059ccmx33a7581nt5f

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

AI Compliance Readiness Assessment

Evaluate organizational readiness for AI governance regulations and generate an actionable compliance roadmap.

When to Use

  • Assessing AI compliance posture before an audit
  • Preparing for EU AI Act (Aug 2026), HHS AI mandates, NIST AI RMF
  • Building a governance roadmap for AI deployments
  • Evaluating risk exposure from current AI usage

How to Use

When asked to assess AI compliance readiness, gather these inputs:

Required Inputs

  1. Industry (legal, healthcare, financial-services, insurance, construction, manufacturing, government, other)
  2. Company size (employees or revenue range)
  3. AI systems in use (list: chatbots, document review, fraud detection, hiring tools, customer service, analytics, other)
  4. Jurisdictions (US-only, EU-exposed, both, global)

Optional Inputs

  • Current governance framework (if any)
  • Upcoming audit dates
  • Existing compliance certifications (SOC2, ISO 27001, HIPAA, etc.)
  • Number of AI vendors/tools in use

Assessment Framework

Score each dimension 1-5 (1=no controls, 5=mature):

8 Dimensions

  1. Risk Classification — Have you categorized AI systems by risk level per EU AI Act / NIST?
  2. Documentation — Technical docs, model cards, data lineage for each AI system?
  3. Human Oversight — Defined human-in-the-loop processes for high-risk decisions?
  4. Bias & Fairness — Regular bias audits, fairness metrics, disparate impact testing?
  5. Data Governance — Training data provenance, consent, retention, and deletion policies?
  6. Incident Response — AI-specific incident playbook, reporting procedures, rollback plans?
  7. Vendor Management — AI vendor risk assessments, contractual AI governance requirements?
  8. Audit Trail — Logging, explainability, decision traceability for AI-assisted outputs?

Scoring

  • 35-40: Compliance-ready — minor gaps to address
  • 25-34: Partially prepared — significant work needed in specific areas
  • 15-24: High risk — major gaps across multiple dimensions
  • 8-14: Critical — immediate action required before any regulatory review
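The scoring above can be sketched in a few lines: sum the eight 1-5 dimension scores and map the total into the bands listed. The band boundaries come directly from the list; the function name is illustrative:

```python
def readiness_band(scores: dict[str, int]) -> tuple[int, str]:
    """Sum eight 1-5 dimension scores and map the total to a readiness band."""
    if len(scores) != 8 or not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("expected eight dimension scores, each 1-5")
    total = sum(scores.values())
    if total >= 35:
        band = "Compliance-ready"    # 35-40: minor gaps to address
    elif total >= 25:
        band = "Partially prepared"  # 25-34: significant work needed
    elif total >= 15:
        band = "High risk"           # 15-24: major gaps across dimensions
    else:
        band = "Critical"            # 8-14: immediate action required
    return total, band
```

For example, scoring 3 on all eight dimensions totals 24, which lands in the "High risk" band.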

Output Format

Generate a report with:

  1. Executive Summary — Overall score, risk level, top 3 gaps
  2. Dimension Scores — Table with score, evidence, and gap description per dimension
  3. Regulatory Exposure — Which regulations apply and key deadlines:
    • EU AI Act: Aug 2, 2026 (high-risk system requirements)
    • HHS AI Transparency: April 3, 2026 (healthcare)
    • NIST AI RMF: Ongoing (federal contractors + best practice)
    • State bar AI rules: Varies (legal industry)
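A minimal sketch of the executive-summary step: the "top 3 gaps" can be derived by taking the lowest-scoring dimensions. This is one reasonable reading of the output format, not a prescribed implementation; the function name and sample scores are illustrative:

```python
def top_gaps(scores: dict[str, int], n: int = 3) -> list[str]:
    """Return the n lowest-scoring dimensions as the report's top gaps."""
    return [name for name, _ in sorted(scores.items(), key=lambda kv: kv[1])[:n]]

# Hypothetical dimension scores for one assessment
scores = {
    "Risk Classification": 2, "Documentation": 1, "Human Oversight": 4,
    "Bias & Fairness": 2, "Data Governance": 3, "Incident Response": 1,
    "Vendor Management": 3, "Audit Trail": 5,
}
print(top_gaps(scores))  # lowest-scoring dimensions first
```

Ties are broken by the order the dimensions appear, since Python's sort is stable; a real report would likely also attach the gap description and evidence from the dimension-scores table.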

Files

1 total
