Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Acrid's Skill Creator

v2.0.0

Creates robust, production-grade agent skills from natural language requests, handling design, error management, and code scaffolding for immediate use.

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The skill's name and description match its instructions: it is a meta-skill that designs and scaffolds other skills. The SKILL.md, README, and templates all support that purpose. Required binaries/env/configs: none — this is proportional for a generator that only writes files and produces SKILL.md/README templates.
Instruction Scope
The runtime instructions explicitly direct the agent to parse requests, design contracts, and write SKILL.md/README and helper scripts. That scope is appropriate for a skill creator, but it inherently grants the agent discretion to include filesystem operations, external API calls, and environment-variable requirements in the generated skills. The SKILL.md does not itself instruct reading of unrelated system secrets, but generated skills may.
Install Mechanism
No install spec and no code files to execute are provided by the skill itself (instruction-only). This minimizes risk because nothing is downloaded or written by an installer at install-time.
Credentials
The skill declares no required environment variables or credentials, which is reasonable for a meta-skill. Note: generated skills are expected to request env vars or API keys if required by the target integration; those requests should be reviewed on a per-skill basis.
Persistence & Privilege
The skill is not force-included (always: false). It allows autonomous invocation (disable-model-invocation: false), which is the platform default. Because this is a meta-skill that can generate other skills, autonomous invocation increases potential blast radius (the agent could autonomously scaffold new skills), so it's prudent to review and restrict autonomous use if you want tighter control.
Assessment
This meta-skill is coherent with its stated purpose and does not request secrets or install code itself, but exercise caution:

  1. Review every generated SKILL.md/README and any helper scripts before running them — generated skills may request API keys, access files, or call external endpoints.
  2. If you allow autonomous invocation, consider limiting when/which prompts can trigger this skill so it cannot generate and run code without human review.
  3. Verify the upstream source (package.json references a GitHub repo, but registry metadata lists the source as unknown and no homepage) before trusting outputs.
  4. Run generated scripts in a sandbox or CI pipeline and audit any dependencies or network calls the generated skill introduces.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ekrsd3mxs4ha0hyvyhnhydn83s2zw
100 downloads
0 stars
1 version
Updated 3w ago
v2.0.0
MIT-0

SKILL: skill-creator

Description

The foundational meta-skill that architects and generates production-grade agent skills from natural language. This is the factory floor — every skill in the Acrid ecosystem is born here. It doesn't just scaffold files; it thinks through design, enforces quality gates, generates battle-tested logic, and outputs skills that work on first run.

Usage

Invoke this skill when:

  • You need to create a new capability, tool integration, or automation
  • You're converting a manual workflow into a repeatable skill
  • You want to prototype a skill idea rapidly with full documentation
  • You need to refactor or rebuild an existing skill from scratch

Trigger phrases: "Create a skill...", "Build me a skill...", "I need a skill that...", "Scaffold a new skill for..."

Inputs

| Parameter | Required | Format | Description |
| --- | --- | --- | --- |
| name | Yes | kebab-case | Skill identifier (e.g., stock-checker, deploy-monitor) |
| description | Yes | Natural language | What the skill does, in detail |
| requirements | No | Natural language | Tools, APIs, constraints, languages, auth needs |
| outputs | No | Natural language | What the skill should return (defaults to structured text) |
| complexity | No | simple \| standard \| advanced | Determines scaffold depth (default: standard) |
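
The validation rules implied by this table (kebab-case name, non-vague description, a fixed complexity enum) could be sketched as a small helper. This is an illustrative function, not part of the skill itself; the name `validate_inputs` and the 10-word vagueness threshold (taken from the Error Handling table below) are assumptions.

```python
import re

VALID_COMPLEXITY = {"simple", "standard", "advanced"}

def validate_inputs(name: str, description: str, complexity: str = "standard") -> dict:
    """Validate and normalize skill-creator inputs per the parameter table."""
    # Auto-convert the name to kebab-case and record whether a conversion happened.
    kebab = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    warnings = []
    if kebab != name:
        warnings.append(f"name auto-converted to kebab-case: {kebab}")
    # A description under 10 words is treated as too vague to proceed.
    if len(description.split()) < 10:
        raise ValueError("description too vague (<10 words); ask for clarification")
    if complexity not in VALID_COMPLEXITY:
        raise ValueError(f"complexity must be one of {sorted(VALID_COMPLEXITY)}")
    return {"name": kebab, "complexity": complexity, "warnings": warnings}
```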

Steps

Phase 1: Intelligence Gathering

  1. Parse the request — Extract:

    • Core purpose (single sentence, verb-first: "Fetches...", "Monitors...", "Generates...")
    • Required external APIs or services
    • Required tools (Bash, WebFetch, WebSearch, Read, Write, Grep, Glob, etc.)
    • Input parameters with types and validation rules
    • Expected output format (JSON, markdown, plain text, file)
    • Error scenarios (API down, bad input, rate limits, auth failure, empty results)
    • Edge cases specific to the domain
  2. Determine complexity tier:

    • Simple: Single tool, no external APIs, <20 lines of logic (e.g., file formatter)
    • Standard: 1-2 tools, may call external APIs, needs error handling (e.g., stock checker)
    • Advanced: Multiple tools, chained API calls, stateful logic, helper scripts required (e.g., deploy pipeline)
  3. Identify the execution model:

    • Direct: Skill logic runs entirely within SKILL.md steps (preferred for simple/standard)
    • Scripted: Complex logic lives in src/ scripts, SKILL.md orchestrates (required for advanced)
    • Hybrid: SKILL.md handles orchestration, delegates specific computations to scripts
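
The tier and execution-model decisions above are judgment calls, but the core heuristic can be sketched mechanically. This is a simplified illustration (function name and signature are assumptions, not part of the skill):

```python
def pick_tier(num_tools: int, calls_external_api: bool, needs_scripts: bool) -> str:
    """Illustrative complexity-tier heuristic matching the definitions above."""
    # Advanced: helper scripts required, or more than two tools in play.
    if needs_scripts or num_tools > 2:
        return "advanced"
    # Standard: external APIs or a second tool imply error handling is needed.
    if calls_external_api or num_tools == 2:
        return "standard"
    # Simple: a single tool, no external calls.
    return "simple"
```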

Phase 2: Architecture

  1. Design the skill contract:

    • Define exact input schema with types, defaults, and validation
    • Define exact output schema — what does success look like?
    • Define error responses — what does each failure mode return?
    • Map the dependency chain (what calls what, in what order)
  2. Scaffold the directory:

    For simple skills:

    skills/<name>/
      SKILL.md
      README.md
    

    For standard skills:

    skills/<name>/
      SKILL.md
      README.md
      src/           # Only if computation is complex
    

    For advanced skills:

    skills/<name>/
      SKILL.md
      README.md
      src/
        main.py|js   # Core logic
        utils.py|js  # Shared helpers (only if genuinely needed)
      config/
        defaults.json
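
The three layouts above could be scaffolded with a few lines of Python. This is a hedged sketch (the skill itself writes files via the Write tool, not a script); the Python variant of `src/main.py|js` is shown, and `scaffold` is a hypothetical name:

```python
from pathlib import Path

# File lists mirror the simple/standard/advanced layouts above; a trailing
# slash marks a directory to create empty.
LAYOUTS = {
    "simple": ["SKILL.md", "README.md"],
    "standard": ["SKILL.md", "README.md", "src/"],
    "advanced": ["SKILL.md", "README.md", "src/main.py", "src/utils.py",
                 "config/defaults.json"],
}

def scaffold(root: Path, name: str, tier: str) -> list:
    """Create the directory layout for the given complexity tier."""
    base = root / "skills" / name
    created = []
    for entry in LAYOUTS[tier]:
        path = base / entry
        if entry.endswith("/"):
            path.mkdir(parents=True, exist_ok=True)
        else:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch()
        created.append(path)
    return created
```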
    

Phase 3: Generation

  1. Generate SKILL.md — The skill definition must include ALL of these sections:

    # SKILL: <name>
    
    ## Description
    <Single paragraph. First sentence is the hook — what it does in <15 words.
    Second sentence adds context. Third sentence covers key differentiator.>
    
    ## Usage
    <When to invoke. Include 2-3 specific trigger phrases.>
    
    ## Inputs
    <Table format with: Parameter | Required | Type | Default | Description>
    <Include validation rules inline>
    
    ## Outputs
    <What the skill returns on success. Include format specification.>
    
    ## Steps
    <Numbered, imperative steps. Each step must be:
    - Actionable (starts with a verb)
    - Atomic (does one thing)
    - Error-aware (includes failure handling where relevant)
    - Tool-specific (names the exact tool to use when applicable)>
    
    ## Error Handling
    <Explicit failure modes and recovery actions:
    - What to do when an API is unreachable
    - What to do with malformed input
    - What to do when results are empty
    - Retry logic if applicable>
    

    SKILL.md generation rules:

    • Steps MUST be deterministic — no ambiguity in what the agent does
    • Every external call MUST have a failure path
    • Steps should reference specific tools by name (WebFetch, Bash, Grep, etc.)
    • Include concrete examples of expected input/output in the steps where helpful
    • Never use vague instructions like "process the data" — specify HOW
    • If a step involves parsing, specify the exact format and extraction method
    • Rate limiting: if the skill calls external APIs, include a note about respecting rate limits
  2. Generate README.md:

    # <Skill Name (Title Case)>
    
    <One-line description>
    
    ## Quick Start
    <Minimal trigger example>
    
    ## Parameters
    <Full parameter docs with examples>
    
    ## Example Usage
    <2-3 real-world invocation examples with expected outputs>
    
    ## Setup
    <Environment variables, API keys, dependencies — only if needed>
    
    ## How It Works
    <Brief technical explanation of the skill's approach>
    
    ## Limitations
    <Honest about what it can't do>
    
  3. Generate helper scripts (if complexity requires):

    Python scripts must:

    • Use argparse for CLI arguments
    • Output JSON to stdout (parseable by the agent)
    • Include an if __name__ == "__main__" guard
    • Handle exceptions with meaningful error messages in JSON format: {"error": "...", "code": "..."}
    • Use type hints
    • Include a docstring

    Node.js scripts must:

    • Parse args from process.argv or use a minimal arg parser
    • Output JSON to stdout
    • Handle errors with try/catch, output: {"error": "...", "code": "..."}
    • Use strict mode
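
A minimal Python template satisfying the script requirements above might look like the following. The `--ticker` argument and `run` function are placeholders for illustration; a generated skill would substitute its real logic and arguments:

```python
#!/usr/bin/env python3
"""Example generated helper script: validates a ticker and emits JSON."""
import argparse
import json
import sys


def run(ticker: str) -> dict:
    """Core logic stand-in; a real generated skill would call its API here."""
    if not ticker.isalpha():
        raise ValueError("ticker must be alphabetic")
    return {"ticker": ticker.upper()}


def main() -> None:
    parser = argparse.ArgumentParser(description="Example generated helper script")
    parser.add_argument("--ticker", required=True, help="Stock ticker symbol")
    args = parser.parse_args()
    try:
        # Success path: JSON on stdout, parseable by the agent.
        print(json.dumps(run(args.ticker)))
    except Exception as exc:
        # Failure path: structured error JSON, non-zero exit.
        print(json.dumps({"error": str(exc), "code": "RUNTIME_ERROR"}))
        sys.exit(1)


if __name__ == "__main__":
    main()
```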

Phase 4: Quality Gates

  1. Run the Acrid Quality Checklist — Every generated skill must pass ALL gates:

    | Gate | Check | Fail Action |
    | --- | --- | --- |
    | Atomic | Does it do exactly ONE thing? | Split into multiple skills |
    | Named | Is the name self-documenting? Does <name> tell you what it does? | Rename |
    | Inputs Valid | Are all inputs typed with clear validation rules? | Add missing validation |
    | Outputs Defined | Is the output format explicitly documented? | Add output spec |
    | Error-Proof | Does every external call have a failure path? | Add error handling |
    | Documented | Does README.md have Quick Start + Examples? | Flesh out docs |
    | Deterministic | Given the same input, does it always produce the same flow? | Remove ambiguity |
    | No Dead Code | Are all generated files actually used? | Remove unused files |
    | Dependency-Light | Does it minimize external dependencies? | Simplify |
    | First-Run Ready | Can someone use this skill with zero setup beyond what's documented? | Fix setup docs |
  2. Final review — Read through the complete generated skill one more time. Ask:

    • Would this work if I ran it right now?
    • Is there anything I'd need to guess or assume?
    • Are the steps clear enough that a different agent could execute them?
    • If any answer is "no", fix it before delivering.
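
Some of these gates can be checked mechanically before the manual review. As one illustrative example (not part of the skill), a partial "Documented"/structure check could verify that every required SKILL.md section from Phase 3 is present:

```python
# Required headings taken from the SKILL.md template in Phase 3.
REQUIRED_SECTIONS = ["## Description", "## Usage", "## Inputs",
                     "## Outputs", "## Steps", "## Error Handling"]

def missing_sections(skill_md: str) -> list:
    """Return required SKILL.md headings that are absent from the draft."""
    return [s for s in REQUIRED_SECTIONS if s not in skill_md]
```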

Phase 5: Delivery

  1. Write all files to the target directory using the Write tool.

  2. Report to user with:

    • Skill name and location
    • Quick summary of what was generated
    • Any setup steps required (API keys, env vars)
    • A ready-to-use invocation example

Error Handling

| Scenario | Action |
| --- | --- |
| Name is not kebab-case | Auto-convert and warn user |
| Description is vague (<10 words) | Ask for clarification before proceeding |
| Requested API has no free tier | Warn user, suggest alternatives, proceed if confirmed |
| Complexity mismatch (user says simple but needs advanced) | Override to correct tier, explain why |
| Generated skill fails quality gate | Fix automatically, do not deliver broken skills |

Anti-Patterns — Do NOT Generate Skills That:

  • Have steps like "analyze the data" without specifying HOW
  • Depend on tools not available to the agent
  • Require manual intervention mid-execution (unless explicitly designed as interactive)
  • Have undocumented environment variables or secrets
  • Contain placeholder logic ("TODO: implement this")
  • Over-engineer with abstractions for single-use operations
  • Include unnecessary comments or boilerplate

Examples

Input:

name: stock-checker
description: Fetches the current price of a stock by ticker symbol using a free API
requirements: Must use a free API, return price in USD

Output: See examples/stock-checker/ for the complete generated skill.
