Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AutoSpec

v0.0.1

Spec-driven development assistant: helps write precise behavioral specs before coding, or reverse-engineer specs from existing code for understanding and ali...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yuanyan/auto-spec.

Prompt Preview: Install & Setup
Install the skill "AutoSpec" (yuanyan/auto-spec) from ClawHub.
Skill page: https://clawhub.ai/yuanyan/auto-spec
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auto-spec

ClawHub CLI


npx clawhub@latest install auto-spec
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description (spec-driven development / reverse-engineer specs) aligns with the included instructions and example outputs. The skill is instruction-only, requests no env vars/binaries, and the included example Go files are consistent as evaluation artifacts and examples rather than required runtime components.
Instruction Scope
SKILL.md directs the agent to read user-provided code and produce specs in-conversation, persisting to disk only when asked. That is appropriate for this skill, but the repository pre-scan flagged a 'system-prompt-override' pattern in SKILL.md. The visible content contains no explicit malicious override; however, the detection indicates text that could be read as attempting to modify agent or system prompts or instructions. Manually review SKILL.md for any hard directives that instruct the agent to ignore other system constraints or to accept new system prompts.
Install Mechanism
No install spec is present (instruction-only). This is the lowest-risk install model — nothing is downloaded or written automatically by an installer.
Credentials
The skill declares no required environment variables, credentials, or config paths. Although the spec text references external services (LLM endpoints, config center, storage), it does not request new secrets or unrelated credentials — this is proportionate for a documentation/spec-generation assistant.
Persistence & Privilege
Flags: always=false and user-invocable=true. The SKILL.md explicitly states specs are presented in the conversation by default and only persisted when the user asks. There is no indication the skill will enable itself, persist credentials, or modify other skills.
Scan Findings in Context
[system-prompt-override] unexpected: The pre-scan flagged a 'system-prompt-override' pattern in SKILL.md. The visible content appears to set operational guidance (e.g., 'present specs in the conversation by default'), not a technical override of the platform system prompt, but this class of pattern can indicate attempts to change the assistant's instruction hierarchy. Recommend manual inspection for any phrasing that tells the agent to ignore higher-priority system prompts, accept external system prompts, or permanently alter its instruction set.
What to consider before installing
AutoSpec appears to do what it says: it generates specs from user intent or existing code and does not request credentials or install software. However, a regex scan flagged a potential system-prompt-override pattern in SKILL.md. Before installing or enabling this skill in production:

  • Manually inspect SKILL.md for any lines that tell the assistant to ignore platform/system instructions, accept new system prompts, or change its own behavior permanently. Those are risky and unnecessary for a spec tool.
  • Confirm the skill only reads project files you intend to expose; if you run the agent on a repository with secrets, ensure sensitive files are not accessible to the agent, or run the skill in a restricted environment.
  • Because the skill may analyze code and suggest implementations, run it first in a staging or sandbox workspace and review any generated code before merging.
  • Monitor network activity and logs during initial use to ensure it does not contact unexpected external endpoints.

If you find explicit instructions in SKILL.md that attempt to override system-level prompts or to exfiltrate data, do not install. If unsure, share the SKILL.md passage(s) with a security reviewer.
Flagged: evals/iteration-1/reverse-spec-module-level/with_skill/outputs/spec_output.md:94
Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

22 downloads
1 star
1 version
Updated 6h ago
v0.0.1
MIT-0

Auto-Spec: Spec-Driven Development Assistant

Core Belief

Code is the single source of truth.

Specs are inputs — they help humans align intent and assist AI coding — not deliverables. Specs don't need to stay in sync with code permanently; regenerate from code when needed.

Three hard constraints follow:

  1. No bidirectional binding, no CI drift detection — that's life support for dead docs, expensive and fragile.
  2. Specs are disposable — once landed, mission complete; code and tests take over as truth.
  3. To see current state, reverse-engineer from code — don't maintain a static doc that can't keep up.

Therefore, specs generated by this skill are presented in the conversation by default, not written to files. Only persist to disk when the user explicitly asks, and treat it as a temporary snapshot, not a long-lived document.


Two Modes

Mode A: Forward Spec (Spec before code)

The user has an idea or requirement but hasn't written code yet. Turn vague intent into precise behavioral contracts, then (if requested) implement according to the spec.

Typical triggers:

  • "I want to add feature X, help me think it through first"
  • "Write me a spec"
  • "Align on intent before coding"

Mode B: Reverse Spec (Spec from code)

Code already exists. The user wants to understand what it does, or wants to establish a baseline before making changes. Extract behavioral contracts from the code.

Typical triggers:

  • "What does this code actually do, walk me through it"
  • "Reverse-engineer a spec from the code"
  • "Help me understand this module's behavior"

Spec Format

A spec is not prose — it's a structured behavioral contract. Granularity is chosen automatically based on scope:

Granularity Rules

| User's scope | Granularity | Example |
| --- | --- | --- |
| A single function/method | Function-level | Behavior of `CalculateQuota` |
| A single file/class | Module-level | Responsibilities and interface of `quota_service.go` |
| A feature/capability | Feature-level | Complete behavior of "quota management" |
| A system/service | System-level | Overall behavior of "order system" |

Don't ask the user to specify granularity — infer from context. When uncertain, go one level higher than what the user described, then drill down into key parts within the spec.

Spec Template

Each spec contains the following sections (trim as needed — don't pad with empty content just to fill the template):

# [Name] Spec

## Overview
One or two sentences explaining what this is and why it exists.

## Glossary
Only needed when domain-specific terms or easily confused concepts are present.
- **TermA**: definition

## Behavioral Contracts

### Scenario 1: [Scenario Name]
- **Precondition**: what state the system is in
- **Input**: what triggers this behavior
- **Expected behavior**: what the system should do
- **Postcondition**: what state the system should be in afterward
- **Error cases**: what happens if something goes wrong

### Scenario 2: ...

## Constraints & Boundaries
- Performance constraints, security constraints, business rules, etc.
- Explicitly stating "what we don't do" is equally important

## Dependencies
- What external systems/modules/interfaces this depends on
- Who depends on this

## Open Questions
Items that need further confirmation (if any).

Writing Principles

  1. Precision over completeness: one clear contract beats ten vague descriptions.
  2. Show examples: abstract rule + concrete example = best understanding. Give input→output examples for key behaviors.
  3. Mark uncertainty: use [TBD] for things you're not sure about — don't fabricate.
  4. Behavior, not implementation: describe what, not how. Implementation details belong in code.
  5. Error paths matter as much as happy paths: many bugs hide in scenarios that weren't thought through.

Workflow

Forward Spec Workflow

  1. Understand intent

    • Read the user's requirement description
    • If the requirement is vague, ask targeted questions (no more than 3 at a time)
    • Focus on: edge cases, error scenarios, interactions with existing systems
  2. Research context

    • Read relevant code to understand the existing system's structure and conventions
    • Find the "integration point" for the new feature — which existing modules will it interact with?
    • Note existing patterns and conventions; the spec should be consistent with them
  3. Generate spec

    • Generate the spec using the template, present it directly in the conversation
    • Choose granularity automatically
    • Give concrete examples for key behaviors
  4. Iterate and confirm

    • Ask the user to review the spec
    • Revise based on feedback until the user is satisfied
    • Mark all [TBD] items
  5. (Optional) Implement according to spec

    • After the user confirms, implement if requested
    • Reference scenario numbers from the spec in code comments to maintain traceability

Reverse Spec Workflow

  1. Locate the code

    • Confirm which code the user wants to reverse-engineer
    • Read the relevant source files
  2. Extract behavior

    • Extract actual behavior from code, not guessed intent
    • Distinguish "what the code actually does" from "what the code might have intended to do"
    • Flag suspicious behavior (things that look like bugs or legacy logic)
  3. Generate spec

    • Generate the spec using the template, present it directly in the conversation
    • Use [NOTE] to flag anything that doesn't match expectations
  4. Deliver

    • Present the spec, call out points worth attention
    • If the user plans to modify the code afterward, this spec serves as the pre-change baseline

Important Notes

  • Don't over-engineer specs: a spec is a thinking tool, not a bureaucratic process. If a function's behavior can be explained in one sentence, use one sentence.
  • Don't proactively save files: unless the user explicitly asks, specs are presented in the conversation only.
  • Don't repeat what the code already expresses clearly: if the code itself is the best documentation (e.g., clear type signatures, well-named functions), the spec should focus on what code can't express — intent, constraints, edge cases.
  • Stay honest: say "I'm not sure" when you're not sure. Don't fabricate plausible-sounding but unverified contracts.
