Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Skill Auditor & Enhancer

v1.0.0-alpha

Periodically audit all workspace skills, learnings, memory, and configuration files to recommend refactoring, new skill ideas, and workflow improvements. Tri...


Install

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw to install omaression/skill-enhancer.

Prompt preview: Install & Setup
Install the skill "Skill Auditor & Enhancer" (omaression/skill-enhancer) from ClawHub.
Skill page: https://clawhub.ai/omaression/skill-enhancer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install skill-enhancer

ClawHub CLI

npx clawhub@latest install skill-enhancer
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability (flagged)
The skill claims to perform a weekly audit of workspace skills, memory, and config, which matches the included scripts (build audit state, merge evaluations, format Telegram). However, the SKILL.md also promises automatic delivery to Telegram and cron scheduling, while the package declares no required env vars, credentials, or delivery tooling; sending messages externally normally requires a bot token/chat ID or a configured platform integration, neither of which is declared here.
Instruction Scope (flagged)
Runtime instructions explicitly read a broad workspace surface (skills/*/SKILL.md, .learnings, SOUL.md, AGENTS.md, USER.md, memory/*.md, etc.), compute hashes, run multi-model evaluation steps, and then 'send recommendations directly to Telegram without user prompting.' Reading those files is coherent for an auditor, but the instruction to send to Telegram automatically (and the cron command that runs the full pipeline autonomously) lets the skill transmit potentially sensitive workspace data off-agent without any declared transport auth or per-run confirmation.
Install Mechanism
No install spec is present (instruction-only with helper scripts). This is low-risk from an installation/download perspective: nothing is fetched from external URLs or written to unusual system locations by an installer.
Credentials (flagged)
The skill declares no required environment variables or credentials, yet its behavior requires a Telegram delivery channel (bot token / chat ID) and will likely rely on the agent's ability to call external models and the network. The absence of any declared TELEGRAM_* env vars or delivery configuration is a mismatch and hides an implicit need for sensitive credentials. The agent will also read potentially sensitive workspace files (memory, USER.md, etc.); this is expected for auditing, but combined with automatic external delivery it increases exfiltration risk.
Persistence & Privilege
always:false (good) and disable-model-invocation:false (normal). However the SKILL.md recommends adding a scheduled cron job via the agent (openclaw cron add ...) to run weekly; that grants persistent scheduled execution and requires the agent platform to allow creation of such jobs. Scheduling itself is reasonable for a periodic auditor, but users should be aware the skill requests recurring autonomous runs and an automatic delivery channel.
What to consider before installing
This skill largely implements an internal audit pipeline (the scripts are benign and unit-tested), but there are gaps you should address before installing:

- Telegram delivery: the SKILL.md promises automatic Telegram messages, but the skill declares no TELEGRAM_BOT_TOKEN, CHAT_ID, or equivalent. Decide where audit messages should go and require explicit, securely stored credentials. Do not rely on implicit or global agent integrations unless you trust them.
- Automatic scheduling & data flow: the skill recommends adding a cron job that reads many workspace files (including memory and USER.md) and then delivers results externally. If those files contain secrets or private content, automatic external delivery could leak information. Require a dry-run mode and human approval before enabling scheduled runs or external delivery. Limit the set of files scanned (or redact sensitive files) and verify the deliver step uses only the approved destination.
- Validation: run the included tests and a dry-run locally to confirm outputs, and inspect any agent-level permissions required to add cron jobs. Add explicit environment variable requirements to the skill metadata (e.g., TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID) and an opt-in confirmation before the first send.

If you cannot confirm a controlled Telegram target and a secure way to store and send tokens, do not enable the automatic delivery/scheduling features; keep the skill manual and dry-run only.

Like a lobster shell, security has layers — review code before you run it.

latest: vk972em3gcwprgdfah25j1sdkgn833psf
191 downloads
0 stars
1 version
Updated 23h ago
v1.0.0-alpha
MIT-0

Skill Auditor

Automated weekly workspace health check. Evaluates skills, learnings, memory, and config files. Delivers actionable recommendations to Telegram.

Pipeline architecture

4-phase sequential pipeline with internal parallelism:

Phase 1: Digest (opencode-go/kimi-k2.5)

Ingest all workspace files in one long-context call:

  • skills/*/SKILL.md and associated scripts/tests
  • .learnings/LEARNINGS.md, ERRORS.md, FEATURE_REQUESTS.md
  • SOUL.md, AGENTS.md, USER.md, TOOLS.md, MEMORY.md, HEARTBEAT.md
  • recent memory/*.md files (last 14 days)

Output: audit-state.json with per-file summaries, staleness scores, overlap detection, gap analysis.

Optimization: hash watched files against state.json from last run. Skip unchanged files to prevent token burn.
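
A minimal sketch of that change-detection step, assuming state.json simply maps file paths to content hashes (the function names here are illustrative, not the actual build_audit_state.py API):

import hashlib
import json
from pathlib import Path

STATE_FILE = Path("memory/audits/state.json")  # location taken from "Runtime artifacts" below

def sha256_of(path: Path) -> str:
    """Hash a watched file's bytes so unchanged files can be skipped."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def files_needing_digest(watched: list[Path]) -> list[Path]:
    """Return only the files whose hash differs from the previous run's state.json."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {str(p): sha256_of(p) for p in watched}
    changed = [p for p in watched if previous.get(str(p)) != current[str(p)]]
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(current, indent=2))  # persist hashes for the next run
    return changed

Only the returned files would need to be re-summarized in the Phase 1 digest call.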

Also: web_search for best practices relevant to detected gaps.

Phase 2: Evaluate (parallel)

Phase 2A (opencode-go/glm-5): Score each skill on effectiveness, token efficiency, coverage, staleness, overlap, alignment with USER.md goals. Propose new skill ideas.

Phase 2B (openai-codex/gpt-5.3-codex): Score independently. Generate concrete refactor proposals. Propose new skill ideas.

Both output structured evaluation JSON.
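
The evaluation schema itself is not shown on this page; as a hedged guess at what Phase 3 (and merge_evaluations.py) consume, one evaluator's output might look roughly like the Python dict below. Every field name is an assumption, mirroring the criteria listed under "Evaluation criteria".

# Hypothetical shape of a single evaluator's output; the real schema is not published here.
example_evaluation = {
    "evaluator": "glm-5",
    "skills": [
        {
            "target": "skills/context-optimizer/SKILL.md",
            "scores": {"effectiveness": 4, "token_cost": 2, "alignment": 5},
            "freshness": "stale",
            "overlaps_with": [],
            "refactor_proposals": ["compress references section"],
        }
    ],
    "new_skill_ideas": [
        {"title": "changelog-summarizer", "rationale": "gap in release workflow"}
    ],
}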

Phase 3: Judge (openai-codex/gpt-5.4)

Receives: audit-state.json + both evaluation outputs.

  • Cross-validate proposals, resolve conflicts
  • Filter: only recommend changes with clear ROI
  • Classify each recommendation:
    • 🟢 safe refactor — low-risk, can PR directly after approval
    • 🟡 needs review — structural change or new skill creation
    • 🔴 informational — trend or observation, no action yet
  • Confidence threshold: ≥0.7 to recommend, ≥0.85 for safe-refactor classification

Output: final-recommendations.json
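
A minimal sketch of how those thresholds and classifications could be applied when producing final-recommendations.json; the mapping from recommendation type to severity is an assumption, and this is not the shipped merge_evaluations.py logic.

RECOMMEND_THRESHOLD = 0.70      # minimum confidence to surface a recommendation at all
SAFE_REFACTOR_THRESHOLD = 0.85  # minimum confidence for the green classification

def classify(rec: dict) -> str | None:
    """Apply the Phase 3 confidence floors; return None for anything below the recommend threshold."""
    if rec["confidence"] < RECOMMEND_THRESHOLD:
        return None
    if rec["type"] == "refactor" and rec["confidence"] >= SAFE_REFACTOR_THRESHOLD:
        return "green"   # safe refactor: low-risk, can PR directly after approval
    if rec["type"] in {"refactor", "new-skill", "config-update", "deprecate", "merge"}:
        return "yellow"  # structural change or new skill: needs review
    return "red"         # everything else is informational only

def finalize(candidates: list[dict]) -> list[dict]:
    """Keep only recommendations that clear the threshold and attach their severity."""
    kept = []
    for rec in candidates:
        severity = classify(rec)
        if severity is not None:
            kept.append({**rec, "severity": severity})
    return kept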

Phase 4: Deliver (main session)

Format recommendations as Telegram message and send. Archive to memory/audits/YYYY-MM-DD.json.

Recommendation format

Each recommendation:

{
  "id": "rec-001",
  "type": "refactor | new-skill | config-update | deprecate | merge",
  "severity": "green | yellow | red",
  "target": "skills/context-optimizer/SKILL.md",
  "title": "compress context-optimizer references section",
  "rationale": "...",
  "proposed_action": "...",
  "confidence": 0.87,
  "agreed_by": ["glm-5", "gpt-5.3-codex"]
}

Telegram delivery format

📋 Weekly Skill Audit — YYYY-MM-DD

🟢 Safe refactors (N):
  1. [title] → [one-line action]

🟡 Needs review (N):
  2. [title]

🔴 Informational (N):
  3. [title]

Reply with a number for details, or "approve 1,2" to greenlight.

If no strong recommendations: send "no action needed this week" one-liner.

If quality score is low across all recommendations: send nothing.
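
A rough sketch of that grouping and fallback logic, assuming the recommendation dicts from final-recommendations.json; it is not the actual format_telegram.py, and the low-quality cutoff below is an invented placeholder since the page does not define "quality score".

from datetime import date

HEADERS = {"green": "🟢 Safe refactors", "yellow": "🟡 Needs review", "red": "🔴 Informational"}

def build_message(recs: list[dict], min_quality: float = 0.5) -> str | None:
    """Return the Telegram text, the no-action one-liner, or None when nothing should be sent."""
    if not recs:
        return "no action needed this week"
    if all(r["confidence"] < min_quality for r in recs):
        return None  # quality too low across the board: send nothing
    lines = [f"📋 Weekly Skill Audit — {date.today().isoformat()}", ""]
    counter = 1
    for severity in ("green", "yellow", "red"):
        group = [r for r in recs if r["severity"] == severity]
        if not group:
            continue
        lines.append(f"{HEADERS[severity]} ({len(group)}):")
        for rec in group:
            suffix = f" → {rec['proposed_action']}" if severity == "green" else ""
            lines.append(f"  {counter}. {rec['title']}{suffix}")
            counter += 1
        lines.append("")
    lines.append('Reply with a number for details, or "approve 1,2" to greenlight.')
    return "\n".join(lines)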

Scheduling

Primary: OpenClaw cron, every 7 days (Sunday 10:00 AM ET):

openclaw cron add --schedule "0 10 * * 0" --model openai-codex/gpt-5.4 --label skill-auditor-weekly --prompt "Read skills/skill-auditor/SKILL.md and execute the full audit pipeline. Deliver results to Telegram."

State tracking: memory/audits/last-run.json records last execution timestamp. Heartbeat checks if last run was >10 days ago and alerts.
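
The heartbeat check could be implemented along these lines; the key inside last-run.json is an assumption since the page does not document it.

import json
from datetime import datetime, timezone
from pathlib import Path

LAST_RUN = Path("memory/audits/last-run.json")
MAX_GAP_DAYS = 10  # alert threshold from the paragraph above

def audit_is_overdue() -> bool:
    """True when the last recorded audit is more than 10 days old, or no record exists."""
    if not LAST_RUN.exists():
        return True
    stamp = json.loads(LAST_RUN.read_text())["last_run"]  # assumed field name
    last = datetime.fromisoformat(stamp)
    if last.tzinfo is None:
        last = last.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - last).days > MAX_GAP_DAYS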

Manual trigger: User says "audit skills" or "review workflow".

Evaluation criteria

Each file/skill scored on:

  1. Effectiveness — achieves stated purpose? (1-5)
  2. Token cost — bloated? shorter without losing value? (1-5)
  3. Coverage — workflow gaps not addressed by any skill? (binary + description)
  4. Freshness — last meaningful update vs relevance decay
  5. Overlap — duplicates content in another file/skill? (list pairs)
  6. Alignment — matches USER.md goals and SOUL.md persona? (1-5)

Safety rules

  • No automatic file edits. Recommendations are advisory until approved.
  • Green recommendations produce diff previews (see the sketch after this list); actual changes require explicit "approve" reply.
  • Respect all workspace GitHub handling rules — no repo-visible changes without Omar's approval.
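
The diff previews mentioned above could be generated with Python's difflib; this is a sketch under that assumption, not part of the shipped scripts, and it never writes to disk.

import difflib
from pathlib import Path

def diff_preview(target: str, proposed_text: str) -> str:
    """Unified diff between the current file and the proposed refactor, for human review only."""
    current = Path(target).read_text().splitlines(keepends=True)
    proposed = proposed_text.splitlines(keepends=True)
    return "".join(
        difflib.unified_diff(current, proposed, fromfile=target, tofile=f"{target} (proposed)")
    )

Applying the change would still require the explicit "approve" reply described in the safety rules.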

File structure

skills/skill-auditor/
├── SKILL.md
├── scripts/
│   ├── build_audit_state.py
│   ├── merge_evaluations.py
│   └── format_telegram.py
└── tests/
    ├── test_build_audit_state.py
    ├── test_merge_evaluations.py
    └── test_format_telegram.py

Runtime artifacts (not tracked in repo):

memory/audits/
├── last-run.json
├── YYYY-MM-DD.json
└── state.json (file hashes for change detection)

Validation checklist

  1. All 3 helper scripts exist and pass unit tests.
  2. Dry-run mode completes full pipeline without sending messages.
  3. At least one real audit cycle delivers a well-formatted Telegram message.
  4. Recommendations are advisory-only (no auto-edits without approval).
  5. Unchanged files are skipped via hash comparison.
  6. Confidence thresholds are enforced.
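
Checklist item 1 could be verified with a small helper like this; it assumes pytest is installed and the paths match the file structure above.

import subprocess
from pathlib import Path

SCRIPTS = ["build_audit_state.py", "merge_evaluations.py", "format_telegram.py"]

def check_scripts_and_tests(root: str = "skills/skill-auditor") -> bool:
    """Confirm the three helper scripts exist and their unit tests pass."""
    base = Path(root)
    missing = [name for name in SCRIPTS if not (base / "scripts" / name).exists()]
    if missing:
        print("missing scripts:", missing)
        return False
    result = subprocess.run(["pytest", str(base / "tests")], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0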
