Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Automate

Identify tasks that waste tokens. Scripts don't hallucinate, don't cost per run, and don't fail randomly. Spot automation opportunities and build them.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 · 1.2k · 4 current installs · 5 all-time installs
by Iván (@ivangdavila)
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (spot token waste and create scripts) match the SKILL.md, signals.md, and templates.md contents: all materials focus on detecting repetitive, deterministic tasks and providing script templates. One mismatch: the skill declares no required binaries or environment variables, yet the templates assume common command-line tools (jq, python3, curl, git, gh, npx, the macOS `security` CLI). That is plausible for a general automation skill, but the absence of explicit binary requirements is an operational gap the user should be aware of.
Instruction Scope
Instructions stay within the stated purpose (identify automation candidates, standardize, and produce scripts). However, several templates perform local file operations, run git, call network endpoints, or retrieve credentials from a keychain (one example uses `security find-generic-password`), so an agent or user following the templates could access local files, run commands, and call external APIs. Those behaviors are coherent with automation but require manual review before execution.
Install Mechanism
No install spec and no code files — lowest-risk delivery model. Nothing is downloaded or written by the skill itself.
Credentials
The skill declares no required environment variables or credentials, which matches its advisory nature. Templates do show patterns for fetching tokens (keychain) or using CLI auth (gh, npx), but they do not demand secrets from the platform. Verify any templates that access stored credentials before use; the skill does not request broad credentials itself.
Persistence & Privilege
The skill sets `always: false` and makes no attempt to modify other skills or system-wide agent settings. It is instruction-only and does not request persistent presence or elevated privileges.
Assessment
This skill is an advisory library of patterns and scripts. It appears coherent and not malicious, but exercise caution before running any suggested template:

- Review each script line by line before executing; templates include file operations, git pushes, network calls, and an example that pulls a token from the macOS keychain.
- Don't run templates with elevated privileges or in production directories until they have been tested in a sandbox.
- Install and verify the required CLI tools (jq, python3, curl, git, gh, npx, etc.) yourself; the skill doesn't declare them.
- Replace placeholder endpoints (e.g., api.example.com) and verify API tokens and their sources; never copy a template that fetches credentials without understanding where they come from.

If you want a stricter posture, only use the detection and proposal parts of the skill and have a human author the scripts rather than auto-executing templates.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk9707dmpqye1gxqxazjx5p2exn8117ka

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Core Principle

LLMs are expensive, slow, and probabilistic. Scripts are free, fast, and deterministic.

Every time you do something twice that could be scripted, you're wasting:

  • Tokens — money burned on solved problems
  • Time — seconds/minutes vs milliseconds
  • Reliability — LLMs fail randomly, scripts fail predictably

Check signals.md for detection patterns. Check templates.md for common script patterns.


The Automation Test

Before doing any task, ask:

  1. Is this deterministic? Same input → same output every time?
  2. Is this repetitive? Will this happen again?
  3. Is this rule-based? Can I write down the exact steps?

If yes to all three → script it, don't LLM it.


Script vs LLM Decision Matrix

| Task type | Script | LLM |
| --- | --- | --- |
| Format conversion (JSON↔YAML) | ✓ | |
| Text transformation (regex) | ✓ | |
| File operations (rename, move) | ✓ | |
| Data validation | ✓ | |
| API calls with fixed logic | ✓ | |
| Git workflows | ✓ | |
| Judgement calls | | ✓ |
| Creative content | | ✓ |
| Ambiguous inputs | | ✓ |
| One-time unique tasks | | ✓ |
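
The first row of the matrix is a good example: pretty-printing or normalizing structured data needs no model at all. A minimal sketch using only the python3 standard library (no jq required); the helper name is made up for illustration:

```shell
# format_json: deterministic JSON pretty-printer. Reads JSON on stdin,
# writes key-sorted, indented JSON on stdout using only the python3 stdlib.
# Same input gives the same output every time, at zero token cost.
format_json() {
  python3 -m json.tool --sort-keys
}
```

Usage: `printf '{"b": 1, "a": 2}' | format_json` emits the object with keys sorted and indented, identically on every run.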

Automation Triggers

When you notice yourself:

  • Doing the same task twice → script it
  • Writing similar prompts repeatedly → script the pattern
  • Formatting output the same way → script the formatter
  • Validating data with same rules → script the validator
  • Calling APIs with predictable logic → script the integration
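
The last trigger, predictable API calls, usually also wants predictable failure handling. A sketch of a retry wrapper (the name `retry` is illustrative, and a bash-like shell with `seq` is assumed) that a script could wrap around `curl` or `gh` calls:

```shell
# retry: run a command up to N attempts, pausing briefly between failures.
# Illustrative helper for wrapping curl/gh calls in automation scripts.
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    echo "retry: attempt $i/$attempts failed: $*" >&2
    sleep 1
  done
  return 1
}
```

For example, `retry 3 curl -fsS https://api.example.com/health` retries the placeholder endpoint up to three times and fails loudly (nonzero exit, one stderr line per failed attempt) if none succeed.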

Automation Proposal Format

When you spot an opportunity:

🔧 Automation opportunity

Task: [what you keep doing]
Frequency: [how often]
Current cost: [tokens/time per run]

Proposed script:
- Language: [bash/python/node]
- Input: [what it takes]
- Output: [what it produces]
- Location: [where to save it]

Estimated savings: [tokens/time saved per month]

Should I write it?

Script Standards

When writing automation:

  1. Single purpose — one script, one job
  2. Idempotent — safe to run multiple times
  3. Documented — usage in comments at top
  4. Logged — output what you're doing
  5. Fail loud — exit codes, error messages
  6. No secrets hardcoded — env vars or keychain
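
Standards 5 and 6 can be enforced with a few lines at the top of every script. A sketch of a guard function (the name `require_env` is made up for illustration; it relies on bash indirect expansion):

```shell
# require_env: fail loudly if a required environment variable is unset or
# empty, so secrets come from the environment instead of being hardcoded.
# Illustrative helper; ${!name} is bash indirect parameter expansion.
require_env() {
  local name="$1"
  if [ -z "${!name:-}" ]; then
    echo "error: required env var $name is not set" >&2
    return 78   # EX_CONFIG from sysexits.h
  fi
}
```

Calling `require_env API_TOKEN` at the top of a script turns a silent misconfiguration into an immediate, descriptive failure.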

Tracking Automations

Document what you've built:

### Active Scripts
- scripts/format-json.sh — JSON prettifier [saved ~2k tokens/week]
- scripts/deploy-staging.sh — one-command deploy [saved 5min/deploy]
- scripts/sync-env.sh — env file sync [eliminated manual errors]

### Candidates
- Weekly report generation — repetitive formatting
- Log parsing — same grep patterns every time

The 3x Rule

If you do something 3 times, it must become a script.

  • 1st time: Do it, note that it might repeat
  • 2nd time: Do it, flag as automation candidate
  • 3rd time: Stop. Write the script first, then run it.

Anti-Patterns

| Don't | Do instead |
| --- | --- |
| Re-prompt for same transformation | Write a script once |
| Use LLM for data validation | Write validation rules |
| Burn tokens on formatting | Use formatters (prettier, jq, etc.) |
| Ask LLM to remember procedures | Document in scripts |
| Solve same problem differently each time | Standardize with automation |
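
The second row, data validation, is worth a concrete sketch: parse-checking JSON files is pure rule-following, so a script beats a model on cost and reliability. The helper name is illustrative, and python3 is assumed available:

```shell
# check_json: report whether each file argument parses as JSON, using the
# python3 stdlib as the validator. Read-only, so it is safe to re-run.
check_json() {
  local status=0 f
  for f in "$@"; do
    if python3 -m json.tool "$f" > /dev/null 2>&1; then
      echo "ok   $f"
    else
      echo "FAIL $f" >&2
      status=1
    fi
  done
  return "$status"
}
```

It prints one status line per file and exits nonzero if any file fails, so it composes cleanly with CI or a pre-commit hook.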

Every script written = permanent token savings. Compound your efficiency.

Files

3 total
