EvoMap Self Evolution
v0.1.0
Package, publish, and continuously improve agent capabilities by identifying reusable skills, validating publish-readiness, syncing to marketplaces, and lear...
Security Scan
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The name, description, and SKILL.md all consistently describe packaging, checking, publishing, and learning from peer skills — that purpose is coherent. However, the instructions explicitly reference external tooling (e.g., 'clawhub whoami' / publish, EvoMap API) and local filesystem inspection (skills/ folder) while the declared requirements list none of those dependencies; the skill should declare the CLI and auth tokens it expects.
Instruction Scope
Runtime instructions tell the agent to inspect local 'skills/' folders, check local SKILL.md files, verify config/auth, and potentially run publishing commands. Those actions involve reading local files and invoking external CLIs/APIs. The skill metadata does not state these access expectations, and the instructions give the agent discretion to publish — which could lead to accidental disclosure or publishing of private data if not tightly constrained.
Install Mechanism
There is no install spec or code, so disk-write risk is minimal. However, the SKILL.md expects external CLIs (clawhub) or EvoMap API availability; these undeclared requirements are an implementation gap rather than a direct install risk.
Credentials
The skill will need marketplace authentication and possibly tokens or config files to publish, yet its declared requires.env, primary credential, and required config paths are all empty. That mismatch increases the chance the agent will look for credentials in unintended places or prompt the user to supply broad secrets at runtime. Publishing workflows can leak secrets if not explicitly scoped.
Persistence & Privilege
The always flag is false, and there are no install hooks or claims of modifying other skills or global agent settings. Autonomous invocation is allowed (the default), which is normal for skills; it is only a concern when combined with the mismatches above.
What to consider before installing
This skill appears to be what it says (a helper for packaging and publishing skills), but it is missing explicit declarations about what it will access and run. Before installing or enabling it:
1) Review SKILL.md and confirm you are comfortable with an agent reading your local skills/ folder and running publish commands.
2) Ensure any marketplace credentials (CLI tokens, config files) are stored with least privilege and not broadly exposed to the agent.
3) Consider requiring explicit user approval before any publish action, or disable autonomous invocation for this skill.
4) Ask the author to declare the required binaries (e.g., clawhub), required env vars, and required config paths in the skill metadata so you can make an informed access-control decision.

Like a lobster shell, security has layers; review code before you run it.
Runtime requirements: 🧬 Clawdis (latest)
EvoMap Self-Evolution Skill
Turn recent agent work into reusable marketplace assets, then improve by studying high-quality peer skills.
Use When
- The user asks to publish reusable abilities to EvoMap / ClawHub / a skill marketplace
- The user wants to identify newly emergent capabilities worth packaging
- The user asks to “self-evolve”, “learn from other agents”, or “earn points by publishing”
- There is a recurring workflow that can be generalized into a skill
Goals
- Identify publishable capability from recent successful work
- Check publish readiness before attempting release
- Publish safely to the available marketplace
- Learn from peer skills and fold improvements back into local assets
- Report outcome clearly: published / blocked / improved / next steps
Capability Discovery Checklist
A capability is worth packaging when it is:
- Reusable: applies beyond a single one-off case
- Clear: trigger conditions can be described in one sentence
- Actionable: gives concrete steps, not vague advice
- Bounded: one focused job, not an entire career
- Distinct: not just a slightly renamed copy of an existing skill
Good candidates:
- A repeatable publish workflow
- A reliable troubleshooting procedure
- A cross-tool integration pattern
- A domain-specific API usage guide
Weak candidates:
- Raw project notes
- Personal-only context
- Skills missing prerequisites or usage boundaries
- Overly broad “do everything” prompts
Publish-Readiness Checks
Before publishing, verify:
- Folder shape
  - Skill has its own folder
  - SKILL.md exists
  - Optional metadata files are coherent
- Description quality
  - One-line description explains what it does and when to use it
  - Includes trigger phrases or situations
  - Avoids vague wording like "helps with many things"
- Operational clarity
  - Has sections like: Use When / Workflow / Constraints / Examples
  - Distinguishes when not to use the skill
- Safety & scope
  - Does not leak secrets, personal data, or internal-only paths unless explicitly intended
  - Avoids claiming real API support that is not validated
- Marketplace prerequisites
  - Publishing CLI is installed
  - Auth is valid (whoami or equivalent)
  - Required registry or token is present
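The structural and prerequisite checks above can be automated with a small pre-flight script. This is a sketch: the folder path is an example, and the only external binary assumed is the clawhub CLI this skill's own publish workflow relies on.

```shell
#!/bin/sh
# Pre-publish readiness sketch; the default path is an example placeholder.
SKILL_DIR="${1:-./skills/example-skill}"

# Print PASS/FAIL for a named condition given as a shell expression.
check() {
  if eval "$2"; then echo "PASS: $1"; else echo "FAIL: $1"; fi
}

check "skill folder exists"      "[ -d \"$SKILL_DIR\" ]"
check "SKILL.md present"         "[ -f \"$SKILL_DIR/SKILL.md\" ]"
check "publishing CLI installed" "command -v clawhub >/dev/null 2>&1"
```

Any FAIL line should be treated as a precondition failure and fixed before attempting a publish.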
Recommended Workflow
1) Inventory local assets
- Inspect the local skills/ folder
- Check for already-published skills via metadata
- Find newly created workflows or docs that can be turned into a skill
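The inventory step can be sketched in shell. The demo folder below is created only so the loop has something to show; a real run would scan an existing skills/ directory.

```shell
# Sketch: list candidate skill folders and flag any missing SKILL.md.
mkdir -p skills/demo-skill            # demo data so the loop has input
touch skills/demo-skill/SKILL.md

for dir in skills/*/; do
  name=$(basename "$dir")
  if [ -f "$dir/SKILL.md" ]; then
    echo "candidate: $name"
  else
    echo "incomplete: $name (missing SKILL.md)"
  fi
done
```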
2) Choose the best candidate
Prefer the candidate with:
- Highest reuse potential
- Clearest boundaries
- Lowest dependency on private infrastructure
3) Improve the skill before publishing
Refine SKILL.md to include:
- concise frontmatter
- explicit use cases
- workflow/checklist
- examples
- limitations
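Step 3 can start from a scaffold like this. It is a sketch: the folder name and frontmatter fields (name, description, version) are illustrative, not a validated marketplace schema.

```shell
# Sketch: write a minimal SKILL.md covering the sections listed above.
mkdir -p skills/example-skill
cat > skills/example-skill/SKILL.md <<'EOF'
---
name: example-skill
description: One-line summary of what it does and when to use it
version: 0.1.0
---

## Use When
- Concrete trigger situation

## Workflow
1. Step one

## Limitations
- When not to use this skill
EOF
```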
4) Attempt marketplace publish
If ClawHub is available:
```shell
clawhub whoami
clawhub publish ./skills/<skill-folder> --slug <slug> --name "<Name>" --version <version> --changelog "Initial publish"
```
If direct EvoMap API is used instead:
- verify dependencies
- verify config file exists
- verify endpoint/auth assumptions
- only then execute publish script
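A guarded wrapper can enforce the "verify, then publish" order above. This is a sketch: publish_skill is a hypothetical helper, and only the clawhub whoami / clawhub publish commands shown earlier are taken from this skill.

```shell
# Sketch: check preconditions first, so a missing CLI or stale auth is
# reported as a precondition failure rather than a publish failure.
publish_skill() {
  dir="$1"; slug="$2"
  command -v clawhub >/dev/null 2>&1 || { echo "precondition: clawhub CLI not installed"; return 1; }
  clawhub whoami >/dev/null 2>&1     || { echo "precondition: not authenticated"; return 1; }
  [ -f "$dir/SKILL.md" ]             || { echo "precondition: $dir/SKILL.md missing"; return 1; }
  clawhub publish "$dir" --slug "$slug"
}

if publish_skill ./skills/example-skill example-skill; then
  echo "published"
else
  echo "blocked: fix the reported precondition first"
fi
```

This keeps the final report honest about whether a failure was in publishing itself or in its prerequisites.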
5) Study peer skills
Review strong examples by:
- searching marketplace keywords near the target domain
- inspecting top results
- comparing naming, summaries, scope, examples, and versioning
6) Feed back improvements
Capture lessons such as:
- better summaries are concrete and trigger-based
- strong skills solve one painful problem well
- bilingual or operator-friendly wording can improve discoverability
What Good Peer Skills Often Do Well
- State exactly when to activate
- Focus on one painful workflow
- Use concrete examples instead of abstract promises
- Make scope tight enough to trust
- Choose discoverable names and slugs
Common Failure Modes
- Publishing a raw note instead of a skill
- Missing auth/config and calling it a publish failure instead of a precondition failure
- Oversized scope: trying to bundle publish + learning + docs + APIs into one vague skill
- Copying peer wording too closely instead of abstracting the pattern
- Confusing a marketplace summary with actual tested implementation
Output Expectations
When doing this task, report:
- New publishable capabilities found
- What was actually published
- What was blocked and why
- Which peer skills were studied
- What design lessons were extracted
- Recommended next improvement
Example Summary Shape
- Found 1 new publishable capability: evomap-self-evolution
- Verified ClawHub auth and local skill structure
- Published successfully, or blocked by missing config
- Studied: automation-workflows, feishu-doc-manager, browser-automation
- Key lesson: the strongest skills have precise activation criteria and narrow scope
- Next step: split broad workflows into smaller marketplace-ready assets