Frankenstein

v1.2.0

Combine the best parts of multiple skills into one. Searches ClawHub, GitHub, skills.sh, skillsmp.com and other AI skill repos. Analyzes each safely, compares features, and builds a combined 'Frankenstein' skill with the best of each. Uses skill-auditor for security scanning and sandwrap for safe analysis. Use when: (1) Multiple skills exist for same purpose, (2) Want best-of-breed combination, (3) Building a comprehensive skill from fragments.

4 stars · 1.4k downloads · 3 current · 3 all-time
by Ruben Quispe (@rubenaquispe)
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's stated purpose (search, analyze, and combine other skills) legitimately requires network access, CLIs (clawhub, skill-auditor, sandwrap, skill-creator), and the ability to install and read other skills. However, the registry metadata declares no required binaries, no install instructions, and no environment variables. That is incoherent: the runtime instructions assume tools and capabilities that the metadata neither requests nor justifies.
Instruction Scope
SKILL.md instructs the agent to search many public repos, install each discovered skill into a temp directory, run security scans, analyze code, and 'include scripts from winners' when building the combined skill. Those steps involve writing third-party code to disk, executing analysis on it, and copying code into a new artifact — actions that go well beyond simple read-only querying. The instructions also call for spawning sub-agents (sessions_spawn) and using a powerful reasoning model by default. The scope is broad and grants the agent discretion to fetch and assemble arbitrary code.
Install Mechanism
The skill itself has no install spec (instruction-only), which is low-risk for the skill bundle. However the runtime workflow explicitly installs third-party skills discovered at runtime into temporary directories and then copies scripts into a new skill. That runtime download-and-extract behavior (not captured in the metadata) is a high-risk activity because the sources may be arbitrary and the install mechanism/URLs are not constrained or validated in the metadata.
Credentials
The metadata lists no required environment variables, but the SKILL.md workflow will likely encounter rate-limited or private repositories (GitHub, ClawHub, the skills marketplace) that commonly require tokens or credentials. The skill neither declares these needs nor explains how credentials will be used or protected. Additionally, copying third-party scripts into a new skill could embed secrets or credentials in the generated artifact if not handled carefully.
Persistence & Privilege
The skill does not set 'always: true' and does not request persistent system-wide privileges in its metadata. However, its runtime behavior (spawning sub-agents by default, using high-capability models, installing and writing code) increases its effective privilege and blast radius if run autonomously. This is not flagged in the metadata and should be considered before allowing autonomous invocation.
What to consider before installing

- Metadata mismatch: SKILL.md requires tools (clawhub, skill-auditor, sandwrap, skill-creator) and network installs, but the skill package declares none. Confirm those CLIs exist and are trustworthy before running.
- Credential needs: The workflow may need GitHub or registry tokens to access or clone some skills. Do not provide secrets unless you understand how they will be used and stored; insist the skill declare any required env vars and explain their use.
- Arbitrary third-party code: This skill explicitly downloads, inspects, and then incorporates scripts from other skills. Even with automated scanning, that creates a risk of including malicious or poorly sanitized code. Only run it in an isolated environment (sandbox, ephemeral VM/container) and require human review and approval for any generated artifact.
- Verify tooling: Ask the publisher (or your admin) for the provenance of the tools it references (skill-auditor, sandwrap, skill-creator). Who maintains them? Are they installed from trusted sources?
- Limit autonomy: Prefer a manual-approval mode (disable autonomous invocation, or require user confirmation before downloads, installs, or creating new skills) and avoid using high-capability models automatically, to reduce blast radius.

If you want to proceed safely: run the workflow in a tightly controlled sandbox with network and credential restrictions, confirm the presence and integrity of the referenced scanning/sandbox tools, and require that any generated skill be reviewed and approved by a human before it is saved or given runtime privileges.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971xbkk2zs8123q6p2zvhy8fn80qk9y
1.4k downloads · 4 stars · 3 versions
Updated 1mo ago
v1.2.0 · MIT-0

Frankenstein

Model Requirements

Default: Opus (or best available thinking model)

Frankenstein requires deep reasoning to:

  • Compare multiple skill approaches
  • Identify subtle methodology differences
  • Synthesize the best parts creatively
  • Catch security/quality issues others miss

Only use a smaller model if the user explicitly requests it for cost reasons. The synthesis quality depends heavily on reasoning depth.

Create monster skills by combining the best parts of existing ones.

Quick Start

Frankenstein me an SEO audit skill

How It Works

Step 1: Search All Sources

Search EVERY AI skills repository for matching skills:

1. ClawHub (primary)

clawhub search "[topic]" --registry "https://clawhub.ai"

2. GitHub

Search: "[topic] AI skill" OR "[topic] claude skill" OR "[topic] agent skill"
Look for: SKILL.md, CLAUDE.md, or similar agent instruction files

3. skills.sh

https://skills.sh/search?q=[topic]

4. skillsmp.com (Skills Marketplace)

https://skillsmp.com/search/[topic]

5. Other sources to check:

  • Anthropic's skill examples
  • OpenAI GPT configurations (convert to skill format)
  • LangChain agent templates
  • AutoGPT/AgentGPT skill repos

Gather all candidates before filtering. More sources = better Frankenstein.
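The gather-then-filter step above can be sketched as a simple merge that de-duplicates skills reported by more than one source. This is an illustrative sketch only: the source names and skill lists are hypothetical examples, and in the real workflow the lists would come from the searches above.

```python
# Merge candidate skills from several sources, de-duplicating by name and
# remembering which source first reported each skill.
def gather_candidates(results_by_source):
    """results_by_source: {source_name: [skill_name, ...]} -> [(skill, source)]."""
    seen = {}
    for source, skills in results_by_source.items():
        for skill in skills:
            # Keep the first occurrence; later sources only contribute new names.
            seen.setdefault(skill, source)
    return list(seen.items())

# Hypothetical search results for an "SEO audit" topic.
candidates = gather_candidates({
    "clawhub": ["seo-audit", "seo-optimizer"],
    "github": ["seo-audit", "technical-seo"],
    "skills.sh": ["seo-checker"],
})
```

The point of the sketch is ordering and de-duplication: every source is consulted before anything is filtered, matching "gather all candidates before filtering."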

Step 2: Security Scan

Run each skill through skill-auditor. Skip any with HIGH risk scores.

For each skill found:

  • Install to temp directory
  • Run skill-auditor scan
  • Score >= 7 = SAFE (proceed)
  • Score < 7 = RISKY (skip with warning)
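The scoring gate above amounts to a threshold filter. A minimal sketch, assuming the auditor produces a 0-10 score per skill (the scores below are made up; in the real workflow they come from a skill-auditor scan):

```python
# Step 2 gate: keep skills scoring >= 7, skip the rest with a warning.
SAFE_THRESHOLD = 7

def triage(scores):
    """scores: {skill_name: auditor_score} -> (safe_names, risky_names)."""
    safe = [name for name, score in scores.items() if score >= SAFE_THRESHOLD]
    risky = [name for name, score in scores.items() if score < SAFE_THRESHOLD]
    for name in risky:
        print(f"WARNING: skipping {name} (score {scores[name]} < {SAFE_THRESHOLD})")
    return safe, risky

safe, risky = triage({"seo-audit": 8, "audit-website": 7, "technical-seo": 4})
```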

Step 3: Safe Analysis

Analyze safe skills in sandwrap read-only mode.

For each safe skill, extract:

  • Core features (what it does)
  • Methodology (how it approaches the problem)
  • Scripts/tools (reusable code)
  • Unique strengths (what makes it special)
  • Weaknesses (what's missing)
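The extraction bullets above suggest one record per analyzed skill. A hypothetical shape for that record (field names simply mirror the bullets; the actual extraction would happen inside the sandwrap session):

```python
from dataclasses import dataclass, field

@dataclass
class SkillProfile:
    name: str
    features: list                                 # core features (what it does)
    methodology: str                               # how it approaches the problem
    scripts: list = field(default_factory=list)    # reusable code
    strengths: list = field(default_factory=list)  # what makes it special
    weaknesses: list = field(default_factory=list) # what's missing

# Hypothetical example profile.
profile = SkillProfile(
    name="seo-audit",
    features=["E-E-A-T coverage", "methodology depth"],
    methodology="checklist-driven manual audit",
)
```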

Step 4: Compare

Build comparison matrix:

| Feature   | skill-A | skill-B  | skill-C | Winner |
|-----------|---------|----------|---------|--------|
| Feature 1 | Yes     | No       | Yes     | A, C   |
| Feature 2 | Basic   | Advanced | None    | B      |
| Feature 3 | No      | No       | Yes     | C      |
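Winner selection from a matrix like this can be sketched as ranking each skill's support level per feature and keeping every skill tied for the top rank. The ranking scale and matrix values below are illustrative assumptions, not part of the skill:

```python
# Map qualitative support levels to a rough ordinal rank.
RANK = {"No": 0, "None": 0, "Basic": 1, "Partial": 1, "Advanced": 2, "Yes": 2}

def pick_winners(matrix):
    """matrix: {feature: {skill: support_level}} -> {feature: [winning skills]}."""
    winners = {}
    for feature, support in matrix.items():
        best = max(RANK[level] for level in support.values())
        winners[feature] = [s for s, lvl in support.items() if RANK[lvl] == best]
    return winners

winners = pick_winners({
    "Feature 1": {"skill-A": "Yes", "skill-B": "No", "skill-C": "Yes"},
    "Feature 2": {"skill-A": "Basic", "skill-B": "Advanced", "skill-C": "None"},
})
```

Note a tie produces multiple winners (as in the "A, C" row), which is exactly what Step 5 then resolves during synthesis.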

Step 5: Synthesize

Take the winning approach for each feature:

  • Feature 1 methodology from skill-A
  • Feature 2 implementation from skill-B
  • Feature 3 approach from skill-C

Step 6: Build Initial Draft

Use skill-creator to assemble the Frankenstein skill:

  • Combine winning features
  • Resolve conflicts (if two approaches clash)
  • Write unified SKILL.md
  • Include scripts from winners
  • Document sources

Step 7: Vetting Loop (CRITICAL)

Run plan → test → improve loop until 3 stable passes:

Pass 1:
  1. Read draft
  2. Try to break it (find holes, contradictions, gaps)
  3. Document issues
  4. Fix them
  
Pass 2:
  1. Read improved version
  2. Actively try to find MORE issues
  3. Fix any found
  
Pass 3+:
  Continue until you genuinely try to improve
  but can't find significant issues

What to look for each pass:

  • Missing features that sources had
  • Contradictions between combined approaches
  • Vague instructions that aren't actionable
  • Token waste (verbose where concise works)
  • Security gaps
  • Broken references to files/scripts

Document in VETTING-LOG.md:

  • Each pass number
  • Issues found
  • Fixes applied
  • Why considered stable

Only proceed when:

  • 3 consecutive passes with no major issues
  • Minor issues documented as known limitations
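The loop's stopping rule ("3 consecutive passes with no major issues") can be sketched as follows. `find_issues` is a hypothetical stand-in for the read/break/fix review of each pass, and the cap on passes is an assumed safety valve:

```python
# Run vetting passes until 3 consecutive passes report no major issues.
def vet(find_issues, max_passes=20, required_clean=3):
    clean_streak, log = 0, []
    for n in range(1, max_passes + 1):
        issues = find_issues(n)                    # pass n: try to break the draft
        log.append({"pass": n, "issues": issues})  # goes into VETTING-LOG.md
        clean_streak = 0 if issues else clean_streak + 1
        if clean_streak >= required_clean:
            return log  # stable: 3 consecutive clean passes
    raise RuntimeError("vetting did not stabilize; escalate to human review")

# Toy reviewer: issues found on the first two passes, then stable.
log = vet(lambda n: ["missing feature"] if n <= 2 else [])
```

Note that any new issue resets the streak, so fixes applied late in the loop are themselves re-vetted three times before the draft is considered stable.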

Step 8: Human Review

Present the vetted skill for approval:

  • Show what came from where
  • Highlight conflicts resolved
  • Show vetting summary
  • Ask for final OK before saving

Output

Creates a new skill with:

  • Best features from all analyzed skills
  • Clear attribution (credits source skills)
  • Security-scanned components only
  • Unified documentation

Example Session

User: "Frankenstein me an SEO audit skill"

Frankenstein:

Searching ClawHub for "SEO audit"...
Found 5 skills:
  1. seo-audit (coreyhaines31)
  2. audit-website (squirrelscan)
  3. seo-optimizer (local)
  4. technical-seo (unknown)
  5. seo-checker (unknown)

Security scanning...
  ✓ seo-audit: SAFE (8/10)
  ✓ audit-website: SAFE (7/10)
  ✓ seo-optimizer: SAFE (9/10)
  ✗ technical-seo: RISKY (4/10) - skipping
  ✗ seo-checker: RISKY (3/10) - skipping

Analyzing 3 safe skills...

Comparison:
| Feature              | seo-audit | audit-website | seo-optimizer |
|----------------------|-----------|---------------|---------------|
| Methodology depth    | ★★★★★     | ★★★           | ★★★★          |
| Automated rules      | None      | 230+          | 50+           |
| Auto-fix capability  | No        | No            | Yes           |
| CLI tool             | No        | squirrel      | No            |
| E-E-A-T coverage     | Yes       | Yes           | Partial       |

Recommended combination:
- Methodology: seo-audit (best framework)
- Rules engine: audit-website (230+ rules via squirrel CLI)
- Auto-fix: seo-optimizer (automated fixes)
- E-E-A-T: seo-audit (comprehensive)

Build this Frankenstein? [Yes/No]

Dependencies

This skill uses:

  • clawhub CLI (search/install)
  • skill-auditor (security scanning)
  • sandwrap (safe analysis)
  • skill-creator (building)

Spawning Sub-Agents

When spawning analysis sub-agents, always use Opus (or the best available thinking model) unless the user explicitly requests otherwise:

sessions_spawn(
  task: "FRANKENSTEIN ANALYSIS: [topic]...",
  model: "opus"
)

Cheaper models miss nuances between skills and produce shallow combinations.

Limitations

  • Only combines publicly available skills
  • Skips skills that fail security scan
  • Cannot resolve deep architectural conflicts
  • Human judgment needed for final synthesis
  • Quality depends on available skills

Credits

When a Frankenstein skill is built, it includes attribution:

## Sources
Built from best parts of:
- seo-audit by coreyhaines31 (methodology)
- audit-website by squirrelscan (rules engine)
- seo-optimizer (auto-fix)
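A block like the one above can be generated mechanically from the comparison results. A minimal sketch, reusing the example sources (the tuple shape is an assumption of this sketch):

```python
# Build the "## Sources" attribution block from (name, author, contribution)
# tuples; author may be None for skills with an unknown publisher.
def attribution(sources):
    lines = ["## Sources", "Built from best parts of:"]
    for name, author, part in sources:
        by = f" by {author}" if author else ""
        lines.append(f"- {name}{by} ({part})")
    return "\n".join(lines)

text = attribution([
    ("seo-audit", "coreyhaines31", "methodology"),
    ("audit-website", "squirrelscan", "rules engine"),
    ("seo-optimizer", None, "auto-fix"),
])
```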
