Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Hire

Interactive hiring wizard to set up a new AI team member. Guides the user through role design via conversation, generates agent identity files, and optionally sets up performance reviews. Use when the user wants to hire, add, or set up a new AI agent, team member, or assistant. Triggers on phrases like "hire", "add an agent", "I need help with X" (implying a new role), or "/hire".

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.8k · 4 current installs · 4 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description match the SKILL.md instructions: the skill guides an interview, picks a model, and generates agent identity files under agents/<name>/. All required actions (model discovery, role mapping, file generation) are appropriate for a hiring/setup wizard.
Instruction Scope
Instructions are largely scoped to interviewing the user, choosing a model via `openclaw models list` or the gateway config, and creating files under agents/<name>/. Potentially sensitive steps: (1) 'check the gateway config' may read platform configuration; (2) creating an optional cron job to trigger reviews implies creating scheduled system tasks (unclear where/how). The skill does not ask for unrelated files, env vars, or credentials.
Install Mechanism
Instruction-only skill with no install spec or code files — nothing is downloaded or written at install time. This is the lowest install risk.
Credentials
No environment variables, credentials, or external tokens are requested. The operations described (model discovery, directory creation, symlinks) are proportional to the stated purpose.
Persistence & Privilege
always:false and autonomous invocation allowed (platform default). The skill writes files into agents/<name>/ and suggests symlinking shared USER.md and MEMORY.md; it also optionally creates cron jobs for periodic reviews. Those are normal for a generator but do create persistent artifacts and system-level scheduling changes if executed — confirm where jobs will be created and what filesystem paths are used.
Assessment
This skill appears coherent for creating and onboarding an AI agent. Before installing or running it, confirm: (1) where it will write files (agents/<name>/) and whether those paths and any symlink targets (../../USER.md, ../../MEMORY.md) are acceptable and don't expose sensitive data; (2) what 'check the gateway config' will read — ensure it won't disclose platform secrets; (3) how the optional periodic reviews are implemented (system crontab vs. platform scheduler) and whether you want the skill to create scheduled system jobs. If you want to be cautious, run it in a sandboxed workspace, review the generated files before applying any symlinks, and decline the cron-job option or implement scheduling yourself.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.1
Tags: agents · hiring · latest · onboarding · team · workflow

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

hire

Set up a new AI team member through a guided conversation. Not a config generator - a hiring process.

When to Use

User says something like:

  • "I want to hire a new agent"
  • "I need help with X" (where X implies a new agent role)
  • "Let's add someone to the team"
  • /hire

The Interview

3 core questions, asked one at a time:

Q1: "What do you need help with?" Let them describe the problem, not a job title. "I'm drowning in code reviews" beats "I need a code reviewer."

  • Listen for: scope, implied autonomy level, implied tools needed

Q2: "What's their personality? Formal, casual, blunt, cautious, creative?" Or frame it as: "If this were a human colleague, what would they be like?"

  • Listen for: communication style, vibe, how they interact

Q3: "What should they never do?" The red lines. This is where trust gets defined.

  • Listen for: boundaries, safety constraints, access limits

Q4: Dynamic (optional)

After Q1-Q3, assess whether anything is ambiguous or needs clarification. If so, ask ONE follow-up question tailored to what's unclear. Examples:

  • "You mentioned monitoring - should they alert you immediately or batch updates?"
  • "They'll need access to your codebase - any repos that are off-limits?"
  • "You said 'casual' - are we talking friendly-professional or meme-level casual?"

If Q1-Q3 were clear enough, skip Q4 entirely.

Summary Card

After the interview, present a summary:

🎯 Role: [one-line description]
🧠 Name: [suggested name from naming taxonomy]
🤖 Model: [selected model] ([tier])
⚡ Personality: [2-3 word vibe]
🔧 Tools: [inferred from conversation]
🚫 Boundaries: [key red lines]
🤝 Autonomy: [inferred level: high/medium/low]

Then ask: "Want to tweak anything, or are we good?"

Model Selection

Before finalizing, select an appropriate model for the agent.

Step 1: Discover available models

Run `openclaw models list` or check the gateway config to see what's configured.

Step 2: Categorize by tier

Map discovered models to capability tiers:

| Tier | Models (examples) | Best for |
| --- | --- | --- |
| reasoning | claude-opus-*, gpt-5, gpt-4o, deepseek-r1 | Strategy, advisory, complex analysis, architecture |
| balanced | claude-sonnet-*, gpt-4-turbo, gpt-4o-mini | Research, writing, general tasks |
| fast | claude-haiku-*, gpt-3.5, local/ollama | High volume, simple tasks, drafts |
| code | codex-*, claude-sonnet-*, deepseek-coder | Coding, refactoring, tests |

Use pattern matching on model names - don't hardcode specific versions.
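A minimal shell sketch of such pattern matching (the patterns and the `model_tier` helper are illustrative, not part of the skill; patterns are matched in order so more specific families win, and a real matcher would need finer rules — e.g. the table above puts gpt-4o in the reasoning tier but gpt-4o-mini in balanced):

```shell
# Hypothetical tier matcher: map a model name to a capability tier by
# substring pattern. Order matters - specific families (codex, haiku)
# are checked before the broad gpt-4/sonnet fallback.
model_tier() {
  case "$1" in
    *opus*|*gpt-5*|*deepseek-r1*)   echo reasoning ;;
    *codex*|*deepseek-coder*)       echo code ;;
    *haiku*|*gpt-3.5*|*ollama*)     echo fast ;;
    *sonnet*|*gpt-4*)               echo balanced ;;
    *)                              echo balanced ;;  # safe default
  esac
}
```

For example, `model_tier claude-3-haiku` prints `fast`.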

Step 3: Match role to tier

Based on the interview:

  • Heavy reasoning/advisory/strategy → reasoning tier
  • Research/writing/creative → balanced tier
  • Code-focused → code tier (or balanced if not available)
  • High-volume/monitoring → fast tier

Step 4: Select and confirm

Pick the best available model for the role. In the summary card, add:

🤖 Model: [selected model] ([tier] - [brief reason])

If multiple good options exist or you're unsure, ask: "For a [role type] role, I'd suggest [model] (good balance of capability and cost). Or [alternative] if you want [deeper reasoning / faster responses / lower cost]. Preference?"

Notes

  • Don't assume any specific provider - work with what's available
  • Cheaper is better when capability is sufficient
  • The user's default model isn't always right for every agent
  • If only one model is available, use it and note it in the summary

Optional Extras

After the summary is confirmed, offer:

  1. "Want to set up periodic performance reviews?"

    • If yes: ask preferred frequency (weekly, biweekly, monthly)
    • Create a cron job that triggers a review conversation
    • Review covers: what went well, what's not working, scope/permission adjustments
    • At the end of each review, ask: "Want to keep this schedule, change frequency, or stop reviews?"
  2. Onboarding assignment (if relevant to the role)

    • Suggest a small first task to test the new agent
    • Something real but low-stakes, so the user can see them in action
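If the cron option is taken, the schedule lines might look like the following crontab fragment (a sketch only — the review command depends on how the platform starts a conversation, so it is left as a placeholder rather than a guessed command; note cron has no true "every two weeks", so the 1st/15th is a common approximation):

```
# m h dom mon dow   command
0 9 * * 1      <review-command>   # weekly, Mondays at 09:00
0 9 1,15 * *   <review-command>   # biweekly approximation: 1st and 15th
0 9 1 * *      <review-command>   # monthly, 1st of the month
```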

What to Generate

Create an agent directory at agents/<name>/ with:

Always unique (generated fresh):

  • AGENTS.md - Role definition, responsibilities, operational rules, what they do freely vs ask first
  • IDENTITY.md - Name, emoji, creature type, vibe, core principles

Start from template, customize based on interview:

  • SOUL.md - Base from workspace SOUL.md template, customize vibe/boundaries sections
  • TOOLS.md - Populated with inferred tools and access notes
  • HEARTBEAT.md - Empty or with initial periodic tasks if relevant to role

Symlink to shared (default, opinionated):

  • USER.md → ../../USER.md (they need to know who they work for)
  • MEMORY.md → ../../MEMORY.md (shared team context)

Mention to the user: "I've linked USER.md and MEMORY.md so they know who you are and share team context. You can change this later if you want them more isolated."
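The generation and symlink steps above can be sketched in shell (the agent name is illustrative, and file contents come from the interview, so only empty placeholders are created here):

```shell
# Work in a scratch directory for this sketch; in practice this runs
# from the workspace root.
cd "$(mktemp -d)"
name="scout"   # illustrative name

mkdir -p "agents/$name"

# Always unique (generated fresh from the interview)
touch "agents/$name/AGENTS.md" "agents/$name/IDENTITY.md"

# Template-derived, then customized
touch "agents/$name/SOUL.md" "agents/$name/TOOLS.md" "agents/$name/HEARTBEAT.md"

# Opinionated defaults: share user and team context via symlinks
ln -s ../../USER.md   "agents/$name/USER.md"
ln -s ../../MEMORY.md "agents/$name/MEMORY.md"
```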

Naming

Use craft/role-based names. Check TOOLS.md for the full naming taxonomy:

  • Research: Scout, Observer, Surveyor
  • Writing: Scribe, Editor, Chronicler
  • Code: Smith, Artisan, Engineer
  • Analysis: Analyst, Assessor, Arbiter
  • Creative: Muse, Artisan
  • Oversight: Auditor, Reviewer, Warden

Check existing agents to avoid name conflicts. Suggest a name that fits the role, but let the user override.
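The conflict check itself can be a simple directory test (a sketch; `name_taken` is a hypothetical helper, assuming the agents/<name>/ layout described above):

```shell
# Hedged sketch: true if an agent directory with this name already exists.
name_taken() {
  [ -e "agents/$1" ]
}

# Illustrative setup: a scratch workspace with one existing agent.
cd "$(mktemp -d)"
mkdir -p agents/scout
```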

Team Awareness

Before generating, check agents/ for existing team members. Note:

  • Potential overlaps with existing roles
  • Gaps this new hire fills
  • How they'll interact with existing agents

Mention any relevant observations: "You already have Scout for research - this new role would focus specifically on..."

After Setup

  1. Tell the user what was created and where

  2. Automatically update the OpenClaw config via gateway config.patch (do not ask the user to run a manual command). You must:

    • Add the new agent entry to agents.list using this format:
      {
        "id": "<name>",
        "workspace": "/home/lars/clawd/agents/<name>",
        "model": "<selected-model>"
      }
      
    • Add the new agent ID to the main agent's subagents.allowAgents array
    • Preserve all existing agents and fields (arrays replace on patch)

    Required flow:

    1. Fetch config + hash
      openclaw gateway call config.get --params '{}'
      
    2. Build the updated agents.list array (existing entries + new agent) and update the main agent's subagents.allowAgents (existing list + new id).
    3. Apply with config.patch:
      openclaw gateway call config.patch --params '{
        "raw": "{\n agents: {\n  list: [ /* full list with new agent + updated main allowAgents */ ]\n }\n}\n",
        "baseHash": "<hash-from-config.get>",
        "restartDelayMs": 1000
      }'
      
  3. If periodic reviews were requested, confirm the cron schedule

  4. Update any team roster if one exists
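The array merge in step 2 can be done with jq, for example (a sketch: the entry values and the surrounding `config.get`/`config.patch` payload shapes are assumptions based on the commands above, not a verified schema — the point is only that the full list, old entries plus new, must be sent back, since arrays replace on patch):

```shell
# Append the new agent to the existing agents.list. Because arrays
# replace on patch, never send just the new entry.
existing='[{"id":"main","workspace":"/ws/main","model":"m1"}]'
new_agent='{"id":"scout","workspace":"/ws/scout","model":"m2"}'

updated=$(jq -c --argjson a "$new_agent" '. + [$a]' <<<"$existing")
echo "$updated"
```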

Important

  • This is a CONVERSATION, not a form. Be natural.
  • Infer as much as possible from context. Don't ask what you can figure out.
  • The user might not know what they want exactly. Help them figure it out.
  • Keep the whole process under 5 minutes for the simple case.

Files

1 total
