Skill Auto-Injection

Automatically match user tasks with available skills for better agent responses

Install

openclaw plugins install clawhub:skill-ai-inject

Version

v0.4.0 · OpenClaw 2026.5.x compatible


Overview

skill-auto-injection automatically matches user input against available skills (from local SKILL.md files) and injects the top matches into the agent's prompt context. It uses a two-tier cascade: L1 keyword match (zero cost, instant) → L2 embedding fallback (semantic, cross-language).

Key features:

  • L1 keyword match with LLM-extracted trigger keywords (no manual whitelist maintenance)
  • L2 embedding fallback for semantic matching across languages
  • Smart translation: skips translation when query already contains English characters
  • Ollama-only: no external API keys required
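The two-tier cascade can be sketched as follows. This is a minimal, hypothetical sketch of the matching flow; names such as `matchKeywords` and `matchSkills` are illustrative and are not the plugin's actual exports.

```typescript
// Illustrative skill shape; the plugin's real internal types may differ.
interface Skill {
  name: string;
  description: string;
  keywords: string[]; // LLM-extracted trigger keywords
}

// L1: zero-cost keyword match. Extract lowercase Latin tokens from the prompt
// and count hits against the skill's trigger keywords.
function matchKeywords(prompt: string, skill: Skill, minKeywordMatch = 1): boolean {
  const tokens = new Set(prompt.toLowerCase().match(/[a-z]+/g) ?? []);
  const hits = skill.keywords.filter(k => tokens.has(k.toLowerCase())).length;
  return hits >= minKeywordMatch;
}

// Cascade: try L1 first; only fall back to the L2 embedding stage on a miss.
async function matchSkills(
  prompt: string,
  skills: Skill[],
  embedFallback: (prompt: string, skills: Skill[]) => Promise<Skill[]>,
): Promise<Skill[]> {
  const l1 = skills.filter(s => matchKeywords(prompt, s));
  if (l1.length > 0) return l1;            // L1 hit: inject immediately
  return embedFallback(prompt, skills);    // L2: semantic embedding match
}
```

Because L1 is a pure token lookup, it costs nothing per request; the embedding stage only runs when no keyword hits are found.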

Prerequisites

Ollama models

ollama pull bge-m3          # for embedding similarity
ollama pull qwen2.5:7b      # for translation and keyword extraction

Installation

1. Build plugin

cd ~/projects/skill-auto-injection
npm install
npm run build

2. Link plugin to OpenClaw

openclaw plugins install --link .

3. Configure openclaw.json

{
  "plugins": {
    "allow": ["memory-recall", "skill-auto-injection", "policy-layer", "minimax", "browser"],
    "bundledDiscovery": "allowlist",
    "entries": {
      "skill-auto-injection": {
        "enabled": true,
        "config": {
          "embedding": {
            "baseURL": "http://localhost:11434",
            "model": "bge-m3",
            "dimensions": 1024
          },
          "translate": {
            "enabled": true,
            "model": "qwen2.5:7b"
          },
          "matching": {
            "skillMatchThreshold": 0.6,
            "maxSkills": 3,
            "minKeywordMatch": 1,
            "l2CandidateCount": 20
          },
          "keyword": {
            "enabled": true,
            "model": "qwen2.5:7b"
          }
        }
      }
    }
  }
}

4. Restart gateway

openclaw gateway restart

5. Verify

openclaw plugins inspect skill-auto-injection
# Should show: Status: loaded

Configuration

Config Parameters

| Config | Description | Default |
| --- | --- | --- |
| enabled | Enable plugin | true |
| embedding.baseURL | Embedding API URL | http://localhost:11434 |
| embedding.model | Embedding model | bge-m3 |
| embedding.dimensions | Vector dimensions | 1024 |
| translate.enabled | Enable translation | true |
| translate.model | Translation model | qwen2.5:7b |
| matching.skillMatchThreshold | Skill match threshold (0-1) | 0.6 |
| matching.maxSkills | Max skills to inject | 3 |
| matching.minKeywordMatch | Min keyword hits for L1 match | 1 |
| matching.l2CandidateCount | Max candidates for L2 embedding stage | 20 |
| keyword.enabled | Enable L1 keyword matching | true |
| keyword.model | LLM model for keyword extraction | qwen2.5:7b |
| keyword.baseURL | Override baseURL for keyword LLM | null (uses embedding.baseURL) |

Note: All LLM operations use Ollama locally — no external API keys required.
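As a sketch of how the embedding config maps onto a local Ollama call, the helper below builds a request against Ollama's `/api/embeddings` endpoint. `buildEmbeddingRequest` is an illustrative name, not part of the plugin's API; it assumes the standard Ollama request shape (`model` plus `prompt`).

```typescript
// Subset of the plugin's embedding config used for the request.
interface EmbeddingConfig {
  baseURL: string; // e.g. "http://localhost:11434"
  model: string;   // e.g. "bge-m3"
}

// Build (but do not send) a fetch request for one embedding.
function buildEmbeddingRequest(cfg: EmbeddingConfig, prompt: string) {
  return {
    url: `${cfg.baseURL}/api/embeddings`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: cfg.model, prompt }),
    },
  };
}

// Usage (requires a running Ollama instance):
//   const { url, init } = buildEmbeddingRequest(
//     { baseURL: "http://localhost:11434", model: "bge-m3" }, "deploy the app");
//   const { embedding } = await (await fetch(url, init)).json();
```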


OpenClaw Configuration Notes

Plugin ID Mismatch

Problem: Plugin ID mismatch (the config uses "skill-ai-inject" while the export uses "skill-auto-injection")

Solution: The plugin ID must match exactly between openclaw.plugin.json and the exported JavaScript object. Always use skill-auto-injection as the canonical ID (matching the GitHub repo name). Update openclaw.json entries accordingly:

"entries": {
  "skill-auto-injection": { ... }
}

bundledDiscovery: "allowlist"

When bundledDiscovery is set to "allowlist" (default), the plugins.allow list filters ALL plugins. Make sure skill-auto-injection is listed:

"plugins": {
  "allow": ["skill-auto-injection", ...]
}

policy-layer AllowPromptInjection

If policy-layer blocks prompt injection, skill matching results won't appear in context:

"entries": {
  "policy-layer": {
    "enabled": true,
    "config": {
      "hooks": {
        "allowPromptInjection": true
      }
    }
  }
}

Workflow

User Message → before_prompt_build hook
  │
  ├── L1: Keyword Match (zero cost)
  │     Extract English tokens from prompt
  │     Match against skill trigger keywords (hit ratio scoring)
  │     → HIT → Inject matched skills immediately
  │
  └── L2: Embedding Fallback (only if L1 misses)
        Query has English chars? → Skip translation
        Otherwise → Translate to English
        Get embedding → Cosine similarity → Filter by threshold
        → Inject top-N matched skills

Keywords are extracted by LLM when skills are loaded (cached for 5 min) — no manual maintenance required.
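The L2 stage above can be sketched as a cosine-similarity ranking over precomputed skill embeddings, filtered by `matching.skillMatchThreshold` and capped at `matching.maxSkills`. The function names here are illustrative, not the plugin's actual internals.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank skills by similarity to the query embedding, keep those above the
// threshold, and return the top-N names.
function topSkills(
  query: number[],
  skills: { name: string; embedding: number[] }[],
  threshold = 0.6, // matching.skillMatchThreshold
  maxSkills = 3,   // matching.maxSkills
): string[] {
  return skills
    .map(s => ({ name: s.name, score: cosine(query, s.embedding) }))
    .filter(s => s.score >= threshold)
    .sort((x, y) => y.score - x.score)
    .slice(0, maxSkills)
    .map(s => s.name);
}
```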


Skills Source

The plugin scans SKILL.md files from:

  1. ~/.openclaw/skills/ — Global skills
  2. ~/.openclaw/workspace/.openclaw/skills/ — Workspace skills

Note: Currently only scans local directories. OpenClaw bundled skills (acp-router, coding-agent, etc.) are not included.
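A hedged sketch of extracting skill metadata from a scanned file, assuming each SKILL.md carries a YAML-style frontmatter block with name and description fields (the exact SKILL.md format used by OpenClaw may differ; `parseSkillMd` is an illustrative helper).

```typescript
// Parse name/description out of a SKILL.md string. Returns null when no
// frontmatter block is present or the name field is missing.
function parseSkillMd(content: string): { name: string; description: string } | null {
  const fm = content.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return null;
  // Pull a single "key: value" line out of the frontmatter body.
  const field = (key: string) =>
    fm[1].match(new RegExp(`^${key}:\\s*(.+)$`, "m"))?.[1].trim() ?? "";
  const name = field("name");
  return name ? { name, description: field("description") } : null;
}
```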


Injection Format

When skills are matched, the plugin prepends the following to the prompt context:

[Skill Auto-Injection] The current conversation may involve these available skills:
- [skill-name]: skill description...

Please consider using relevant skills to fulfill the user's request if applicable.
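The injection block above can be assembled with a helper along these lines; `buildInjection` is an illustrative name, not the plugin's actual function.

```typescript
// Build the injected context block from the matched skills.
function buildInjection(skills: { name: string; description: string }[]): string {
  const lines = skills.map(s => `- [${s.name}]: ${s.description}`);
  return [
    "[Skill Auto-Injection] The current conversation may involve these available skills:",
    ...lines,
    "",
    "Please consider using relevant skills to fulfill the user's request if applicable.",
  ].join("\n");
}
```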

Debugging

# View plugin logs
openclaw logs 2>&1 | grep skill-auto-injection

# Check plugin status
openclaw plugins inspect skill-auto-injection

# List available skills
openclaw skills list

# Restart gateway
openclaw gateway restart

Known Issues

  • Only scans local skill directories; bundled OpenClaw skills are not included
  • No exclusion list for specific skills
  • No user feedback loop for learning from corrections

Version History

| Version | Date | Changes |
| --- | --- | --- |
| 0.1.0 | 2026-04-22 | Initial: embedding-based skill matching |
| 0.2.0 | 2026-04-22 | Add multi-provider translation (ollama/minimax/openai), optimize logging |
| 0.3.0 | 2026-04-25 | L1 keyword match (zero-cost) + L2 embedding cascade; LLM keyword extraction on skill load; skip translation for English queries |
| 0.4.0 | 2026-05-10 | Reverse L1 matching direction; switch to before_prompt_build hook; hit-ratio scoring; l2CandidateCount config |