## Install

```bash
openclaw skills install claw-turbo
```

claw-turbo is a zero-latency, zero-ML skill routing middleware that sits between OpenClaw and your local LLM (Ollama). It intercepts known user commands (deploy, restart, print, check logs, etc.) with compiled regex patterns and executes skill scripts directly; no LLM inference is needed.
Simple commands get instant, perfect execution. Complex queries still go to your LLM.
```
User message → claw-turbo (regex match, <0.01ms)
├── MATCH    → execute script directly (0ms, 100% accurate)
└── NO MATCH → forward to LLM (normal processing)
```
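In essence, the fast path is just a list of compiled patterns tried in order. A minimal sketch of that decision (hypothetical names, not the package's actual API):

```python
import re

# Hypothetical route table: compiled pattern -> shell command template.
ROUTES = [
    (re.compile(r'restart\s+(?P<service>[\w-]+)'), 'systemctl restart {service}'),
]

def route(message: str):
    """Return a shell command for a matched message, or None to fall through to the LLM."""
    for pattern, template in ROUTES:
        m = pattern.search(message)
        if m:
            return template.format(**m.groupdict())
    return None  # no match: forward to the LLM as usual

print(route("restart nginx"))        # → systemctl restart nginx
print(route("what is the weather"))  # → None
```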
| Approach | Latency | Accuracy | Dependencies |
|---|---|---|---|
| claw-turbo | ~5 µs | 100% | PyYAML only |
| LLM routing (Gemma/Llama) | 2-10 s | ~80% | Ollama + VRAM |
| Semantic routing | 50-200 ms | ~95% | embedding model |
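The microsecond-scale figure is easy to reproduce with the standard `timeit` module (absolute numbers vary by machine; this is a quick sketch, not the project's benchmark harness):

```python
import re
import timeit

pattern = re.compile(r'deploy\s+(?P<service>[\w-]+)\s+(?:to\s+)?staging')
message = "deploy auth-service to staging"

# Time repeated matches against the pre-compiled pattern.
n = 100_000
total = timeit.timeit(lambda: pattern.search(message), number=n)
print(f"{total / n * 1e6:.2f} µs per match")
```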
Local LLMs are unreliable for simple, repetitive commands: at roughly 80% routing accuracy, about one in five commands is misrouted, and every one costs seconds of inference.
To install from source:

```bash
git clone https://github.com/jacobye2017-afk/claw-turbo.git
cd claw-turbo
pip install -e .
```
Define your routes in `routes.yaml`:

```yaml
routes:
  - name: deploy-staging
    description: "Deploy a service to staging"
    patterns:
      - 'deploy\s+(?P<service>[\w-]+)\s+(?:to\s+)?staging'
    command: 'bash /opt/scripts/deploy.sh {{service}} staging'
    response_template: "Deployed {{service}} to staging"

  - name: restart-service
    patterns:
      - 'restart\s+(?P<service>[\w-]+)'
    command: 'systemctl restart {{service}}'
    response_template: "Restarted {{service}}"
```
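A file like this could be loaded and dispatched roughly as follows. This is a sketch assuming PyYAML (the one listed dependency), not claw-turbo's actual internals; the helper names are invented for illustration:

```python
import re
import yaml

ROUTES_YAML = """
routes:
  - name: restart-service
    patterns:
      - 'restart\\s+(?P<service>[\\w-]+)'
    command: 'systemctl restart {{service}}'
    response_template: "Restarted {{service}}"
"""

def load_routes(text):
    """Parse the YAML and pre-compile every pattern once, at load time."""
    routes = []
    for r in yaml.safe_load(text)["routes"]:
        routes.append({**r, "compiled": [re.compile(p) for p in r["patterns"]]})
    return routes

def render(template, caps):
    # Replace {{name}} placeholders with regex captures.
    return re.sub(r'\{\{(\w+)\}\}', lambda m: caps[m.group(1)], template)

def match(routes, message):
    for r in routes:
        for pattern in r["compiled"]:
            m = pattern.search(message)
            if m:
                caps = m.groupdict()
                return r["name"], render(r["command"], caps), render(r["response_template"], caps)
    return None  # fall through to the LLM

print(match(load_routes(ROUTES_YAML), "restart nginx"))
# → ('restart-service', 'systemctl restart nginx', 'Restarted nginx')
```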
```bash
claw-turbo test "deploy auth-service to staging"
# MATCHED: deploy-staging
# Captures: {'service': 'auth-service'}
# Time: 4.8 µs
```
Start the proxy:

```bash
claw-turbo serve --port 11435
```

Then change OpenClaw's Ollama `baseUrl` to `http://127.0.0.1:11435`.
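If your OpenClaw config declares the Ollama endpoint as a `baseUrl` field, the change would look something like this (the key layout here is illustrative; match it to your actual config file):

```json
{
  "provider": "ollama",
  "baseUrl": "http://127.0.0.1:11435"
}
```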
| Command | Description |
|---|---|
| `claw-turbo serve [--port 11435]` | Start the HTTP proxy |
| `claw-turbo test "message"` | Test pattern matching against a message |
| `claw-turbo routes` | List all routes |
| `claw-turbo add-skill <path>` | Generate a route from a SKILL.md |