Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Local Coding

v1.0.1

Local coding assistant — run DeepSeek-Coder, Codestral, StarCoder, and Qwen-Coder across your device fleet. Code generation, review, refactoring, and debugging.

by Twin Geeks (@twinsgeeks)
MIT-0
Security Scan

VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (local coding across a fleet) matches the instructions (start a router, run herd-node, expose an OpenAI-compatible local API). Requiring curl/wget and optionally python/pip is reasonable. However the SKILL.md metadata lists config paths under ~/.fleet-manager which implies the skill expects access to local fleet state; the registry summary earlier reported 'Required config paths: none' — this mismatch is noteworthy.
Instruction Scope
Runtime instructions instruct use of local endpoints that can return recent request traces and running-model state (e.g., /dashboard/api/traces, /api/ps). Those endpoints can contain user code and potentially secrets from prior requests. While accessing them is coherent for a fleet router, it raises clear privacy risk: the skill's operation includes reading sensitive artifacts (request traces/logs) from the fleet.
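The privacy risk above can be made concrete with a quick local check of trace contents for secret-shaped strings before leaving those endpoints open on a network. A minimal sketch; the function name and patterns are illustrative and far from what a real secret scanner covers:

```python
import re

# Illustrative patterns only -- real scanners use much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def find_secrets(trace_text):
    """Return any secret-looking substrings found in a request trace."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(trace_text))
    return hits
```

Running something like this over the output of /dashboard/api/traces gives a quick sense of whether prior requests leaked credentials into the fleet's logs.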
Install Mechanism
This is an instruction-only skill (no install spec). The docs show users should run 'pip install ollama-herd' (PyPI). Installing from PyPI is common, but pip-installed packages execute arbitrary code and should be audited before installation; no signed release or pinned URL is provided in the SKILL.md.
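Before installing, the wheel can be fetched without executing it (pip download ollama-herd --no-deps) and skimmed for code paths worth a manual look. A rough sketch, assuming a standard wheel layout; the pattern list is illustrative, not a real malware scan:

```python
import re
import zipfile

# Patterns worth a manual look -- matches are leads, not verdicts.
SUSPICIOUS = re.compile(r"os\.system|subprocess|eval\(|exec\(|base64\.b64decode")

def audit_wheel(path):
    """Map each .py file in a wheel to the lines matching the patterns above."""
    findings = {}
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if not name.endswith(".py"):
                continue
            text = zf.read(name).decode("utf-8", errors="replace")
            hits = [line.strip() for line in text.splitlines()
                    if SUSPICIOUS.search(line)]
            if hits:
                findings[name] = hits
    return findings
```

Anything this flags still needs human review; many legitimate packages use subprocess or eval for good reasons.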
Credentials
The skill declares no required environment variables or credentials, which is good. However SKILL.md metadata lists configPaths (e.g., ~/.fleet-manager/latency.db and ~/.fleet-manager/logs/herd.jsonl) that expose local logs/state. Requesting access to logs/traces is proportionate to a fleet manager but also grants access to potentially sensitive user code; the manifest/registry inconsistency about config paths increases uncertainty.
Persistence & Privilege
The skill does not ask for always:true or elevated persistent privileges. It's user-invocable and allows autonomous invocation (platform default). There is no install spec that writes to system-wide locations in the skill bundle itself.
What to consider before installing
This skill appears to implement a local fleet router for code models, which can legitimately read fleet status and traces — but those traces may contain snippets of user code or secrets. Before installing or running:

1. Inspect the upstream repository (https://github.com/geeks-accelerator/ollama-herd) and review the ollama-herd PyPI package contents and recent commits; prefer a pinned release or a source you trust.
2. Verify what data the local endpoints (/dashboard/api/traces, /api/ps, log paths) expose and who can access them on your network; restrict access if they contain sensitive code.
3. Confirm the listed config paths are correct and intended; the SKILL.md metadata mentioning ~/.fleet-manager conflicts with the registry summary.
4. If you will pip-install the package on multiple devices, audit it and run it in a controlled environment first.

If you need a cleaner safety posture, ask for a version that documents exactly which local files/endpoints it reads, and add network access controls for the router.

Like a lobster shell, security has layers — review code before you run it.

Tags: aider · apple-silicon · cline · code-generation · codestral · coding-assistant · continue-dev · deepseek-coder · ide · latest · local-code · ollama · qwen-coder · starcoder

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Platform: Clawdis
OS: macOS · Linux · Windows (any architecture)
Binaries: curl, wget

SKILL.md

Local Coding Assistant — Code Models Across Your Fleet

Run the best open-source coding models on your own hardware. DeepSeek-Coder, Codestral, StarCoder, and Qwen-Coder routed across your devices — the fleet picks the best machine for every code generation request.

Your code never leaves your network. No GitHub Copilot subscription, no cloud API costs.

Coding models available

| Model | Parameters | Ollama name | Strengths |
|---|---|---|---|
| Codestral | 22B | codestral | 80+ languages, fill-in-the-middle, Mistral's code specialist |
| DeepSeek-Coder-V2 | 236B MoE (21B active) | deepseek-coder-v2 | Matches GPT-4 Turbo on code tasks |
| DeepSeek-Coder | 6.7B, 33B | deepseek-coder:33b | Purpose-built for code (87% code training data) |
| Qwen2.5-Coder | 7B, 32B | qwen2.5-coder:32b | Strong multi-language code generation |
| StarCoder2 | 3B, 7B, 15B | starcoder2:15b | Trained on The Stack v2, 600+ languages |
| CodeGemma | 7B | codegemma | Google's code-focused Gemma variant |

Quick start

pip install ollama-herd    # PyPI: https://pypi.org/project/ollama-herd/
herd                       # start the router (port 11435)
herd-node                  # run on each device — finds the router automatically

No models are downloaded during installation. All pulls require user confirmation.
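Since herd starts the router asynchronously, a script may want to wait until it answers before pointing clients at it. A sketch with the probe injected so the retry logic is testable; in practice the probe could issue an HTTP GET against http://localhost:11435/api/tags (the helper name is mine, not part of ollama-herd):

```python
import time

def wait_for_router(probe, attempts=10, delay=1.0):
    """Retry probe() until it returns True or attempts run out.

    probe is any zero-argument callable returning True when the router
    responds, e.g. a wrapped HTTP GET to the /api/tags endpoint.
    """
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```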

Code generation

Write new code

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="codestral",
    messages=[{"role": "user", "content": "Write a thread-safe LRU cache in Python with TTL support"}],
)
print(response.choices[0].message.content)

Code review

curl http://localhost:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-coder-v2:16b",
    "messages": [{"role": "user", "content": "Review this code for bugs and security issues:\n\n```python\ndef process_payment(amount, card_number):\n    ...\n```"}]
  }'

Refactoring

curl http://localhost:11435/api/chat -d '{
  "model": "qwen2.5-coder:32b",
  "messages": [{"role": "user", "content": "Refactor this to use async/await: ..."}],
  "stream": false
}'
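Both curl examples above send the same JSON shape. If you script against the router from Python instead, a small helper keeps payloads consistent; the helper is my own sketch, not part of ollama-herd:

```python
import json

def build_chat_payload(model, prompt, stream=False):
    """Assemble the request body the curl examples POST to the router."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Serialized the same way curl sends it:
body = json.dumps(build_chat_payload("qwen2.5-coder:32b",
                                     "Refactor this to use async/await: ..."))
```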

Works with your IDE tools

The fleet exposes an OpenAI-compatible API at http://localhost:11435/v1. Point any coding tool at it:

| Tool | Config |
|---|---|
| Aider | aider --openai-api-base http://localhost:11435/v1 --model codestral |
| Continue.dev | Set API base to http://localhost:11435/v1 in VS Code settings |
| Cline | Set provider to OpenAI-compatible, base URL http://localhost:11435/v1 |
| Open WebUI | Set Ollama URL to http://localhost:11435 |
| LangChain | ChatOpenAI(base_url="http://localhost:11435/v1", model="codestral") |

Pick the right model for your RAM

Cross-platform: These are example configurations. Any device (Mac, Linux, Windows) with equivalent RAM works.

| Device | RAM | Best coding model |
|---|---|---|
| MacBook Air (8GB) | 8GB | starcoder2:3b or deepseek-coder:6.7b |
| Mac Mini (16GB) | 16GB | codestral or starcoder2:15b |
| Mac Mini (32GB) | 32GB | qwen2.5-coder:32b or deepseek-coder:33b |
| Mac Studio (128GB) | 128GB | deepseek-coder-v2 — frontier code quality |
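When provisioning many devices by script, the table above can be encoded as a lookup. The thresholds are my own rough reading of the rows, not something the skill ships:

```python
# RAM thresholds transcribed from the sizing table; cutoffs are approximate.
RECOMMENDATIONS = [
    (128, "deepseek-coder-v2"),
    (32, "qwen2.5-coder:32b"),
    (16, "codestral"),
    (8, "deepseek-coder:6.7b"),
]

def pick_model(ram_gb):
    """Return the largest recommended coding model that fits the given RAM."""
    for threshold, model in RECOMMENDATIONS:
        if ram_gb >= threshold:
            return model
    return "starcoder2:3b"  # smallest option from the 8GB row
```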

Check what's running

# Models loaded in memory
curl -s http://localhost:11435/api/ps | python3 -m json.tool

# All available models
curl -s http://localhost:11435/api/tags | python3 -m json.tool

# Recent coding request traces
curl -s "http://localhost:11435/dashboard/api/traces?limit=5" | python3 -m json.tool
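If the router mirrors Ollama's /api/ps response shape ({"models": [{"name": ...}]}) — which the endpoint paths suggest but the docs do not state — the loaded-models check can be parsed like this:

```python
import json

def loaded_models(ps_json):
    """Pull model names out of a /api/ps response body.

    Assumes the router mirrors Ollama's {"models": [{"name": ...}]} shape.
    """
    return [m.get("name", "?") for m in json.loads(ps_json).get("models", [])]
```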

Also available on this fleet

General-purpose LLMs

Llama 3.3, Qwen 3.5, DeepSeek-R1, Mistral Large — for non-code tasks through the same endpoint.

Image generation

curl http://localhost:11435/api/generate-image \
  -d '{"model": "z-image-turbo", "prompt": "developer workspace illustration", "width": 512, "height": 512}'

Speech-to-text

curl http://localhost:11435/api/transcribe -F "file=@standup.wav" -F "model=qwen3-asr"

Full documentation

Guardrails

  • Model downloads require explicit user confirmation — coding models range from 2GB to 130GB+. Always confirm before pulling.
  • Model deletion requires explicit user confirmation.
  • Never delete or modify files in ~/.fleet-manager/.
  • No models are downloaded automatically — all pulls are user-initiated or require opt-in.
  • Your code stays local — no prompts or generated code leave your network.

Files

1 total