Deepseek Deepseek Coder

v1.0.3

DeepSeek DeepSeek-Coder — run DeepSeek-V3, DeepSeek-R1, DeepSeek-Coder across your local fleet. 7-signal scoring routes every request to the best device. Cro...

by Twin Geeks (@twinsgeeks)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for twinsgeeks/deepseek-deepseek-coder.

Prompt preview: Install & Setup
Install the skill "Deepseek Deepseek Coder" (twinsgeeks/deepseek-deepseek-coder) from ClawHub.
Skill page: https://clawhub.ai/twinsgeeks/deepseek-deepseek-coder
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install deepseek-deepseek-coder

ClawHub CLI


npx clawhub@latest install deepseek-deepseek-coder
Security Scan
VirusTotal: Pending
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description, examples (curl/OpenAI SDK), and required tools (curl/wget, optional python/pip) all align with running a local fleet router and calling a localhost Ollama-compatible API. The referenced config paths (~/.fleet-manager/latency.db, ~/.fleet-manager/logs/herd.jsonl) are consistent with a fleet manager's state and logs.
Instruction Scope
SKILL.md instructs installing 'ollama-herd' and running local processes (herd, herd-node) and making requests to localhost:11435. It does not instruct reading unrelated system files, exporting secrets, or sending data to external endpoints other than pulling models on demand (which the docs say requires confirmation).
Install Mechanism
This is instruction-only with no install spec; the docs instruct users to 'pip install ollama-herd'. That is coherent but introduces typical supply-chain risk because installing a PyPI package runs third-party code on your machine and may trigger on-demand model downloads. The SKILL itself does not include or pin any binaries.
Credentials
No environment variables or credentials are requested. Example code uses a localhost base_url and sets api_key to 'not-needed'. There are no unexpected credential requests in the SKILL.md or metadata.
Persistence & Privilege
always is false (not force-included). The skill does not request elevated platform privileges or attempt to modify other skills' configuration. Autonomous invocation is allowed by default, which is normal; nothing else indicates persistent privileged presence.
Assessment
This skill is internally consistent with its purpose, but before installing:

1. Verify the PyPI package (ollama-herd) and the GitHub repo: review recent commits, maintainer identity, and open issues.
2. Install in a virtualenv or an isolated VM if you are unsure.
3. Be prepared for very large model downloads; ensure you have the disk space and bandwidth.
4. Confirm that model downloads prompt you before proceeding (the docs claim confirmation is required).
5. Check which filesystem paths the herd service writes to (~/.fleet-manager/...) and restrict permissions if needed.
6. Consider running the router behind a firewall or on a localhost-only interface to avoid exposing models to the network.

If you want higher confidence, request the actual repository code or a pinned package artifact for review before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Brain: Clawdis
OS: macOS · Linux · Windows
Bin (any of): curl, wget
Latest: vk972jpg12519djm4pq9zqqy6tx844d7y
135 downloads · 2 stars · 4 versions
Updated 3w ago
v1.0.3 · MIT-0 · macOS, Linux, Windows

DeepSeek — Run DeepSeek Models Across Your Local Fleet

Run DeepSeek-V3, DeepSeek-R1, and DeepSeek-Coder on your own hardware. The fleet router picks the best device for every request — no cloud API needed, zero per-token costs, all data stays on your machines.

Supported DeepSeek models

| Model | Parameters | Ollama name | Best for |
| --- | --- | --- | --- |
| DeepSeek-V3 | 671B MoE (37B active) | deepseek-v3 | General — matches GPT-4o on most benchmarks |
| DeepSeek-V3.1 | 671B MoE | deepseek-v3.1 | Hybrid thinking/non-thinking modes |
| DeepSeek-V3.2 | 671B MoE | deepseek-v3.2 | Improved reasoning + agent performance |
| DeepSeek-R1 | 1.5B–671B | deepseek-r1 | Reasoning — approaches o3 and Gemini 2.5 Pro |
| DeepSeek-Coder | 1.3B–33B | deepseek-coder | Code generation (87% code, 13% NL training) |
| DeepSeek-Coder-V2 | 236B MoE (21B active) | deepseek-coder-v2 | Code — matches GPT-4 Turbo on code tasks |

Setup

pip install ollama-herd
herd              # start the router (port 11435)
herd-node         # run on each machine

Package: ollama-herd | Repo: github.com/geeks-accelerator/ollama-herd

Models are pulled on demand: with the auto_pull setting enabled, the router pulls a model automatically when a request arrives for one not yet on any node; otherwise, pull manually via the dashboard. No models are downloaded during installation.
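
To sanity-check the router before sending real traffic, you can list the models currently available on the fleet. This is a minimal sketch assuming ollama-herd proxies Ollama's standard GET /api/tags endpoint on port 11435:

import json
import urllib.request

# List models known to the router; Ollama's /api/tags returns
# {"models": [{"name": ..., "size": ...}, ...]}.
with urllib.request.urlopen("http://localhost:11435/api/tags", timeout=5) as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model.get("name"), model.get("size", "?"), "bytes")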

Use DeepSeek through the fleet

OpenAI SDK

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")

# DeepSeek-R1 for reasoning
response = client.chat.completions.create(
    model="deepseek-r1:70b",
    messages=[{"role": "user", "content": "Prove that there are infinitely many primes"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

DeepSeek-Coder for code

response = client.chat.completions.create(
    model="deepseek-coder-v2:16b",
    messages=[{"role": "user", "content": "Write a Redis cache decorator in Python"}],
)
print(response.choices[0].message.content)

Ollama API

# DeepSeek-V3 general chat
curl http://localhost:11435/api/chat -d '{
  "model": "deepseek-v3",
  "messages": [{"role": "user", "content": "Explain quantum computing"}],
  "stream": false
}'

# DeepSeek-R1 reasoning
curl http://localhost:11435/api/chat -d '{
  "model": "deepseek-r1:70b",
  "messages": [{"role": "user", "content": "Solve this step by step: ..."}],
  "stream": false
}'
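
The curl examples above disable streaming. Ollama's native /api/chat streams newline-delimited JSON by default; assuming the herd router preserves that behavior, a minimal Python client looks like this:

import json
import urllib.request

payload = json.dumps({
    "model": "deepseek-r1:70b",
    "messages": [{"role": "user", "content": "Solve step by step: 17 * 24"}],
    "stream": True,
}).encode()

req = urllib.request.Request(
    "http://localhost:11435/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:  # one JSON object per line while streaming
        chunk = json.loads(line)
        print(chunk.get("message", {}).get("content", ""), end="", flush=True)
        if chunk.get("done"):
            break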

Hardware recommendations (optional — choose models that fit your RAM)

Cross-platform: These are example configurations. Any device (Mac, Linux, Windows) with equivalent RAM works. The fleet router runs on all platforms.

DeepSeek offers models at every size. Pick the one that fits your available memory — smaller models work great for most tasks:

| Model | Min RAM | Recommended hardware |
| --- | --- | --- |
| deepseek-r1:1.5b | 4GB | Any Mac |
| deepseek-r1:7b | 8GB | Mac Mini M4 (16GB) |
| deepseek-r1:14b | 12GB | Mac Mini M4 (24GB) |
| deepseek-r1:32b | 24GB | Mac Mini M4 Pro (48GB) |
| deepseek-r1:70b | 48GB | Mac Studio M4 Max (128GB) |
| deepseek-coder-v2:16b | 12GB | Mac Mini M4 (24GB) |
| deepseek-v3 | 256GB+ | Mac Studio M3 Ultra (512GB) |

The fleet router automatically sends requests to the machine where the model is loaded — no manual routing needed.
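
If you want to automate that choice, a small helper can map the table above to the machine's memory. A sketch using the third-party psutil package (pip install psutil); the function name and the total-RAM heuristic are ours, not part of the skill:

import psutil

# Minimum RAM per deepseek-r1 variant, taken from the table above (GB).
MIN_RAM_GB = {
    "deepseek-r1:1.5b": 4,
    "deepseek-r1:7b": 8,
    "deepseek-r1:14b": 12,
    "deepseek-r1:32b": 24,
    "deepseek-r1:70b": 48,
}

def largest_fitting_r1() -> str | None:
    """Largest deepseek-r1 variant whose minimum RAM fits this machine."""
    total_gb = psutil.virtual_memory().total / 1024**3
    fitting = [m for m, need in MIN_RAM_GB.items() if need <= total_gb]
    return max(fitting, key=MIN_RAM_GB.get) if fitting else None

print(largest_fitting_r1())  # e.g. 'deepseek-r1:14b' on a 24GB machine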

Why run DeepSeek locally

  • Zero cost — DeepSeek API charges per token. Local is free after hardware.
  • Privacy — code and business data never leave your network.
  • No rate limits — DeepSeek API throttles during peak hours. Local has no throttle.
  • Availability — DeepSeek API has had outages. Your hardware doesn't depend on their servers.
  • Fleet routing — multiple machines share the load. One busy? Request goes to the next.

Fleet features

  • 7-signal scoring — picks the optimal node for every request
  • Auto-retry — fails over to next best node transparently
  • VRAM-aware fallback — routes to a loaded model in the same category instead of cold-loading
  • Context protection — prevents expensive model reloads from num_ctx changes
  • Request tagging — track per-project DeepSeek usage (see the sketch below)
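
The docs don't specify how a tag is attached to a request. If the router reads it from an HTTP header, the OpenAI SDK can pass one through extra_headers. Note that the header name X-Herd-Tag below is purely a placeholder, not a documented API; check the ollama-herd docs for the real mechanism:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")

# HYPOTHETICAL header name: the skill does not document how tagging works.
response = client.chat.completions.create(
    model="deepseek-coder-v2:16b",
    messages=[{"role": "user", "content": "Add type hints to this function"}],
    extra_headers={"X-Herd-Tag": "project-alpha"},
)
print(response.choices[0].message.content)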

Also available on this fleet

Other LLM models

Llama 3.3, Qwen 3.5, Phi 4, Mistral, Gemma 3 — any Ollama model routes through the same endpoint.

Image generation

curl -o image.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model":"z-image-turbo","prompt":"a sunset","width":1024,"height":1024,"steps":4}'
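
The same call from Python, assuming the endpoint returns raw image bytes as the response body (which is what the curl -o flag above implies):

import json
import urllib.request

payload = json.dumps({
    "model": "z-image-turbo",
    "prompt": "a sunset",
    "width": 1024,
    "height": 1024,
    "steps": 4,
}).encode()

req = urllib.request.Request(
    "http://localhost:11435/api/generate-image",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# Write the raw response body straight to disk, mirroring curl -o image.png.
with urllib.request.urlopen(req) as resp, open("image.png", "wb") as f:
    f.write(resp.read())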

Speech-to-text

curl http://localhost:11435/api/transcribe -F "audio=@recording.wav"

Embeddings

curl http://localhost:11435/api/embeddings -d '{"model":"nomic-embed-text","prompt":"query"}'
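
A typical use of that endpoint is semantic similarity. A minimal sketch, assuming the router returns Ollama's standard {"embedding": [...]} response shape:

import json
import math
import urllib.request

def embed(text: str) -> list[float]:
    req = urllib.request.Request(
        "http://localhost:11435/api/embeddings",
        data=json.dumps({"model": "nomic-embed-text", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(embed("fleet router"), embed("load balancer")))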

Dashboard

http://localhost:11435/dashboard — monitor DeepSeek requests alongside all other models. Per-model latency, token throughput, health checks.

Full documentation

Agent Setup Guide

Guardrails

  • Model downloads require explicit user confirmation — DeepSeek models range from 1GB (1.5B) to 400GB+ (671B). Always confirm before pulling (a confirmation-gated sketch follows this list).
  • Model deletion requires explicit user confirmation — never remove models without asking.
  • Never delete or modify files in ~/.fleet-manager/.
  • If a DeepSeek model is too large for available memory, suggest a smaller variant (e.g., deepseek-r1:7b instead of :70b).
  • No models are downloaded automatically — all pulls are user-initiated or require opt-in via the auto_pull setting.
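
A minimal sketch of the first guardrail, assuming the router proxies Ollama's standard /api/pull endpoint (the confirmation prompt and function name are ours):

import json
import urllib.request

def confirmed_pull(model: str) -> None:
    """Pull a model only after explicit user confirmation."""
    answer = input(f"Pull '{model}'? Downloads can reach 400GB+ [y/N]: ")
    if answer.strip().lower() != "y":
        print("Skipped.")
        return
    req = urllib.request.Request(
        "http://localhost:11435/api/pull",
        data=json.dumps({"name": model, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp).get("status", "done"))

confirmed_pull("deepseek-r1:7b")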
