DeepSeek — DeepSeek-V3, DeepSeek-R1, DeepSeek-Coder on Your Local Devices
v1.0.1
DeepSeek models on your local fleet — DeepSeek-V3, DeepSeek-V3.2, DeepSeek-R1, DeepSeek-Coder routed across multiple devices via Ollama Herd. 7-signal scorin...
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · medium confidence
Purpose & Capability
Name/description (running DeepSeek via an Ollama Herd router) align with the runtime instructions: installing ollama-herd, running herd/herd-node, and using ollama pull to fetch models. Declared binaries (curl/wget, optional python/pip) make sense for interacting with local HTTP endpoints and installing the Python package.
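Once the router is running, calls go to a local HTTP endpoint. As a minimal sketch, the request could be built like this — the `/api/generate` path and JSON body follow Ollama's own API, and port 11435 comes from the skill's examples; whether ollama-herd exposes exactly this surface is an assumption to verify against its docs:

```python
import json

def build_generate_request(model, prompt, host="http://localhost:11435"):
    """Build an Ollama-style generate request for the herd router.

    The endpoint shape (POST /api/generate with model/prompt JSON) mirrors
    Ollama's documented API; ollama-herd is assumed to proxy the same format.
    """
    url = f"{host}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body
```

The returned `url` and `body` can then be sent with curl, urllib, or any HTTP client.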
Instruction Scope
SKILL.md contains only setup and usage steps for a local fleet router, plus examples showing how to call localhost endpoints. It does not instruct the agent to read or exfiltrate unrelated system files or environment variables; it even warns not to delete or edit ~/.fleet-manager. Sample code points at localhost (http://localhost:11435).
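When those localhost endpoints stream, Ollama emits newline-delimited JSON where each chunk carries a partial "response" string and a "done" flag. A small sketch for assembling that stream, assuming ollama-herd proxies the format unchanged:

```python
import json

def join_stream(lines):
    """Concatenate the text from Ollama-style streaming JSONL chunks.

    Each line is assumed to be a JSON object with a partial "response"
    string and a "done" boolean, matching Ollama's streaming format.
    """
    parts = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```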
Install Mechanism
Installation is via pip install ollama-herd (PyPI) and running local binaries (herd, herd-node). Using PyPI is a common approach but carries moderate supply‑chain risk — the package and its GitHub repo should be reviewed before installation.
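One concrete supply-chain check before installing: download the sdist or wheel, then compare its SHA-256 against the digest published on the PyPI project page. The path and expected digest below are placeholders, not values from this skill:

```python
import hashlib

def sha256_matches(path, expected_hex):
    """Return True if the file at `path` hashes to `expected_hex`.

    Compare against a digest obtained out of band (e.g. from PyPI)
    before running `pip install` on the artifact.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest() == expected_hex.lower()
```

pip can also enforce this automatically via hash-checking mode in a requirements file.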
Credentials
The skill declares no required environment variables or unrelated credentials. Metadata lists config paths under ~/.fleet-manager, which are consistent with a fleet manager and are not excessive for the stated purpose.
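Before trusting the tool, a read-only look at what it wrote under ~/.fleet-manager is cheap. The directory name comes from the skill's metadata; its internal layout is unknown here, so this sketch only lists entries:

```python
from pathlib import Path

def list_fleet_config(home=None):
    """List entries under <home>/.fleet-manager without modifying them.

    Returns an empty list if the directory does not exist yet.
    """
    base = Path(home or Path.home()) / ".fleet-manager"
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.iterdir())
```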
Persistence & Privilege
No 'always' privilege requested; the skill is user‑invocable only. It does not request writing to other skills' configs or system‑wide settings in the instructions.
Assessment
This skill appears to be what it claims: a guide to running DeepSeek models locally via an Ollama Herd router. Before installing, verify the ollama-herd PyPI package and its GitHub repository (review code, recent activity, and maintainers). Be prepared for large downloads and heavy disk/RAM usage when pulling models. Run the installation on a trusted machine or in an isolated environment, and inspect the ~/.fleet-manager directory and any created services before granting broader network access. If you need higher assurance, review the package source or run it in a VM/container first.
Like a lobster shell, security has layers — review code before you run it.
Tags: apple-silicon · code-generation · deepseek · deepseek-coder · deepseek-coder-v2 · deepseek-r1 · deepseek-v3 · deepseek-v3.2 · latest · local-llm · ollama · reasoning
Runtime requirements
Brain: Clawdis
OS: macOS · Linux
Binaries (any): curl, wget
