Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Windows AI

v1.0.0

Windows AI — run local AI on Windows with LLM inference, image generation, and embeddings. Windows AI server for Llama, Qwen, DeepSeek, Phi, Mistral. Turn Wi...

by Twin Geeks (@twinsgeeks)
License: MIT-0
Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (local Windows AI cluster, routing LLM requests across Windows machines) aligns with the instructions: pip install ollama-herd, running herd / herd-node, local OpenAI-compatible API endpoints, and local model/image/embedding examples. Required binaries (curl/wget, optional python3/pip/nvidia-smi) and config paths (~/.fleet-manager/*) are consistent with a fleet manager.
Instruction Scope
Instructions tell the user to open inbound TCP port 11435 and to run herd-node which 'joins the cluster automatically' — there is no mention of authentication, encryption, or how nodes are authorized. Automatic joining + opening an inbound port can let untrusted hosts connect or allow data to leave your machine if misconfigured. The SKILL.md also suggests environment changes and persistent services but does not describe access controls for the dashboard, node registration, or logs in ~/.fleet-manager.
Install Mechanism
The registry has no install spec (instruction-only), but SKILL.md explicitly instructs 'pip install ollama-herd'. That is a third-party PyPI install triggered by the user/agent; it is reasonably expected, but it carries the usual risks of installing and executing externally published packages. The skill does not provide a vetted release URL or hash, nor does it advise reviewing the package source before installing.
Credentials
The skill declares no required credentials or secrets. Example environment variables are service configuration (OLLAMA_KEEP_ALIVE, OLLAMA_MAX_LOADED_MODELS) rather than credentials. No unrelated API keys or secrets are requested.
Persistence & Privilege
The skill asks users to run persistent services (herd, herd-node), change user environment variables, and open a firewall port. While the skill sets always:false and requests no permanent platform-level privileges, these actions increase the system's network footprint and long-term exposure. The SKILL.md also references persistent config/log files (~/.fleet-manager) that could contain sensitive data but provides only a 'do not modify' note, not guidance on securing them.
What to consider before installing
This skill appears to implement a local Windows AI cluster, which is plausible — but it increases your machine's network exposure. Before installing or running anything:

  1. Review the 'ollama-herd' package source on the provided GitHub link and PyPI listing; don't pip-install blindly.
  2. Confirm how herd-node authenticates and authorizes nodes; avoid automatic-join behavior on machines in untrusted networks.
  3. When adding a firewall rule, restrict it to private/local interfaces or specific IPs rather than opening the port to all networks or the public internet.
  4. Inspect and protect ~/.fleet-manager (logs/DB) — they may contain request data; back them up and set appropriate file permissions.
  5. Prefer testing in an isolated VM or a non-sensitive machine first.

If you need higher assurance, ask the maintainer how node enrollment, TLS/authentication, and admin access control are implemented, and request guidance for secure deployment.

Like a lobster shell, security has layers — review code before you run it.

Version: latest (vk9761j2fkpymhf7ghh0d32jqzh845edb)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Computer: Clawdis
OS: Windows
Binaries (any of): curl, wget

SKILL.md

Windows AI — Local AI on Your Windows PCs

Run AI entirely on Windows. No cloud APIs, no subscriptions, no data leaving your network. Windows AI via Ollama Herd routes LLM requests across your Windows machines — your gaming PC, your work desktop, your laptop. One Windows AI endpoint serves them all.

Why Windows AI locally

  • Zero cost — no per-token charges. Your Windows PC runs unlimited AI inference.
  • Privacy — prompts and responses never leave your Windows network.
  • No rate limits — cloud APIs throttle. Your Windows AI hardware doesn't.
  • NVIDIA GPU support — Windows AI uses your RTX GPU via CUDA for fast inference.
  • Fleet routing — multiple Windows PCs share the AI workload automatically.

Windows AI quick start

# Install Windows AI router
pip install ollama-herd

# Start Windows AI on your main PC
herd          # Windows AI router on port 11435
herd-node     # register this Windows AI node

# On other Windows PCs
herd-node     # joins the Windows AI cluster automatically

Windows Firewall: Allow port 11435 — netsh advfirewall firewall add rule name="Windows AI" dir=in action=allow protocol=tcp localport=11435
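The rule above accepts traffic from any network. Per the security notes on this listing, you may want to scope it to your local subnet using netsh's remoteip=localsubnet keyword. A small sketch that only builds the narrower command as a string (paste the output into an elevated PowerShell prompt yourself; nothing is executed here):

```python
def firewall_rule(port: int = 11435, remote: str = "localsubnet") -> str:
    """Build a netsh rule that accepts traffic only from the given remote scope."""
    return (
        'netsh advfirewall firewall add rule name="Windows AI" '
        f"dir=in action=allow protocol=tcp localport={port} remoteip={remote}"
    )

# Print the hardened command instead of the open-to-everyone variant
print(firewall_rule())
```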

Use Windows AI

OpenAI SDK

from openai import OpenAI

# Your Windows AI endpoint
client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")

# Windows AI routes to the best available GPU
response = client.chat.completions.create(
    model="qwen3.5:32b",
    messages=[{"role": "user", "content": "Explain local AI vs cloud AI for Windows users"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Windows AI for coding

# Windows AI code generation
response = client.chat.completions.create(
    model="codestral",
    messages=[{"role": "user", "content": "Write a C# Windows service that monitors GPU temperature"}],
)
print(response.choices[0].message.content)

curl (PowerShell)

# Windows AI chat (use curl.exe: in Windows PowerShell, plain 'curl' aliases Invoke-WebRequest)
curl.exe http://localhost:11435/api/chat -d '{
  "model": "llama3.3:70b",
  "messages": [{"role": "user", "content": "Hello from Windows AI"}],
  "stream": false
}'

Windows AI hardware guide

Windows PC     | GPU                | RAM   | Best Windows AI models
Gaming desktop | RTX 4090 (24GB)    | 32GB+ | llama3.3:70b, qwen3.5:32b — full quality Windows AI
Gaming desktop | RTX 4080 (16GB)    | 16GB+ | phi4, codestral, qwen3.5:14b
Work laptop    | RTX 4060 (8GB)     | 16GB  | phi4-mini, gemma3:4b — fast Windows AI
Office desktop | Intel/AMD (no GPU) | 16GB  | phi4-mini, gemma3:1b — CPU Windows AI

Windows AI works with or without a GPU. NVIDIA GPUs dramatically accelerate inference.
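As a rough sanity check for the table above, the VRAM a 4-bit quantized model needs can be ballparked at about half a byte per parameter plus fixed overhead. The constants below are assumptions, not measurements; real usage varies with quantization scheme and context length:

```python
def fits_in_vram(params_b: float, vram_gb: float,
                 bytes_per_param: float = 0.55, overhead_gb: float = 1.5) -> bool:
    """Rough rule of thumb for a 4-bit quantized model: weights plus fixed overhead."""
    return params_b * bytes_per_param + overhead_gb <= vram_gb

print(fits_in_vram(70, 24))  # 70b on a 24GB RTX 4090: weights alone exceed VRAM
print(fits_in_vram(14, 16))  # 14b on a 16GB RTX 4080: fits comfortably
```

By this estimate a 70b model does not fit entirely in 24GB of VRAM, which is presumably why that table row also asks for 32GB+ system RAM: Ollama can spill layers to CPU memory.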

Windows AI environment setup

# Optimize Windows AI performance
[System.Environment]::SetEnvironmentVariable("OLLAMA_KEEP_ALIVE", "-1", "User")
[System.Environment]::SetEnvironmentVariable("OLLAMA_MAX_LOADED_MODELS", "-1", "User")
# Restart Ollama from the Windows system tray
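A quick way to confirm the variables took effect: User-scope changes only appear in processes started after the change, so check from a fresh shell. The "5m" fallback mirrors Ollama's documented default keep-alive; treat the other default as a placeholder:

```python
import os

def env_or_default(name: str, default: str) -> str:
    """Read an Ollama tuning variable, falling back to a default if unset."""
    return os.environ.get(name, default)

# "-1" tells Ollama to keep models resident indefinitely / not cap loaded models
print(env_or_default("OLLAMA_KEEP_ALIVE", "5m"))
print(env_or_default("OLLAMA_MAX_LOADED_MODELS", "unset"))
```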

Windows AI features

  • 7-signal scoring — picks the best Windows PC for every AI request
  • 15 health checks — monitors all Windows AI nodes in real-time
  • Auto-retry — transparent failover between Windows AI machines
  • vRAM-aware routing — knows which Windows GPU has room for the model
  • Request tagging — track per-project Windows AI usage
  • Web dashboard — http://localhost:11435/dashboard
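The 7-signal scoring algorithm itself is not documented on this page, so purely as a toy illustration of the vRAM-aware idea (two signals only, hypothetical node names, not the herd implementation):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_vram_gb: float
    queue_depth: int  # pending requests on this node

def pick_node(nodes, model_vram_gb):
    """Illustrative routing: keep nodes whose GPU can hold the model, prefer the idlest."""
    fits = [n for n in nodes if n.free_vram_gb >= model_vram_gb]
    if not fits:
        return None  # a real router might queue, fall back to CPU, or reject
    return min(fits, key=lambda n: n.queue_depth)

fleet = [Node("gaming-pc", 24.0, 2), Node("laptop", 8.0, 0)]
print(pick_node(fleet, 20.0).name)  # only the 24GB card can hold a ~20GB model
```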

Windows AI integrations

Works with any OpenAI-compatible tool on Windows:

  • Continue.dev (VS Code) — set endpoint to http://localhost:11435/v1
  • Cursor — Windows AI as local backend
  • LangChain — drop-in OpenAI replacement
  • CrewAI — multi-agent workflows on Windows AI
  • Open WebUI — chat interface for Windows AI
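All of these integrations reduce to the same wire format: a JSON POST to the router's /v1/chat/completions route. A dependency-free sketch that builds such a request with the standard library (the send step is commented out because it needs herd running on port 11435):

```python
import json
import urllib.request

BASE_URL = "http://localhost:11435/v1"  # the herd router from the quick start

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for the local router."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer not-needed"},  # accepted but unused locally
        method="POST",
    )

req = chat_request("phi4-mini", "Hello from Windows AI")
# with urllib.request.urlopen(req) as resp:           # requires herd on port 11435
#     print(json.load(resp)["choices"][0]["message"]["content"])
```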

Also available on Windows AI

Image generation

curl.exe http://localhost:11435/api/generate-image `
  -d '{"model": "z-image-turbo", "prompt": "futuristic Windows desktop", "width": 1024, "height": 1024}'

Embeddings

curl.exe http://localhost:11435/api/embed `
  -d '{"model": "nomic-embed-text", "input": "Windows AI local inference embeddings"}'
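Once the endpoint returns vectors, comparing texts is just cosine similarity. A minimal sketch with stand-in vectors (a real nomic-embed-text response has 768 dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors, e.g. from /api/embed."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# stand-in vectors for illustration only
v1, v2 = [0.1, 0.8, 0.3], [0.2, 0.7, 0.4]
print(round(cosine(v1, v2), 3))
```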


Contribute

Ollama Herd is open source (MIT). Windows AI enthusiasts welcome.

Guardrails

  • Windows AI model downloads require explicit user confirmation.
  • Windows AI model deletion requires explicit user confirmation.
  • Never delete or modify files in ~/.fleet-manager/.
  • No models are downloaded automatically — all pulls are user-initiated or require opt-in.

Files: 1 total
