RTX Local AI

v1.0.0

RTX Local AI — turn your gaming PC into a local AI server. RTX 4090, RTX 4080, RTX 4070, and RTX 3090 GPUs run Llama, Qwen, DeepSeek, Phi, and Mistral locally. Gaming PC...

by Twin Geeks (@twinsgeeks)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Pending · View report →
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description describe turning an RTX gaming PC into a local AI server via Ollama Herd. The SKILL.md tells the user to 'pip install ollama-herd' and run 'herd'/'herd-node', references local endpoints (localhost:11435), optional tools (python3, pip, nvidia-smi), and config paths under ~/.fleet-manager — all of which are consistent with that purpose.
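As a quick sanity check, the local endpoint can be probed with curl. Note the route is an assumption: /api/tags is Ollama's model-listing path, and whether ollama-herd mirrors it on port 11435 is inferred from the description, not confirmed.

```shell
# Probe the herd router on its documented local port. The /api/tags path is
# borrowed from Ollama's HTTP API; ollama-herd may expose a different route.
curl -sf http://localhost:11435/api/tags \
  || echo "herd router not reachable on :11435"
```

If the router is down, the fallback message prints instead of an error, which makes the probe safe to drop into scripts.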
Instruction Scope
Instructions are narrowly scoped to installing and operating Ollama Herd, querying local endpoints, and configuring environment variables or systemd to keep models resident. Two things to note: (1) herd-node auto-discovers the router via mDNS, which opens network discovery and can expose the service to other LAN hosts; (2) the doc asks users to edit systemd (sudo) and set persistent environment variables — these are expected for persistent model hosting but require elevated privileges and care. There is no attempt to read unrelated files or exfiltrate secrets in the provided instructions.
Install Mechanism
The skill is instruction-only (no install spec). It tells users to run 'pip install ollama-herd' (PyPI). That is expected for this functionality but does mean you will install third-party code from the public package index — moderate risk if the package or its version is unvetted. The skill itself doesn't bundle code or download arbitrary archives.
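One way to contain that moderate risk is to keep the package out of system Python entirely. A minimal sketch, assuming a standard python3 with the venv module; the version pin is a placeholder you would fill in after reviewing the package on PyPI:

```shell
# Create an isolated virtualenv so the third-party package never touches
# system Python and can be removed by deleting one directory.
python3 -m venv "$HOME/herd-venv"

# After vetting the publisher and release on PyPI, install a pinned version
# inside the venv (version placeholder - substitute the release you reviewed):
# "$HOME/herd-venv/bin/pip" install ollama-herd==<reviewed-version>
```

Pinning the exact version you audited prevents a later release from being pulled in silently.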
Credentials
The skill declares no required credentials and the environment variables it references (OLLAMA_KEEP_ALIVE, OLLAMA_MAX_LOADED_MODELS) are directly related to the runtime behavior described. The config paths (~/.fleet-manager/...) are consistent with a fleet manager and are not system-wide secrets.
Persistence & Privilege
The skill does not request 'always: true' or elevated persistent privileges on its own. However, following its instructions (editing systemd, setting persistent environment variables, enabling mDNS-based discovery) requires administrator privileges and will make the service persist and potentially accessible on the local network. That is coherent with the stated goal but increases exposure.
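For reference, the kind of systemd change the instructions imply can be done as a drop-in override rather than by editing the vendor unit file directly, which is easier to audit and revert. This is a sketch only: the unit name herd.service is an assumption, and you would create the drop-in with `sudo systemctl edit herd.service` rather than hand-editing files.

```ini
# Drop-in override for the (assumed) herd.service unit, created via
# `sudo systemctl edit herd.service`. Keeps loaded models resident using
# Ollama's documented environment variables.
[Service]
Environment=OLLAMA_KEEP_ALIVE=-1
Environment=OLLAMA_MAX_LOADED_MODELS=2
```

Drop-ins live under /etc/systemd/system/&lt;unit&gt;.d/ and are removed by deleting the override file and running `sudo systemctl daemon-reload`.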
Assessment
This skill appears to be what it says: guidance for running Ollama Herd on RTX GPUs. Before proceeding:

1. Inspect the referenced project and the exact PyPI package/version (https://pypi.org/project/ollama-herd/ and the GitHub repo) to ensure you trust the publisher.
2. Prefer installing into a virtualenv or container rather than system Python.
3. Be aware that herd-node uses mDNS auto-discovery — only enable it on trusted LANs, or firewall/bind the service to localhost if you don't want other devices to access it.
4. Editing systemd requires sudo; back up service files and understand the change.
5. Confirm model downloads are truly opt-in and that large models won't be pulled automatically.

If you need a higher-assurance review, provide the PyPI package contents or the GitHub repo code for deeper analysis.
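Binding to localhost can be attempted through the environment. OLLAMA_HOST is upstream Ollama's bind-address variable; whether ollama-herd honors it is an assumption, so verify in its docs before relying on this:

```shell
# Restrict the router to loopback before starting it. OLLAMA_HOST is the
# upstream Ollama bind-address variable; that ollama-herd respects it is
# an assumption - confirm in the project's documentation.
export OLLAMA_HOST=127.0.0.1:11435
echo "router will bind to $OLLAMA_HOST"
# herd   # start the router in this same shell so it inherits the variable
```

If the variable is ignored, fall back to a host firewall rule that blocks inbound traffic to port 11435 from non-loopback addresses.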

Like a lobster shell, security has layers — review code before you run it.

latest: vk977gzdng0wbj2cyzmymvex5zx8448hh


Runtime requirements

Clawdis
OS: Linux · Windows
Any bin: curl, wget
