Linux AI Server

v1.0.0

Linux AI Server — turn Linux servers into a local AI inference cluster. Headless Linux AI with systemd, NVIDIA CUDA, and zero GUI overhead.

by Twin Geeks (@twinsgeeks)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Pending
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description (Linux AI Server) match the instructions: installing Ollama, pip-installing an 'ollama-herd' router/node component, configuring systemd and the firewall, and using GPU tooling. The required binaries (curl/wget, optionally python3/pip/systemctl/nvidia-smi) are appropriate for this purpose.
Instruction Scope
SKILL.md stays within the stated purpose (install, run, and monitor a local inference fleet). It does instruct editing systemd units, enabling services, opening a network port, and querying local APIs. No steps ask for unrelated files, cross-skill configs, or unrelated credentials. One scope note: it instructs running an install script fetched from the network (curl | sh), which executes remote code; this is within the scope of installation, but it is an operational risk worth inspecting before running.
Install Mechanism
There is no registry install spec, but the runtime instructions recommend network installs: `curl -fsSL https://ollama.ai/install.sh | sh` and `pip install ollama-herd`. These are common for this kind of project (ollama.ai is the expected upstream), but piping a remote shell script and installing Python packages from PyPI are moderate-risk operations and should be reviewed before execution.
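The curl | sh pattern can be replaced with a fetch-then-verify step. A minimal sketch, assuming you can obtain an expected checksum out of band; the `fetch_and_verify` helper is hypothetical and not part of the skill:

```shell
# Download an installer to disk and verify it against a known checksum
# before anything is executed. fetch_and_verify is a hypothetical helper,
# not something the skill ships.
fetch_and_verify() {
  url="$1"; expected="$2"; out="$3"
  curl -fsSL -o "$out" "$url" || return 1          # fetch to a file, never pipe to sh
  actual=$(sha256sum "$out" | awk '{print $1}')    # hash what was actually downloaded
  [ "$actual" = "$expected" ]                      # fail on any mismatch
}

# Usage sketch (run the script only after the check passes and you have read it):
#   fetch_and_verify https://ollama.ai/install.sh "$EXPECTED_SHA256" /tmp/install.sh \
#     && less /tmp/install.sh && sh /tmp/install.sh
```

The same pattern applies to the PyPI step: `pip download ollama-herd --no-deps` lets you unpack and read the package before installing it.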
Credentials
The skill does not request environment variables, credentials, or config paths beyond local fleet logs/db under the user's home. The example SDK usage does not request secret keys. Suggested systemd environment variables (OLLAMA_*) are configuration for Ollama and are proportional to the task.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or system-wide agent settings. It instructs enabling systemd services (router/node) so the software runs at boot — this is expected for server software and is within scope.
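The persistence described above amounts to an ordinary systemd unit. A sketch of what a least-privileged node unit might look like; the unit name, binary path, and port are assumptions based on this review, not copied from the skill:

```ini
# /etc/systemd/system/ollama-node.service (hypothetical example)
[Unit]
Description=Ollama inference node
After=network-online.target

[Service]
User=ollama
Group=ollama
# Bind to loopback; expose the port deliberately, not by default.
Environment=OLLAMA_HOST=127.0.0.1:11435
ExecStart=/usr/local/bin/ollama serve
Restart=on-failure
# Basic sandboxing for a long-running network service.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true

[Install]
WantedBy=multi-user.target
```

`systemctl enable --now ollama-node` then gives the boot-time behavior the skill describes, under an unprivileged service user.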
Assessment
This skill appears coherent for running an Ollama-based inference fleet, but be cautious before running the network install steps. Before installing:

1. Review the contents of https://ollama.ai/install.sh rather than piping it blindly to sh.
2. Verify the 'ollama-herd' PyPI package and its maintainers.
3. Run initial installs in an isolated VM or test server.
4. Prefer least-privileged service users (the systemd examples use 'ollama').
5. Restrict the exposed port (11435) to your internal network, or put authentication or a reverse proxy in front of it.
6. Keep an eye on logs and resource usage.

If you want a lower-risk approach, obtain the installers from official release artifacts, verify checksums/signatures, or use distro-packaged alternatives where available.
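Recommendation (5) can be implemented with a reverse proxy. A minimal nginx sketch with an internal-network allowlist and basic auth in front of the router port; the listen port, network range, and htpasswd path are assumptions:

```nginx
# Hypothetical reverse proxy in front of the Ollama router on 11435.
server {
    listen 8080;

    # Only allow the internal network; deny everything else.
    allow 10.0.0.0/8;
    deny  all;

    # Require credentials even from inside the allowed range.
    auth_basic           "ollama-fleet";
    auth_basic_user_file /etc/nginx/ollama.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:11435;
        proxy_set_header Host $host;
    }
}
```

With this in place, port 11435 itself can stay bound to loopback and never be opened on the firewall.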

Like a lobster shell, security has layers — review code before you run it.

latest: vk97145sj0g25r5hqh0bvdmrzxx845f69


Runtime requirements

server: Clawdis
OS: Linux (any)
bin: curl, wget
