{"skill":{"slug":"linux-ai-server","displayName":"Linux AI Server","summary":"Linux AI Server — turn Linux servers into a local AI inference cluster. Headless Linux AI with systemd, NVIDIA CUDA, and zero GUI overhead. Linux AI server f...","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":141,"installsAllTime":2,"installsCurrent":2,"stars":0,"versions":1},"createdAt":1775256822456,"updatedAt":1775256840803},"latestVersion":{"version":"1.0.0","createdAt":1775256822456,"changelog":"Initial release of linux-ai-server — turn Linux machines into a headless AI inference cluster.\n\n- Run headless Linux AI servers using systemd with no GUI or Docker overhead.\n- Supports distributed inference with routing and fleet management across Ubuntu, Debian, RHEL, Fedora, and more.\n- Native support for NVIDIA CUDA GPUs (experimental ROCm & CPU support).\n- REST and OpenAI API compatibility; works with common AI models (Llama, Qwen, DeepSeek, Phi, Mistral).\n- Includes admin tools, firewall setup, monitoring endpoints, and setup guides.\n- Guardrails require explicit user confirmation for all model downloads and deletions.","license":"MIT-0"},"metadata":{"os":["linux"],"systems":null},"owner":{"handle":"twinsgeeks","userId":"s17dgy27g44azc3tday4qh394d83ensj","displayName":"Twin Geeks","image":"https://avatars.githubusercontent.com/u/261838102?v=4"},"moderation":null}