{"skill":{"slug":"rtx-local-ai","displayName":"RTX Local AI","summary":"RTX Local AI — turn your gaming PC into a local AI server. RTX 4090, RTX 4080, RTX 4070, RTX 3090 run Llama, Qwen, DeepSeek, Phi, Mistral locally. Gaming PC...","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":105,"installsAllTime":2,"installsCurrent":2,"stars":0,"versions":1},"createdAt":1775256830519,"updatedAt":1775256852876},"latestVersion":{"version":"1.0.0","createdAt":1775256830519,"changelog":"Initial release of rtx-local-ai — turn your gaming PC with an NVIDIA RTX GPU into a local AI server.\n\n- Run large language models (LLMs) like Llama, Qwen, DeepSeek, Phi, Mistral locally on RTX GPUs (4090, 4080, 4070, 3090, etc.)\n- Supports single or multiple RTX PCs for AI workload sharing (fleet mode)\n- Easy setup on Windows and Linux using Ollama Herd\n- No cloud costs: your GPU is the server for inference, code generation, image generation, and embeddings\n- Includes monitoring dashboard and CLI status tools\n- Explicit user control for model download and deletion; no automatic downloads","license":"MIT-0"},"metadata":{"os":["linux","windows"],"systems":null},"owner":{"handle":"twinsgeeks","userId":"s17dgy27g44azc3tday4qh394d83ensj","displayName":"Twin Geeks","image":"https://avatars.githubusercontent.com/u/261838102?v=4"},"moderation":null}