{"skill":{"slug":"wsl2-local-ai","displayName":"Wsl2 Local Ai","summary":"WSL2 Local AI — run LLMs on Windows via WSL2 with NVIDIA GPU passthrough. WSL2 AI development with Ollama, CUDA, and Docker. WSL2 Ollama fleet routing for Wi...","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":112,"installsAllTime":2,"installsCurrent":2,"stars":0,"versions":1},"createdAt":1775256845582,"updatedAt":1775256866621},"latestVersion":{"version":"1.0.0","createdAt":1775256845582,"changelog":"Initial release of WSL2 Local AI, enabling full-stack local LLM development for Windows users via WSL2 and NVIDIA GPUs.\n\n- Run LLMs on Windows through WSL2 with native GPU passthrough for near-Linux performance.\n- Integrated Ollama Herd for routing AI requests across WSL2 and Windows environments.\n- Step-by-step setup including Ollama, CUDA, Docker, and Ollama Herd node registration.\n- Out-of-the-box support for popular LLMs, AI image generation, and embeddings.\n- Clear guidance for Windows developers to use local AI endpoints from VS Code, Python, curl, and Docker.\n- Safety guardrails: all model downloads and deletions require user confirmation; no automatic model pulls.","license":"MIT-0"},"metadata":{"os":["windows"],"systems":null},"owner":{"handle":"twinsgeeks","userId":"s17dgy27g44azc3tday4qh394d83ensj","displayName":"Twin Geeks","image":"https://avatars.githubusercontent.com/u/261838102?v=4"},"moderation":null}