Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Local AI Stack

v1.0.0

Transform your Mac into an offline AI workstation with Ollama and OpenCode, running curated local models for coding and reasoning without internet or API costs.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jaysclawd-cloud/local-ai-stack.

Prompt Preview: Install & Setup
Install the skill "Local AI Stack" (jaysclawd-cloud/local-ai-stack) from ClawHub.
Skill page: https://clawhub.ai/jaysclawd-cloud/local-ai-stack
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install local-ai-stack

ClawHub CLI


npx clawhub@latest install local-ai-stack
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill claims a "fully offline" workstation and "no internet required," yet the runtime instructions explicitly require network access to download and install Ollama, pull multiple models, and (optionally) auto-update them. The overall requested artifacts (Ollama, OpenCode, model downloads) are coherent with the name, but the offline claim is misleading.
Instruction Scope
SKILL.md gives concrete shell commands to run, references user-local files (~/.ollama/*), and suggests adding a cron job for auto-updates. It does not ask for unrelated files or credentials. However, it directs the user to execute networked install/pull steps and to schedule periodic updates — actions that have side effects beyond a one-time install and could fetch arbitrary remote content.
Install Mechanism
There is no formal install spec, but the instructions tell the user to run 'curl -fsSL https://ollama.com/install.sh | sh' (piping a remote script directly to sh) and to pull large models via 'ollama pull'. While the domain used (ollama.com) appears to be the official site, piping remote scripts to a shell is a high-risk pattern because it executes whatever the remote endpoint returns. Model downloads are unspecified and could fetch large binaries from remote hosts.
Credentials
The skill declares no required environment variables, no credentials, and no config paths beyond standard Ollama locations (~/.ollama). The requested access appears proportionate to installing and running local models: no secrets or unrelated credentials are requested.
Persistence & Privilege
The skill does not set always:true and does not autonomously install itself, but it instructs users to add a cron job to run an update script periodically. That creates persistent, scheduled network activity (model updates) under the user's account and should be reviewed before enabling.
What to consider before installing
This skill mostly does what it says: it installs Ollama/OpenCode and pulls local models. Before running anything, be aware of two things: (1) the README's claim "fully offline" is incorrect — you must fetch installers and model files from the Internet to set it up and to auto-update; (2) the install step uses 'curl ... | sh', which executes a live remote script on your machine — only run that if you trust the exact URL and have inspected the script. If you want true offline usage, download and verify installers and model files on a networked machine first, inspect any install scripts, and avoid or carefully audit the cron-based auto-update behavior so it doesn't later fetch unexpected content.
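One way to follow that advice is to save the installer to a local file before anything executes. A minimal sketch of the pattern (the function name and paths are illustrative, not part of the skill):

```shell
# fetch_and_review: download an install script to a file instead of piping
# it straight to sh, so you can read it before anything executes.
fetch_and_review() {
  local url="$1" dest="$2"
  curl -fsSL "$url" -o "$dest" || return 1
  echo "Saved to ${dest}. Review it, then run: sh ${dest}"
}

# Manual usage:
#   fetch_and_review https://ollama.com/install.sh /tmp/ollama-install.sh
#   less /tmp/ollama-install.sh
#   sh /tmp/ollama-install.sh
```

This trades one extra step for the ability to diff the script against a previously reviewed copy before re-running it.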

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e0ke5pmwe8scz3pe9f5fpmx84ske9
61 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

SKILL.md — Local AI Stack

Purpose

Transform any Mac into a powerful offline AI workstation. Installs Ollama (local model runner) + OpenCode (terminal coding agent) with the best pre-selected models. Fully offline — no API costs, no internet required.

What You Get

  • Ollama — Local model runner (14GB models, ~$0 to run)
  • OpenCode — Terminal coding agent with free built-in models
  • 4 curated models — qwen2.5-coder, mistral, gemma3, llama3.2
  • Bi-weekly auto-updates — updated model versions pulled automatically via cron
  • OpenClaw integration — Works with your existing agent

Requirements

  • macOS (Apple Silicon recommended)
  • 24GB+ RAM (for larger models)
  • 50GB+ free disk space
  • Homebrew installed

Installation

Step 1: Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Or download from: https://ollama.com/download

Step 2: Pull Models

ollama pull qwen2.5-coder    # Best for coding
ollama pull mistral          # Fast tasks
ollama pull gemma3           # Reasoning
ollama pull llama3.2         # General purpose
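The four pulls above can also be wrapped in a loop that keeps going past failures, which helps on a flaky connection. A sketch (the function name is illustrative):

```shell
# pull_models: pull each curated model in turn and report failures
# instead of stopping at the first error.
pull_models() {
  local failed=0 model
  for model in qwen2.5-coder mistral gemma3 llama3.2; do
    if ollama pull "$model"; then
      echo "ok: $model"
    else
      echo "failed: $model" >&2
      failed=1
    fi
  done
  return "$failed"
}

# Manual usage:
#   pull_models && ollama list
```

After a successful run, `ollama list` should show all four models.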

Step 3: Install OpenCode

brew install opencode

Step 4: Configure OpenCode

# Test free built-in model
opencode run "Hello" --model opencode/big-pickle

Usage

Ollama Commands

# Run a local model
ollama run qwen2.5-coder "Write a Python function..."

# List installed models
ollama list

# Pull latest model version
ollama pull qwen2.5-coder

# Remove a model
ollama rm mistral

OpenCode Commands

# Interactive coding session
opencode

# Single command
opencode run "Write a React component" --model opencode/big-pickle

# List available models
opencode models

# Help
opencode --help

Model Selection Guide

Model           Size     Best For
qwen2.5-coder   4.7GB    Coding (primary)
mistral         4.4GB    Fast responses
gemma3          3.3GB    Reasoning
llama3.2        2.0GB    General purpose

When to Use Local vs Cloud

Use Local When:

  • Offline (no internet)
  • Privacy-sensitive work
  • Quick coding tasks
  • Cost-sensitive (zero API fees)
  • Simple to medium complexity tasks

Use Cloud When:

  • Complex multi-step reasoning
  • Web search required
  • Long creative writing
  • Image generation
  • Advanced AI capabilities

Bi-Weekly Auto-Update

Add to cron for automatic model updates:

# Edit crontab
crontab -e

# Add this line (1st and 15th of each month at 9 AM)
0 9 1,15 * * /path/to/update-models.sh
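The referenced update-models.sh is not shipped with the skill; a minimal sketch of what it might contain (the model list and log location are assumptions, so adjust before use):

```shell
#!/bin/bash
# Hypothetical update-models.sh: re-pull each model so local copies stay
# current. Model list and log path are assumptions, not skill-provided.
set -u

MODELS="qwen2.5-coder mistral gemma3 llama3.2"

update_models() {
  local log="${LOG:-$HOME/.ollama/update.log}" model
  mkdir -p "$(dirname "$log")"
  for model in $MODELS; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') pulling $model" >> "$log"
    ollama pull "$model" >> "$log" 2>&1
  done
}

# Only run when ollama is actually installed.
if command -v ollama >/dev/null 2>&1; then
  update_models
fi
```

Because the cron job runs unattended and fetches remote content, review the log occasionally and keep the model list pinned to names you recognize.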

Troubleshooting

Ollama won't start

# Check if running
ps aux | grep ollama

# Start manually
ollama serve

# Check logs
cat ~/.ollama/ollama.log
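Ollama also serves a local HTTP API (port 11434 by default), so a quick probe can tell you whether the server is up before digging through logs. A sketch assuming the default port:

```shell
# check_ollama: probe the local Ollama HTTP API (default port 11434)
# and report whether the server is responding.
check_ollama() {
  local port="${OLLAMA_PORT:-11434}"
  if curl -s --max-time 2 "http://localhost:${port}/api/tags" >/dev/null; then
    echo "ollama is responding on port ${port}"
  else
    echo "ollama is not responding; try: ollama serve"
  fi
}

check_ollama
```

The /api/tags endpoint lists installed models, so a successful probe also confirms the model store is readable.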

Model runs out of memory

  • Close other apps
  • Use smaller model (llama3.2 instead of qwen2.5-coder)
  • Check available RAM: top -l 1 | head -20 (the -l flag is needed when piping top on macOS)

OpenCode not found

# Find installation
which opencode

# Reinstall if needed
brew reinstall opencode

Files

  • Models stored: ~/.ollama/models/
  • Config: ~/.ollama/config.json
  • Logs: ~/.ollama/ollama.log
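To see how much disk space the pulled models are actually using (path per the list above):

```shell
# Report model disk usage; falls back to a message if nothing has been
# pulled yet and the directory does not exist.
du -sh "$HOME/.ollama/models" 2>/dev/null || echo "no models directory yet"
```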

License

  • Ollama: MIT
  • OpenCode: MIT

Author

Built with ❤️ for the OpenClaw community

Notes

  • Models load into RAM when used, unload when idle
  • Only one model runs at a time by default
  • For best performance, use Apple Silicon Mac with 24GB+ RAM
