Grago
Warn
Audited by ClawScan on May 10, 2026.
Overview
Grago is transparent about its design, but it gives an agent broad local shell-command and file-access power that should be reviewed carefully before use.
Install Grago only on a trusted, single-user machine where you are comfortable letting your agent run local shell commands. Review the installer first, keep the model endpoint local or trusted, avoid pointing it at sensitive files, and consider sandboxing or adding approval gates before use.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If the agent is tricked or given a bad command, it could run commands with your local user privileges, including reading, changing, deleting, or transmitting files.
The pipe command executes agent-supplied shell strings directly, and similar eval-based transforms are used in other modes. This gives the invoking agent broad local command execution without an allowlist or approval gate.
data=$(eval "$fetch_cmd") || err "Fetch command failed"
...
data=$(echo "$data" | eval "$transform_cmd") || err "Transform failed"
Use only on an isolated machine you control. Add sandboxing, command allowlists, path restrictions, and explicit user confirmation before high-impact commands.
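As a hedged sketch of such a gate (not part of Grago; the allowlist contents and the run_gated name are illustrative), a wrapper could check a command's first word against an allowlist and require explicit confirmation before evaluating it:

#!/usr/bin/env bash
# Illustrative gate: only allowlisted first words may run, and each
# command still requires interactive confirmation before eval.
allowlist=(curl jq grep head sort)
run_gated() {
  local cmd="$1"
  local first_word="${cmd%% *}"
  local ok
  for ok in "${allowlist[@]}"; do
    if [[ "$first_word" == "$ok" ]]; then
      read -r -p "Run '$cmd'? [y/N] " answer
      [[ "$answer" == "y" ]] && eval "$cmd"
      return $?
    fi
  done
  echo "Blocked: '$first_word' is not on the allowlist" >&2
  return 1
}

With this in place, run_gated "curl -s https://example.com" prompts before executing, while run_gated "rm -rf ~/docs" is rejected outright.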
Installing the skill can execute code fetched from the network and download model artifacts, so what ends up installed depends on whatever those external sources serve at that moment.
The installer executes an external remote install script without pinning or local review. This is related to the stated Ollama setup, but it expands trust to a live third-party script at install time.
curl -fsSL https://ollama.ai/install.sh | sh
Prefer installing Ollama manually from a trusted, verified source, or pin and review installer content before running it.
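A minimal pinning sketch, assuming you have already reviewed the script once and recorded its SHA-256 (the checksum below is a placeholder, not a real value):

curl -fsSL https://ollama.ai/install.sh -o /tmp/ollama-install.sh
# Verify against the checksum you recorded during review; replace the
# placeholder with your own recorded value before running.
echo "PLACEHOLDER_SHA256  /tmp/ollama-install.sh" | sha256sum -c - \
  && sh /tmp/ollama-install.sh

This keeps a changed upstream script from running silently: if the served content differs from what you reviewed, the checksum comparison fails and the install never starts.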
Data you fetch or read through Grago may be sent to whichever model endpoint is configured, so a non-local endpoint could receive sensitive content.
Fetched web data or local file contents are sent to an OpenAI-compatible model endpoint. The default is localhost, which matches the local-LLM purpose, but the endpoint is configurable.
api_base=$(get_config "api_base" "http://localhost:11434/v1")
...
curl -s "${api_base}/chat/completions" ... --arg user "${prompt}\n\n---\nDATA:\n${input}"
Keep api_base pointed at a trusted local endpoint unless you intentionally want to send data elsewhere, and avoid using sensitive files as sources.
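One hedged way to enforce that is a pre-flight check before any request is sent; api_base comes from the snippet above, err is the skill's error helper seen in earlier snippets, and the guard itself is an illustrative addition rather than existing Grago code:

# Refuse to send data anywhere but a local endpoint.
case "$api_base" in
  http://localhost:*|http://127.0.0.1:*)
    ;;  # local endpoint, proceed
  *)
    err "Refusing non-local api_base: $api_base"
    ;;
esac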
A local model server may continue running after installation and consume local resources until stopped.
The installer may start the Ollama service in the background if it is not already running. This supports the skill's local-model purpose, but it is a long-running local process started by installation.
ollama serve &>/dev/null &
Confirm you want Ollama running locally, monitor resource use, and stop or disable the service when it is not needed.
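The exact stop command depends on how Ollama was installed; on Linux the official installer typically registers a systemd service named ollama, while a manually backgrounded process can be stopped by name:

# If the installer registered a systemd service (Linux):
sudo systemctl stop ollama
sudo systemctl disable ollama
# If it was started as a plain background process, as in the snippet above:
pkill -f "ollama serve"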
