Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Grago

v1.0.1

Delegate web and API data fetching to local LLMs for research tasks, saving tokens and keeping data private while using your local machine for analysis.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the prompt below exactly, then paste it into OpenClaw to install solsuk/grago.

Prompt preview (Install & Setup):
Install the skill "Grago" (solsuk/grago) from ClawHub.
Skill page: https://clawhub.ai/solsuk/grago
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install grago

ClawHub CLI


npx clawhub@latest install grago
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (delegate web/API fetches to local LLMs) match the code and instructions: the scripts fetch URLs, read files, transform data, and send results to a local Ollama model. The installer pulls Ollama and a model as described. Requested capabilities are proportional to the declared purpose.
Instruction Scope
SKILL.md and grago.sh allow execution of arbitrary shell commands (cmd_pipe uses eval on fetch/transform commands; cmd_fetch and cmd_research run transforms via eval). The research flow can read arbitrary file paths from sources.yaml (cat $path). These behaviors are explicitly documented in SECURITY.md, but they mean the skill can access files and run commands beyond narrow fetch tasks — so it must only be used in trusted, single-user environments.
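As a toy illustration of why eval-ing a transform string is risky (this is not Grago's actual code), note that anything appended to the string executes as a full shell command:

```shell
# Toy illustration (not Grago's code): eval runs the whole string it is
# handed, so a command appended to a "transform" also executes.
transform='tr a-z A-Z; echo extra-command-ran'
out=$(echo "hello" | eval "$transform")
printf '%s\n' "$out"   # prints HELLO, then extra-command-ran
```

The same pattern applies to any transform or fetch string the agent constructs: there is no boundary between "data processing" and "command execution".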
Install Mechanism
There is no packaged install spec in the registry, but install.sh runs an installer: on macOS it uses brew to install Ollama; on other OSes it runs curl -fsSL https://ollama.ai/install.sh | sh. Pulling models via ollama pull is expected. Using an official vendor URL (ollama.ai) is reasonable, but piping a remote install script to sh is higher-risk than packaged installs and should be inspected before running.
Credentials
The skill declares no required env vars or credentials, and the code does not require external secrets. However, sources.yaml examples in README show header values like Authorization: "Bearer ${API_KEY}", implying users may expose env-based secrets via sources config; the skill does not declare or manage those. The script reads arbitrary file paths and could expose local secrets if sources.yaml is used unsafely. This is consistent with the tool's purpose but worth caution.
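Based on the README's header example, a sources.yaml might look something like the sketch below. The field names here are assumptions, not Grago's documented schema; the point is where an env-based secret and a local file path enter the flow:

```yaml
# Hypothetical sources.yaml sketch: field names are assumed, not Grago's
# documented schema.
sources:
  - type: url
    url: https://api.example.com/data
    headers:
      Authorization: "Bearer ${API_KEY}"  # env secret leaves your shell here
  - type: file
    path: /var/log/app.log                # any path readable by your user
```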
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. Installer writes to typical per-user locations (~/.grago, ~/.local/bin or /usr/local/bin) and copies SKILL.md into OpenClaw workspace if present. It does not modify other skills or global agent settings beyond installing its files.
Scan Findings in Context
[shell-eval-user-input] expected: grago.sh intentionally uses eval to run transform and fetch commands (e.g., eval "$transform", eval "$fetch_cmd"); SECURITY.md explicitly states this is by design. The scanner finding is correct and expected.
[arbitrary-file-read] expected: cmd_research supports type:file and reads paths via cat $path, allowing local file inclusion as part of research. This is necessary for local-log research use cases but means local secrets could be read if configured.
[prompt-injection-risk] expected: SKILL.md and SECURITY.md acknowledge prompt-injection risks: if the agent is compromised, Grago will execute arbitrary commands. This is an intended trade-off for the feature.
Assessment
Grago is coherent with its stated goal but deliberately runs arbitrary shell commands and can read local files — this makes it dangerous on shared or untrusted machines. Only install and run Grago on devices you fully control (personal Mac/VPS/workstation). Before installing: review install.sh (it may run ollama's remote install script), inspect grago.sh for eval usage, and ensure no sensitive files or credentials are reachable from sources.yaml or from commands you allow the agent to run. Do not use on multi-tenant systems, public-facing agents, or machines containing secrets you can't afford to expose. If you need narrower, safer behavior, prefer a tool that uses explicit, whitelisted HTTP calls rather than eval-ing shell commands.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9705fxwbj5k2vevft11cfk79581tdj7
524 downloads · 0 stars · 2 versions
Updated 15h ago · v1.0.1 · MIT-0

Grago

Delegate research and data-fetch tasks to a free local LLM. Save tokens. Use your machine.

Grago bridges the gap between your OpenClaw agent and local LLMs (Ollama, llama.cpp, etc.) that can't use tools natively. It runs shell scripts to fetch live data from the web, APIs, and local files — then pipes the results into your local model with a focused prompt.

Your cloud model stays sharp. Your local machine does the grunt work. Your token bill drops.

⚠️ Security Model

Grago executes shell commands. This is intentional — it's the only way to give tool-less local LLMs access to external data.

Safe for: Trusted, single-user environments (your own Mac Mini, VPS, workstation)
NOT safe for: Multi-tenant systems, public APIs, untrusted agents

If your OpenClaw agent is compromised via prompt injection, Grago can execute arbitrary commands. This is the trade-off for free local compute. Read SECURITY.md in the repo for full details.

When to Use This Skill

Use Grago when:

  • You need live data fetched (web pages, APIs, RSS feeds, logs)
  • The task is research-heavy and doesn't need your primary model
  • You want to keep data on your own machine (privacy)
  • You want to save tokens by offloading analysis to a local LLM

How It Works

  1. Fetch — Shell scripts pull live data (curl, jq, grep, etc.)
  2. Analyze — Results are piped to your local Ollama model with a prompt
  3. Return — Structured analysis comes back to your OpenClaw agent
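The three steps above amount to an ordinary shell pipeline. A toy sketch with the fetch and the model call stubbed out, so the data flow is visible (function names are illustrative, not Grago's internals):

```shell
# Stubbed sketch of fetch -> analyze -> return (names are illustrative).
fetch_source() { printf '{"items": 3}'; }            # stands in for curl/jq
local_model()  { printf 'ANALYSIS: %s\n' "$(cat)"; } # stands in for `ollama run <model>`
result=$(fetch_source | local_model)
printf '%s\n' "$result"   # prints: ANALYSIS: {"items": 3}
```

In the real tool, fetch_source is a live curl/jq invocation and local_model is an Ollama call wrapped in a focused prompt.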

Usage

# Fetch a URL and analyze locally
grago fetch "https://example.com" \
  --analyze "Summarize the key points" \
  --model gemma2

# Multi-source research from a YAML config
grago research \
  --sources sources.yaml \
  --prompt "What are the main themes across these sources?"

# Pipe any shell command into your local model
grago pipe \
  --fetch "curl -s https://api.example.com/data" \
  --transform "jq .results" \
  --analyze "Identify trends and flag outliers"

Configuration

Config file: ~/.grago/config.yaml

default_model: gemma2        # Your preferred Ollama model
timeout: 30                  # Seconds per fetch
max_input_chars: 16000       # Input truncation limit
output_format: markdown      # markdown | json | text
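For a flat key: value file like this, a bash script could read a key with awk. This is a sketch under that assumption; Grago's actual parsing may differ:

```shell
# Read a key from a flat `key: value` YAML file, stripping trailing comments
# (assumed approach; Grago's real parser may differ).
get_config() {
  awk -F': *' -v key="$1" '$1 == key { sub(/[[:space:]]*#.*$/, "", $2); print $2 }' "$2"
}
```

Usage: `get_config default_model ~/.grago/config.yaml` would print `gemma2` for the example config above.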

Requirements

  • Ollama installed and running locally (install.sh handles this)
  • At least one model pulled in Ollama (gemma2, mistral, llama3, etc.)
  • bash, curl, jq

Installation

git clone https://github.com/solsuk/grago.git
cd grago && ./install.sh

Notes for the Agent

  • Prefer pipe mode over fetch --analyze for reliability (avoids Ollama TTY spinner issues)
  • Default model is whatever is set in ~/.grago/config.yaml; override per-call with --model
  • Input is truncated to max_input_chars before being sent to the local model
  • Local model responses can be slow (5–30s depending on hardware and model size) — this is expected
  • Grago is for research and fetch delegation — not for tasks requiring your primary model's reasoning
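The truncation step in the notes above can be sketched with head -c (an assumed mechanism; Grago may truncate differently, e.g. on a different boundary):

```shell
# Truncate input to max_input_chars before sending it to the local model
# (head -c is an assumed mechanism, not necessarily Grago's).
max_input_chars=16000
input=$(printf 'x%.0s' $(seq 1 20000))                 # 20,000-char stand-in payload
truncated=$(printf '%s' "$input" | head -c "$max_input_chars")
```

Byte-based truncation like this can split a multi-byte UTF-8 character at the cut point, which is worth knowing if your sources are non-ASCII.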
