clawgrep

v0.1.4

Grep-like CLI with hybrid semantic and keyword search. Combines semantic embedding search with keyword matching for high-quality code and document retrieval.

by Andrew Schonhoffer (@schonhoffer)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for schonhoffer/clawgrep.

Prompt preview: Install & Setup
Install the skill "clawgrep" (schonhoffer/clawgrep) from ClawHub.
Skill page: https://clawhub.ai/schonhoffer/clawgrep
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install clawgrep

ClawHub CLI


npx clawhub@latest install clawgrep
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the instructions: the SKILL.md documents how to use a local clawgrep CLI and how it behaves. All required functionality (embedding model download, local cache, grep-compatible output) is consistent with a semantic+keyword search tool.
Instruction Scope
The instructions only tell the agent/user to run the clawgrep binary, install it via cargo/npm/pip if missing, and to search user-specified paths. They do not instruct reading unrelated system files, exfiltrating data, or accessing unrelated credentials. The SKILL.md explicitly recommends not searching the whole filesystem.
Install Mechanism
There is no install spec in the registry (instruction-only). SKILL.md states the clawgrep binary will — on first run — download a ~30 MB ONNX model from Hugging Face and cache it locally. That network activity is expected for embedding-based tools but is the primary vector to be aware of (initial download only).
Credentials
The registry lists no required environment variables or credentials. SKILL.md documents optional env vars (CLAWGREP_CACHE_DIR, CLAWGREP_CONFIG, CLAWGREP_VERBOSE, NO_COLOR, RUST_LOG) that are reasonable for a CLI and are not required secrets.
Persistence & Privilege
The skill is not always-enabled and does not request any persistent platform privileges. It does reference a local cache directory (standard for this use case) but does not modify other skills or system-wide settings.
Assessment
This skill is a documentation/instruction wrapper for the external clawgrep CLI rather than executable code. Before installing/using clawgrep:

  1. Install the clawgrep binary from the repository linked in the SKILL.md, or build it locally (cargo/npm/pip as appropriate), and verify that the source is trustworthy.
  2. Be aware that the first run will download a ~30 MB ONNX model from Hugging Face (network activity) and cache it locally in ~/.cache/clawgrep or the platform-specific cache; if you need offline-only behavior, pre-download or audit that step.
  3. The skill itself does not request credentials or perform exfiltration, but the binary will read whatever paths you ask it to search, so avoid running it over the entire filesystem if you are concerned about sensitive data.
  4. If you need higher assurance, inspect the upstream GitHub project and/or build the binary from source before use.

Like a lobster shell, security has layers — review code before you run it.

latest: vk974kvvwtdf7r8dyp1p5cs2z8h83rbap
152 downloads · 0 stars · 2 versions
Updated 1mo ago
v0.1.4
MIT-0

clawgrep

Semantic + keyword file search. Output is grep-compatible. Runs fully locally. On first run, automatically downloads a small ONNX embedding model (~30 MB) from Hugging Face and caches it in the local cache directory. After that, all searches are offline.
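If you need to pre-populate or audit the model cache (for example for offline use), the cache location can be resolved ahead of time. A minimal Python sketch, assuming only what this page documents: the CLAWGREP_CACHE_DIR override and the ~/.cache/clawgrep default (the platform-specific cache directory may differ on macOS/Windows):

```python
import os
from pathlib import Path

def clawgrep_cache_dir() -> Path:
    """Resolve the cache directory clawgrep is documented to use:
    CLAWGREP_CACHE_DIR if set, otherwise ~/.cache/clawgrep."""
    override = os.environ.get("CLAWGREP_CACHE_DIR")
    if override:
        return Path(override)
    return Path.home() / ".cache" / "clawgrep"
```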

Check availability

clawgrep --version

If not found, install from the open-source repository using any of these methods (only one needed):

cargo install clawgrep         # Rust (recommended)
npm install -g clawgrep        # Node.js
pip install clawgrep           # Python
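If a script needs to perform this availability check itself, here is a small Python sketch; the `which_or_hint` helper is hypothetical, not part of clawgrep:

```python
import shutil

def which_or_hint(binary: str = "clawgrep") -> str:
    """Return the resolved path if `binary` is on PATH, otherwise a
    human-readable hint with one of the documented install commands."""
    path = shutil.which(binary)
    if path is not None:
        return path
    return f"{binary} not found; try: cargo install {binary}"
```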

Basic usage

clawgrep --no-color "query" <path>

Always pass --no-color when parsing output programmatically.

Search a workspace

clawgrep --no-color "previous discussion about auth flow" ./memory

Output format

Grep-compatible, one result per line, ranked by relevance (best first):

$ clawgrep --no-color "previous discussion about auth flow" ./memory
memory/2025-06-12-auth-design.md:8:Decided to use OAuth2 with PKCE for all client auth.
memory/2025-06-12-auth-design.md:14:Token refresh should be transparent to the user.
memory/2025-06-10-planning.md:3:Auth flow is the top priority for the sprint.
memory/archive/2025-05-session-notes.md:42:Discussed moving auth to a separate service.
memory/archive/2025-05-session-notes.md:87:Need to revisit token expiry policy.

Each line is file:line:text. Context lines (from -C) use - as the separator instead: file-line-text.
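Because the output is grep-compatible, it can be parsed with a simple pattern. An illustrative Python sketch (the `Hit` type and `parse_hits` helper are assumptions, not part of clawgrep); it deliberately skips -C context lines, since their dash separator is ambiguous in dashed filenames like those above:

```python
import re
from typing import NamedTuple

class Hit(NamedTuple):
    file: str
    line: int
    text: str

# Match lines are file:line:text; the file field is taken as everything
# before the first colon, and the line field must be numeric.
_MATCH = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+):(?P<text>.*)$")

def parse_hits(output: str) -> list[Hit]:
    """Parse clawgrep's match lines, which are ranked best first.
    Context lines (file-line-text) are skipped: a dash separator is
    ambiguous in paths like 2025-06-12-auth-design.md."""
    hits = []
    for raw in output.splitlines():
        m = _MATCH.match(raw)
        if m:
            hits.append(Hit(m["file"], int(m["line"]), m["text"]))
    return hits
```

Since results are ranked best first, `parse_hits(output)[0]` is the top match.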

Exit codes

Code  Meaning
0     Match found
1     No match
2     Error

Same as grep. Use -q for existence checks without output.
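Since clawgrep documents grep's exit-code convention, a wrapper can rely on status alone instead of parsing stdout. An illustrative Python sketch (`match_exists` is a hypothetical helper; because the convention is shared, the same wrapper works for plain grep):

```python
import subprocess

def match_exists(cmd: list[str]) -> bool:
    """Run a grep-style command and interpret its exit status:
    0 -> a match was found, 1 -> no match, anything else -> error.
    For clawgrep, cmd would be e.g.
    ["clawgrep", "--no-color", "-q", query, path]."""
    proc = subprocess.run(
        cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    )
    if proc.returncode in (0, 1):
        return proc.returncode == 0
    raise RuntimeError(f"{cmd[0]} exited with code {proc.returncode}")
```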

Choosing search mode

Default weights: 70% semantic, 30% keyword.

Concept search (don't know exact wording):

clawgrep --no-color "decision about migration strategy" ./memory

Exact identifier search (note IDs, tags, serial numbers):

clawgrep --no-color --keyword-weight 0.8 --semantic-weight 0.2 "PROJ-1042" ./memory
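To build intuition for the weights, here is an illustrative blend of two normalized scores. This is a sketch of a weighted sum under the documented 70/30 default, not clawgrep's actual ranking code:

```python
def blended_score(semantic: float, keyword: float,
                  semantic_weight: float = 0.7,
                  keyword_weight: float = 0.3) -> float:
    """Combine two scores in [0, 1] with the documented default
    weights (70% semantic, 30% keyword). Illustrative only."""
    assert abs(semantic_weight + keyword_weight - 1.0) < 1e-9
    return semantic_weight * semantic + keyword_weight * keyword
```

With the weights flipped toward keyword (as in the PROJ-1042 example), a literal hit can outrank a semantically close but inexact one.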

Key flags

Flag            Purpose
-k N            Number of results (default: 5)
-C N            Context lines before and after
-l              Print only matching filenames
-q              Quiet; just set exit code
--show-score    Append relevance score
--path-boost N  Boost filename matches (>1.0 = higher)
--min-score N   Filter low-relevance results (0.0–1.0)

See CLI reference for all flags.

Best practices

  1. Always use --no-color when parsing output.
  2. Keep -k small (3–5) to reduce output. Increase only when needed.
  3. Check exit codes instead of parsing stdout when possible.
  4. Let the cache persist — don't use --no-cache unless searching throwaway content. First run indexes; subsequent runs are fast.
  5. Search the narrowest relevant directory, not the whole filesystem.

References (advanced, usually not needed)

The information above should be sufficient for normal use. Only load these if you run into problems or need flags not listed above:

  • CLI reference — all flags, config file format, grep compatibility
  • Examples — more input/output examples for edge cases
