index1
v2.0.3
AI memory system for coding agents — code index + cognitive facts, persistent across sessions.
Security Scan
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description (code memory + search) matches the instructions: installing/running an index1 binary, creating a project .mcp.json, indexing local source/docs, and optionally configuring an embedding backend. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
Instructions stay within the stated purpose (indexing, recall, reindex, web UI). However, they include potentially impactful operations: 'index1 setup' (claims to auto-configure hooks and MCP), writing/creating .mcp.json and edits to project README/CLAUDE.md, starting a web UI that opens a local port, and indexing project files (reads repository data). These are expected for this tool but worth awareness.
Install Mechanism
The skill is instruction-only and recommends installing index1 via pipx/pip or npx and suggests running a remote installer (curl -fsSL https://ollama.com/install.sh | sh) for an optional backend. npx and curl|sh run remote code and are higher-risk operationally; the SKILL.md does not provide hashes or pinned release URLs to verify installers.
Credentials
No environment variables, credentials, or config paths are requested by the skill. Optional integration with Ollama (a local service) is described; that is proportional to the optional embedding backend and does not introduce unexplained credential access.
Persistence & Privilege
always is false and model invocation is allowed (normal). The tool will create/modify project files (.mcp.json, suggested CLAUDE.md edits) and can start a persistent local process/web UI — expected for an indexing service but worth reviewing. 'index1 setup' could make automatic config changes; the instructions do not show an explicit review step.
Assessment
This skill appears to do what it says (a local code/document index + cognition tools), but take these precautions before installing:
1. Inspect the index1 package source on PyPI/npm and prefer trusted, pinned releases over running unverified code via npx.
2. Avoid running curl | sh without verification; fetch installers manually and check signatures or vendor docs.
3. Review what 'index1 setup' will change (back up project files first); it can write .mcp.json and modify CLAUDE.md.
4. Running the web UI opens a local HTTP port; run it in a safe network environment.
5. If you pull large remote models or configure an external embedding service, understand whether any project data will be sent to that service.
For higher assurance, run installation and initial indexing in an isolated environment or container, and review the files the tool writes.
index1
AI memory system for coding agents with BM25 + vector hybrid search. Provides 6 MCP tools for intelligent code/doc search and cognitive fact recording.
What it does
- Dual memory: corpus (code index) + cognition (episodic facts)
- Hybrid search: BM25 full-text + vector semantic search with RRF fusion
- Structure-aware chunking: Markdown, Python, Rust, JavaScript, plain text
- MCP Server: 6 tools (recall, learn, read, status, reindex, config)
- CJK optimized: Chinese/Japanese/Korean query detection with dynamic weight tuning
- Built-in ONNX embedding: Vector search works out of the box, no Ollama required
- Graceful degradation: Works without any embedding service (BM25-only mode)
Install
# Recommended
pipx install index1
# Or via pip
pip install index1
# Or via npm (auto-installs Python package)
npx index1@latest
One-click plugin setup:
index1 setup # Auto-configure hooks + MCP for Claude Code
Verify:
index1 --version
index1 doctor # Check environment
Setup MCP
Create .mcp.json in your project root:
{
"mcpServers": {
"index1": {
"type": "stdio",
"command": "index1",
"args": ["serve"]
}
}
}
If index1 is not in PATH, use the full path reported by which index1.
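When the MCP host can't find the binary on its PATH, the same config can point at an absolute path instead. The path below is a placeholder; substitute the output of which index1 on your machine.

```json
{
  "mcpServers": {
    "index1": {
      "type": "stdio",
      "command": "/home/you/.local/bin/index1",
      "args": ["serve"]
    }
  }
}
```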
Add Search Rules
Add to your project's .claude/CLAUDE.md:
## Search Strategy
This project has index1 MCP Server configured (recall + 5 other tools). When searching code:
1. Known identifiers (function/class/file names) -> Grep/Glob directly (4ms)
2. Exploratory questions ("how does XX work") -> recall first, then Grep for details
3. CJK query for English code -> must use recall (Grep can't cross languages)
4. High-frequency keywords (50+ expected matches) -> prefer recall (saves 90%+ context)
Impact:
Without rules: Grep "search" -> 881 lines -> 35,895 tokens
With rules: recall -> 5 summaries -> 460 tokens (97% savings)
Index Your Project
index1 index ./src ./docs # Index source and docs
index1 status # Check index stats
index1 search "your query" # Test search
Optional: Multilingual Enhancement
index1 v2 has built-in ONNX embedding (bge-small-en-v1.5). For better multilingual support:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nomic-embed-text # Standard, 270MB
# or
ollama pull bge-m3 # Best for CJK, 1.2GB
index1 config embed_backend ollama
index1 doctor # Verify setup
Without Ollama, ONNX embedding provides vector search out of the box.
Web UI
index1 web # Start Web UI on port 6888
index1 web --port 8080 # Custom port
MCP Tools Reference
| Tool | Description |
|---|---|
| recall | Unified search — code + cognitive facts, BM25 + vector hybrid |
| learn | Record insights, decisions, lessons learned (auto-classify + dedup) |
| read | Read file content + index metadata |
| status | Index and cognition statistics |
| reindex | Rebuild index for a path or collection |
| config | View or modify configuration |
Troubleshooting
| Issue | Fix |
|---|---|
| Tools not showing | Check .mcp.json format and index1 path |
| AI doesn't use recall | Add search rules to CLAUDE.md |
| command not found | Use full path from which index1 |
| Chinese search returns 0 | Install Ollama + bge-m3 model |
| No vector search | Built-in ONNX should work; run index1 doctor |