Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

The Librarian

v1.0.1

Build and search lightweight quantized document indexes with TurboVec. Use when you need to create searchable indexes from documents for RAG applications wit...

by Enda (@rochyroch)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for rochyroch/thelibrarian.

Prompt Preview: Install & Setup
Install the skill "The Librarian" (rochyroch/thelibrarian) from ClawHub.
Skill page: https://clawhub.ai/rochyroch/thelibrarian
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install thelibrarian

ClawHub CLI


npx clawhub@latest install thelibrarian
Security Scan

Capability signals

Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (lightweight quantized document index/search) matches the included scripts: build_index.py, search.py, and a wrapper. Required libraries (turbovec, rank-bm25, flashrank, numpy, requests) and use of an embedding API are appropriate for the described functionality.
Instruction Scope
Runtime instructions and scripts operate only on user-supplied document directories and write index files to the specified output directory. The code makes network calls only to an embedding service (requests.post to an Ollama-style API URL). The SKILL.md mentions an OLLAMA_API environment variable in help text, but the Python scripts default to a hard-coded DEFAULT_OLLAMA_API and accept a --api flag — a minor mismatch in where the config is read from.
Install Mechanism
No install spec is provided (instruction-only install). The skill expects the user to create a local virtualenv and pip-install dependencies; nothing is downloaded or executed silently by an installer in the package itself.
Credentials
The skill requests no declared credentials or config paths. However, it posts document text to an embedding API (default: http://host.docker.internal:11434). This is necessary for embeddings but means the user must trust the endpoint they point to; SKILL.md/help references an OLLAMA_API env var but the scripts rely on a default or CLI flag, so confirm where embeddings will be sent.
Persistence & Privilege
The `always` flag is false, the skill does not require persistent platform privileges, and it confines file writes to the index output path. It does not modify other skills or system-wide configs.
Assessment
This skill appears coherent for building/searching TurboVec quantized indexes, but before installing:

1. Confirm where embeddings are sent. The scripts default to http://host.docker.internal:11434 (an Ollama-style local endpoint). If you run the tool, embedding text will be POSTed to whichever API URL you supply; point it to a trusted local service or a trusted remote provider.
2. SKILL.md/help mention an OLLAMA_API env var, but the scripts use a default and accept --api; set the --api flag or edit the code if you need a different endpoint.
3. Run the code in an isolated environment (dedicated venv/container) when indexing sensitive documents.
4. Review and vet third-party packages (turbovec, flashrank, rank-bm25) before pip installing them.
5. For high-risk documents (medical, legal, financial), follow the author's own advice and use a higher-accuracy/approved setup (e.g., FAISS) or ensure your embedding provider and runtime are fully trusted.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97emy75am974ssf7j3amkve1n85ayeh
79 downloads
0 stars
2 versions
Updated 5d ago
v1.0.1
MIT-0

The Librarian

Lightweight document search with TurboVec quantization. Build semantic search indexes that run on minimal hardware.

Author: RandTrad Consulting — Document Intelligence for SMEs
License: MIT — Free for personal and commercial use with attribution


What It Does

  • Builds quantized vector indexes from Markdown/text documents
  • Supports hybrid search (vector + BM25 keyword matching)
  • Optional Flashrank reranking for improved accuracy
  • Chunk expansion for surrounding context
  • 8-16x smaller indexes than FAISS
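
Hybrid search blends a semantic similarity score with a BM25 keyword score for the same chunks. As a rough illustration (not the skill's actual code; the `alpha` weight is a made-up parameter), score fusion can look like:

```python
def hybrid_scores(vec_scores, bm25_scores, alpha=0.7):
    """Blend vector-similarity and BM25 scores for the same chunks.

    Each list is min-max normalized to [0, 1] first so the two
    score scales are comparable; alpha weights the semantic side.
    """
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

    v, k = norm(vec_scores), norm(bm25_scores)
    return [alpha * a + (1 - alpha) * b for a, b in zip(v, k)]

# Chunk 0 wins on semantics, chunk 1 on keywords; blending favors chunk 0.
blended = hybrid_scores([0.9, 0.2, 0.5], [1.0, 4.0, 2.0])
```

TurboVec's actual fusion weights may differ; the point is that both signals are put on a common scale before combining.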

When to Use

| Use Case | Choose The Librarian |
| --- | --- |
| Resource-constrained hardware | ✅ Runs on Raspberry Pi, 512MB RAM |
| Personal knowledge base | ✅ Zero infrastructure |
| Embedded/offline deployment | ✅ No cloud, no database |
| 100K+ documents on limited hardware | ✅ Fits where FAISS doesn't |
| Medical/legal records | ❌ Use FAISS instead |
| Maximum accuracy required | ❌ Use FAISS + Flashrank |

Accuracy: ~97-98% of FAISS for 4-bit quantization. Top results may occasionally swap ranking.

Quick Start

Prerequisites

# Install BLAS library (required for TurboVec)
sudo apt install libblas3

# Create venv and install dependencies
cd /path/to/the-librarian
python3 -m venv venv
source venv/bin/activate
pip install turbovec numpy requests rank-bm25 flashrank

Build an Index

# Using the wrapper (recommended)
./scripts/librarian build /path/to/documents/ index/my_library

# With options
./scripts/librarian build /path/to/docs/ index/my_library --bits 3 --chunk-size 800

# Direct Python
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libblas.so.3 \
  python scripts/build_index.py --input /path/to/docs/ --output index/my_library
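
The --chunk-size option controls how documents are split before embedding. The actual splitter lives in build_index.py; as a sketch of the general idea, fixed-size chunking with a character overlap looks like:

```python
def chunk_text(text, chunk_size=800, overlap=100):
    # Slide a chunk_size-character window across the text; each
    # window overlaps the previous one by `overlap` characters so
    # text cut at a boundary still appears intact in some chunk.
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, len(text), step)
            if text[i:i + chunk_size]]

chunks = chunk_text("x" * 2000)  # windows start at 0, 700, 1400
```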

Search

# Pure vector search
./scripts/librarian search "habit formation" index/my_library

# Hybrid (vector + BM25)
./scripts/librarian search "habit formation" index/my_library --hybrid

# Hybrid + rerank (best accuracy)
./scripts/librarian search "habit formation" index/my_library --hybrid --rerank

# With context expansion
./scripts/librarian search "habit formation" index/my_library --hybrid --rerank --expand 1

# JSON output
./scripts/librarian search "habit formation" index/my_library --json
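
The --expand 1 flag returns each hit together with its neighboring chunks from chunks.json for surrounding context. A minimal sketch, assuming chunks are stored as an ordered list (the data layout here is illustrative):

```python
def expand_hit(chunks, hit_index, window=1):
    # Return the hit chunk plus `window` chunks on either side,
    # clamped so we never index past the ends of the list.
    lo = max(0, hit_index - window)
    hi = min(len(chunks), hit_index + window + 1)
    return chunks[lo:hi]

chunks = ["intro", "cue", "routine", "reward"]
context = expand_hit(chunks, 2)  # hit on "routine", one neighbor each side
```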

Search Modes

| Mode | Time | Accuracy | Use Case |
| --- | --- | --- | --- |
| Vector only | ~130ms | Good | Semantic concepts, synonyms |
| Hybrid | ~140ms | Better | Combines semantic + exact keywords |
| Hybrid + rerank | ~320ms | Best | Maximum precision |

Bit Width Options

| Bits | Compression | Accuracy | Use Case |
| --- | --- | --- | --- |
| 4-bit | 8x | ~97-98% | Default, best balance |
| 3-bit | 10.7x | ~95-96% | Tight memory |
| 2-bit | 16x | ~93-95% | Extreme compression |
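
The compression factors above follow directly from the bit widths: a float32 embedding stores 32 bits per dimension, so quantizing to b bits shrinks the vector payload by roughly 32/b (ignoring per-vector scale/offset metadata):

```python
# Compression ratio relative to float32 (32 bits per dimension).
ratios = {bits: 32 / bits for bits in (4, 3, 2)}
# 4-bit -> 8.0x, 3-bit -> ~10.7x, 2-bit -> 16.0x, matching the table.
```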

File Structure

the-librarian/
├── SKILL.md
├── scripts/
│   ├── librarian           # Wrapper script (handles LD_PRELOAD)
│   ├── build_index.py      # Build quantized index
│   └── search.py           # Search with hybrid + rerank
└── references/
    └── quantization.md     # How TurboVec compression works

Index Files

After building, you'll have:

index/my_library/
├── library.qindex      # TurboVec quantized index
├── chunks.json         # Document chunks with metadata
├── bm25_index.pkl      # BM25 keyword index (if rank-bm25 installed)
└── stats.json          # Build statistics

Accuracy Guidance

For critical applications (medical, legal, financial):

Use FAISS instead. The ~2-3% ranking variance in TurboVec is acceptable for personal knowledge bases, parts catalogs, and general document search, but not for applications where missing a result has consequences.

For personal/team use:

TurboVec is ideal. The accuracy difference is negligible for most queries, and the size savings enable deployment on hardware that couldn't run FAISS at all.

Performance Comparison

| Metric | FAISS | TurboVec 4-bit |
| --- | --- | --- |
| Cold query | ~150-165ms | ~150-165ms |
| Warm query | ~35-40ms | ~130-135ms |
| Pure search | ~10-12ms | ~10-15ms |
| Index size | 100% | ~7-12% |
| RAM required | High | Low |

Note: Both spend ~120-140ms generating embeddings via Ollama. The search difference is minimal.

References

  • references/quantization.md - Technical details on how TurboVec compression works

Author

RandTrad Consulting — Document Intelligence consultancy for SMEs

Built by Enda Rochford — RandTrad Consulting

License

MIT License — Free for personal and commercial use with attribution.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files, to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
