MNN Local Knowledge Base

v1.0.0

Local vector knowledge base with GraphRAG retrieval (vector + BM25 + knowledge graph). Use this skill when the user mentions: "查知识库", "加入知识库", "记住这个", "save...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (local MNN KB, GraphRAG) match the delivered artifacts: a Python implementation, a CLI, and instructions for building and querying a local knowledge base. The required components (MNN embedding backend, text parsers, optional LLM client) are appropriate for the stated functionality.
Instruction Scope
SKILL.md/README instruct the agent to index local files, insert notes, and return retrieved context. The README and code also permit using an LLM for generation (configurable via config.json / --no-llm). This means retrieved private context may be sent to a remote OpenAI-compatible endpoint if you enable LLM answering; that is a normal feature for RAG, but an important privacy consideration. There is a small inconsistency: SKILL.md explicitly states that 'no LLM call is made inside this tool' for kb_query, yet other docs and the config include an llm_api section and examples that perform LLM generation. Confirm the desired behavior (use --no-llm if you want pure local retrieval).
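The privacy boundary described above can be sketched as a small gating step. This is a hypothetical simplification; the function and flag names are illustrative, not the skill's actual API:

```python
def answer(query: str, retrieved_context: list[str], no_llm: bool = False) -> str:
    """Return retrieved context directly, or hand it to a remote LLM.

    With no_llm=True nothing leaves the machine; otherwise the query and
    context would be sent to the configured OpenAI-compatible endpoint.
    """
    if no_llm:
        # Pure local retrieval: just return the concatenated context.
        return "\n\n".join(retrieved_context)
    # Illustrative only: a real implementation would POST the query plus
    # context to llm_api.base_url, authenticated with llm_api.api_key.
    raise NotImplementedError("remote LLM call omitted in this sketch")


print(answer("what is MNN?", ["MNN is an on-device inference engine."], no_llm=True))
```

The point of the sketch: everything after the `if no_llm` branch is where data leaves the machine, which is why the review recommends --no-llm for purely local use.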
Install Mechanism
No formal install spec in the registry (instruction-only), but the code auto-downloads an embedding model (~400 MB) from ModelScope (modelscope.cn) on first run via urllib.request. Downloading from ModelScope is expected for an embedding backend, but note that this writes large model files to disk. The skill also asks you to run pip install -r requirements.txt (standard).
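The first-run download behaviour described above follows a common pattern: fetch only if the file is missing. A minimal sketch, assuming nothing about the skill's actual URL or paths (both are illustrative):

```python
import os
import urllib.request


def ensure_model(url: str, dest: str) -> str:
    """Download the embedding model on first run only.

    If dest already exists on disk, nothing is fetched; otherwise the
    file is downloaded in one blocking call and written to dest.
    """
    if not os.path.exists(dest):
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        urllib.request.urlretrieve(url, dest)  # blocking download to disk
    return dest
```

A pattern like this is why the review flags the first run specifically: subsequent runs reuse the on-disk copy and make no network request.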
Credentials
The skill does not require platform environment variables, but it expects a local config.json with an llm_api.api_key and base_url if you want LLM generation. Storing the API key in config.json (gitignored by the project) is consistent but means a secret is kept on disk. The presence of openai (or OpenAI-compatible) client is justified by the optional LLM step; however, enabling it will send KB context and user queries to whatever endpoint you configure. If you do not want outbound data leakage, use --no-llm or provide a local LLM endpoint you trust.
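For reference, a config.json along these lines is what the review describes. This is a plausible shape based only on the fields mentioned (an llm_api section with api_key and base_url); the skill's exact schema may differ, and the endpoint shown is a placeholder:

```json
{
  "llm_api": {
    "base_url": "https://your-llm-endpoint.example/v1",
    "api_key": "sk-REPLACE-ME"
  }
}
```

Because this file holds a secret in plaintext on disk, treat it like any credentials file: keep it gitignored (as the project does) and prefer a low-privilege key or a local endpoint.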
Persistence & Privilege
always:false is set, and the skill does not request force-inclusion or system-wide config changes. It writes its own artifacts (assets/, knowledge_bases/, the downloaded model) in its directory and updates the model's llm_config.json for embedding tokenization alignment, which is expected for this use case. It does not modify other skills or system-wide agent settings.
Assessment
This skill appears to do what it says: build and query a local MNN-based KB and (optionally) call an LLM for answers. Before installing, consider:

1) Model download: the first run auto-downloads a ~400 MB embedding model from ModelScope (modelscope.cn) into the repo's assets directory; run this on a machine and network where you are comfortable downloading large binary files.
2) Secrets: the tool expects an API key in config.json for LLM generation; that file is written to disk (it is gitignored by the project). Do not put highly sensitive keys there unless you control the environment.
3) Data exfiltration: if you enable the LLM generation path (the default examples do), retrieved KB context and queries will be sent to the configured OpenAI-compatible endpoint. Use --no-llm or point to a trusted/local LLM if you need to keep KB content local.
4) Code review: if you have high security needs, inspect the code; the model download and LLM invocation are visible in scripts/py_mnn_kb.py, and no obfuscated endpoints or hidden backdoors were detected.
5) Isolation: run in an isolated environment (virtualenv / container) and inspect config.json and the repo before giving it access to private documents.
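The isolation advice in point 5 amounts to a few shell commands. The venv name is arbitrary, and the commented invocation is an assumption combining two facts from this review (the scripts/py_mnn_kb.py entry point and the --no-llm flag); check the skill's own docs for the real CLI:

```shell
# Create an isolated virtualenv before installing the skill's dependencies.
python3 -m venv mnn-kb-venv
./mnn-kb-venv/bin/python --version

# Inside the skill's repo you would then install and run it retrieval-only
# (hypothetical invocation; verify against the skill's README):
#   ./mnn-kb-venv/bin/pip install -r requirements.txt
#   ./mnn-kb-venv/bin/python scripts/py_mnn_kb.py --no-llm
```

Running from the venv keeps the skill's dependencies out of your system Python and makes the whole install trivially disposable.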


Tags: knowledge, latest, local, mnn, offline, retrieval

