LegalFrance
v1.0.1 · French legal assistant: RAG over consolidated French codes and statutes (LEGI/DILA). Use for French-law questions, article lookup, explanation of legislative texts, and legal synthesis with verifiable citations.
⭐ 0 · 702 · 0 current · 0 all-time
by @msgnoki
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw · Benign · high confidence
Purpose & Capability
Name/description match the code and scripts: ingestion from AgentPublic/legi, ChromaDB + SQLite FTS search, RAG prompt builder and helpers. No unrelated env vars, binaries, or external services are requested.
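The SQLite FTS side of that search setup can be sketched with Python's stdlib `sqlite3` alone (table name, column names, and sample articles below are illustrative, not the skill's actual schema; assumes your Python build includes the FTS5 extension, which most standard builds do):

```python
import sqlite3

# In-memory FTS5 index over article texts (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE articles USING fts5(ref, body)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?)",
    [
        ("Code civil, art. 1240", "Tout fait quelconque de l'homme qui cause un dommage oblige a le reparer."),
        ("Code civil, art. 1241", "Chacun est responsable du dommage cause par sa faute."),
    ],
)

def fts_search(query: str, k: int = 5):
    """Return the top-k matching articles, ranked by BM25 (lower = better)."""
    return conn.execute(
        "SELECT ref, body FROM articles WHERE articles MATCH ? "
        "ORDER BY bm25(articles) LIMIT ?",
        (query, k),
    ).fetchall()
```

In the skill itself this keyword index complements ChromaDB's vector search; results from both sides would then be merged before prompt construction.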
Instruction Scope
SKILL.md instructs running the ingest/search/one_shot scripts and requires user confirmation before ingestion (which downloads ~2 GB). The code builds a strong SYSTEM_PROMPT for the LLM, which triggered a 'system-prompt-override' pattern. That is expected for a RAG assistant, but be aware that the skill supplies explicit system-level instructions that will guide any LLM used with the prompts.
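The SYSTEM+USER prompt assembly described above can be sketched as follows (the wording and function names are hypothetical; the actual ask.py differs):

```python
# Illustrative system prompt; the skill's real SYSTEM_PROMPT is longer
# and more constraining.
SYSTEM_PROMPT = (
    "Tu es un assistant juridique francais. Reponds uniquement a partir "
    "des extraits fournis et cite chaque article utilise."
)

def build_messages(question: str, chunks: list) -> list:
    """Pack retrieved article chunks and the question into chat messages."""
    context = "\n\n".join(f"[{c['ref']}]\n{c['text']}" for c in chunks)
    user = f"Extraits:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user},
    ]
```

Whatever LLM consumes these messages inherits the system prompt's constraints, which is exactly why the scanner flags (and then clears) the pattern.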
Install Mechanism
There is no automated install spec (instruction-only), which lowers automated install risk. Running the scripts requires third-party Python packages (datasets, chromadb, sentence_transformers) and will download a large embedding model (BAAI/bge-m3) and the HF dataset. Downloads come from known hosts (HuggingFace, model hub) rather than arbitrary URLs, but expect heavy network and disk activity.
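Given the heavy downloads, it is worth checking free disk space before running the ingest; a minimal stdlib check (the 4 GB threshold is an assumption covering the ~2 GB dataset plus model and index overhead):

```python
import shutil

def enough_disk(path: str = ".", need_gb: float = 4.0) -> bool:
    """Return True if `path`'s filesystem has at least `need_gb` GB free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= need_gb * 1024**3
```

Running this once before the ingest script avoids a partial download failing midway through.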
Credentials
The skill declares no required environment variables or credentials (good). One caveat: downloading some models or private HF resources can require a HuggingFace token (e.g. HF_TOKEN) that is not declared here; this may cause runtime prompts or failures, but it is not evidence of hidden credential demands.
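If a download does demand authentication, a small helper can surface whether a token is already configured (HF_TOKEN and HUGGING_FACE_HUB_TOKEN are common conventions used by the HuggingFace tooling; the skill itself declares neither):

```python
import os
from typing import Optional

def hf_token() -> Optional[str]:
    """Return a configured HuggingFace token, or None if none is set."""
    return os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN")
```

Checking this up front lets you fail fast with a clear message instead of hitting an opaque 401 mid-ingest.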
Persistence & Privilege
The skill writes indexes to a local data/ directory (chroma_db and fts_index.db) as expected for a local RAG assistant. It does not request always:true or modify other skills or system-wide settings.
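The on-disk layout described above can be sketched as a small path helper (directory names match the report; the function itself is hypothetical):

```python
from pathlib import Path

def index_paths(base: str) -> dict:
    """Resolve the persistent index locations the skill writes under data/."""
    data = Path(base) / "data"
    return {
        "chroma": data / "chroma_db",     # ChromaDB vector store directory
        "fts": data / "fts_index.db",     # SQLite FTS index file
    }
```

Pointing `base` at a scratch directory is an easy way to keep the indexes out of your working tree.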
Scan Findings in Context
[system-prompt-override] expected: ask.py intentionally defines a detailed SYSTEM_PROMPT to constrain the LLM's behaviour for legal answers; this matches the skill's RAG goal and explains the pre-scan flag.
Assessment
This skill appears to do what it says: build a local RAG assistant over the LEGI dataset and produce SYSTEM+USER prompts for an LLM. Before installing or running:
(1) review the included scripts yourself, and run them in an isolated environment/VM if you are concerned;
(2) ensure you have 2+ GB free for the downloads, plus additional disk space for the indexes;
(3) be prepared to install Python packages (datasets, chromadb, sentence_transformers) and possibly provide a HuggingFace token if a model requires authentication;
(4) note that the skill writes persistent indexes under data/, so back up or choose an appropriate working directory;
(5) understand that the skill supplies a strict system prompt that will control downstream LLM outputs; this is normal for RAG, but worth reviewing if you plan to run the LLM with different safety settings.
Like a lobster shell, security has layers: review code before you run it.
latest: vk979f61qkw0n2ej4pvsskmhwv1812hpc
