RAGFlow
v1.0.8

Use for RAGFlow dataset tasks: create, list, inspect, update, or delete datasets; upload, list, update, or delete documents; start or stop parsing; check par...
MIT-0
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
Name/description match the included scripts: dataset CRUD, document upload/update/delete, parsing control, status checks, search, and listing configured models. Declared requirements (python3, RAGFLOW_API_URL, RAGFLOW_API_KEY) are appropriate and proportionate for an HTTP API client to a RAGFlow service.
Instruction Scope
SKILL.md directs the agent to run only the bundled scripts and to prefer --json. The scripts operate against the declared RAGFLOW_API_URL and use the API key for Authorization. The scripts legitimately read files for uploads and accept @path JSON inputs for update operations; this is expected upload/update behavior, but it means the skill can read and transmit any local file the agent is asked to upload or reference via @path. Guardrails require explicit confirmation for deletes, which is appropriate.
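The @path convention discussed above typically works like the sketch below. This is a hypothetical reconstruction, not the bundle's actual code; the function name and behavior are assumptions about how such scripts commonly handle it:

```python
import json

def load_json_arg(arg: str):
    """Parse an update payload given either inline JSON or an @path reference.

    Hypothetical sketch: if the argument starts with '@', the remainder is
    treated as a local file path and its contents are read from disk, which
    is why any file the agent is pointed at can be read and transmitted.
    """
    if arg.startswith("@"):
        with open(arg[1:], "r", encoding="utf-8") as f:
            return json.load(f)
    return json.loads(arg)
```

This is the behavior to keep in mind during review: an inline argument stays local to the command line, while an @path argument pulls arbitrary file contents into the request.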
Install Mechanism
No install spec; this is an instruction+script bundle that requires python3 on PATH. Nothing is downloaded or written during install, lowering the installation risk.
Credentials
Only RAGFLOW_API_URL and RAGFLOW_API_KEY are required, and primaryEnv is the API key; this is appropriate for communicating with a RAGFlow HTTP API. No unrelated secrets are requested. (Note: helper code such as common.py may read runtime config from args, env, or config files; the manifest references resolve_runtime_config/require_api_key, but the declared env requirements match the expected inputs.)
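A resolve_runtime_config/require_api_key pair usually amounts to something like the following. This is a hypothetical sketch, not the bundle's common.py; the bearer-token header format is an assumption about how the key is sent:

```python
import os

def resolve_runtime_config(env=None):
    """Resolve the RAGFlow endpoint and API key from the environment.

    Hypothetical sketch of the helpers the manifest references; the real
    code may also consult CLI arguments or a config file.
    """
    env = os.environ if env is None else env
    url = env.get("RAGFLOW_API_URL", "").rstrip("/")
    key = env.get("RAGFLOW_API_KEY", "")
    if not url:
        raise SystemExit("RAGFLOW_API_URL is not set")
    if not key:
        raise SystemExit("RAGFLOW_API_KEY is not set")
    # Assumed: the key is attached as a bearer token on every request.
    headers = {"Authorization": f"Bearer {key}"}
    return url, headers
```

The point for reviewers is that both values come only from the declared environment variables, so scoping and rotating the key is the main control you have.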
Persistence & Privilege
always:false and no install hooks are present. The skill does not request system-wide persistence or changes to other skills. The agent may invoke the skill autonomously (disable-model-invocation is false), which is the platform default; this is not combined with other red flags here.
Assessment
This skill appears to be a straightforward client for a RAGFlow HTTP API. Before installing:
1. Ensure RAGFLOW_API_URL points to a trusted RAGFlow instance.
2. Limit the RAGFLOW_API_KEY scope and permissions, and rotate the key if possible.
3. Be cautious when asking the agent to upload files: the scripts will read and transmit any local file you provide or reference via @path, so do not pass sensitive local files unless intended.
4. SKILL.md requires explicit confirmation for deletes, but verify that your operational workflow enforces it.
5. If you do not want the agent to call this skill autonomously, disable autonomous invocation for this skill in your agent settings.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
Bins: python3
Env: RAGFLOW_API_URL, RAGFLOW_API_KEY
Primary env: RAGFLOW_API_KEY
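A minimal setup for these requirements might look like the following; the URL and key values are placeholders, not real endpoints or credentials:

```shell
# Placeholder values: point these at your own trusted RAGFlow instance
# and use a narrowly scoped, rotatable API key.
export RAGFLOW_API_URL="https://ragflow.example.com"
export RAGFLOW_API_KEY="ragflow-0000000000"

# The bundled scripts read both variables, e.g.:
#   python3 scripts/datasets.py list --json
```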
SKILL.md
RAGFlow Dataset And Retrieval
Use only the bundled scripts in scripts/.
Prefer --json so returned fields can be relayed exactly.
Follow reference.md for all user-facing output.
Use This Skill When
- the user wants to create, list, inspect, update, or delete RAGFlow datasets
- the user wants to upload, list, update, or delete documents in a dataset
- the user wants to start parsing, stop parsing, or check parse progress
- the user wants to retrieve chunks from one or more datasets
- the user wants to list configured RAGFlow models
Core Workflow
- Resolve the target dataset or document IDs first.
- Run the matching script from scripts/.
- Use --json unless a script only needs a simple text response.
- Return API fields exactly; do not guess missing details.
Common commands:
python3 scripts/datasets.py list --json
python3 scripts/datasets.py info DATASET_ID --json
python3 scripts/datasets.py create "Example Dataset" --description "Quarterly reports" --json
python3 scripts/update_dataset.py DATASET_ID --name "Updated Dataset" --json
python3 scripts/upload.py DATASET_ID /path/to/file.pdf --json
python3 scripts/upload.py list DATASET_ID --json
python3 scripts/update_document.py DATASET_ID DOC_ID --name "Updated Document" --json
python3 scripts/parse.py DATASET_ID DOC_ID1 [DOC_ID2 ...] --json
python3 scripts/stop_parse_documents.py DATASET_ID DOC_ID1 [DOC_ID2 ...] --json
python3 scripts/parse_status.py DATASET_ID --json
python3 scripts/search.py "query" --json
python3 scripts/search.py "query" DATASET_ID --json
python3 scripts/search.py --dataset-ids DATASET_ID1,DATASET_ID2 --doc-ids DOC_ID1,DOC_ID2 "query" --json
python3 scripts/search.py --retrieval-test --kb-id DATASET_ID "query" --json
python3 scripts/list_models.py --json
Guardrails
- For any delete action, list the exact items first and require explicit user confirmation before executing.
- Delete only by explicit dataset IDs or document IDs. If the user gives names or fuzzy descriptions, resolve IDs first.
- Upload does not start parsing. Start parsing only when the user asks for it.
- parse.py returns immediately after the start request; use parse_status.py for progress.
- For progress requests, use parse_status.py on the most specific scope available:
  - dataset specified: inspect that dataset
  - document IDs specified: pass --doc-ids
  - no dataset specified: list datasets first, then aggregate status across datasets
- If a parse status result includes progress_msg, surface it directly. For FAIL, treat it as the primary error detail.
- Use --retrieval-test only for single-dataset debugging or when the user explicitly asks for that endpoint.
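The "aggregate status across datasets" step in the guardrails above could be sketched as follows. The field names (documents, status, id) are assumptions about the JSON shape, not taken from the RAGFlow API; progress_msg and FAIL are the names the guardrails themselves use:

```python
from collections import Counter

def aggregate_parse_status(datasets):
    """Aggregate per-document parse states across several datasets.

    Hypothetical sketch: assumes each dataset dict carries a 'documents'
    list whose entries have 'id', 'status', and optionally 'progress_msg'.
    """
    counts = Counter()
    failures = []
    for ds in datasets:
        for doc in ds.get("documents", []):
            status = doc.get("status", "UNKNOWN")
            counts[status] += 1
            if status == "FAIL":
                # Per the guardrails, surface progress_msg as the error detail.
                failures.append((doc.get("id"), doc.get("progress_msg", "")))
    return dict(counts), failures
```

Keeping the FAIL entries separate makes it easy to report progress_msg verbatim rather than an inferred cause, which the output rules below require.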
Output Rules
- Follow reference.md.
- Use tables for 3+ items when possible.
- Preserve api_error, error, message, and related fields exactly as returned.
- Never fabricate progress percentages or inferred causes.
Files
15 total
