Install

```
openclaw skills install open-ragflow
```

RAGFlow: an open-source Retrieval-Augmented Generation (RAG) engine fusing RAG with Agent capabilities — deployment, configuration, management, and troubleshooting. Full-stack: Python backend (Flask), React/TypeScript frontend, Docker-deployed microservices.
Repository: `infiniflow/ragflow`. Prerequisite: `vm.max_map_count >= 262144` (Linux, for Elasticsearch).

```
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/docker
docker compose -f docker-compose.yml up -d
docker logs -f docker-ragflow-cpu-1  # wait for the banner, then log in
# Open http://YOUR_SERVER_IP in browser
```
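Waiting for the startup banner can be scripted instead of watched by hand. A minimal sketch, assuming the banner includes Flask's `* Running on` line (the marker shown in RAGFlow's quickstart output — adjust for your version) and that the container name is the hypothetical default used above:

```python
import subprocess

BANNER_MARKER = "* Running on"  # assumed startup line; check your version's logs

def server_ready(log_text: str, marker: str = BANNER_MARKER) -> bool:
    """Return True once the startup marker appears in the container logs."""
    return marker in log_text

def ragflow_ready(container: str = "docker-ragflow-cpu-1") -> bool:
    """Single check against `docker logs`; poll this in a loop if needed."""
    logs = subprocess.run(
        ["docker", "logs", container],
        capture_output=True, text=True,
    ).stdout
    return server_ready(logs)
```

Until the marker appears, the web UI typically reports a `network abnormal` error (see Troubleshooting below).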
Configure LLM API keys in `docker/service_conf.yaml.template` under `user_default_llm`, then restart:

```
docker compose -f docker-compose.yml up -d
```
If Docker Hub is slow, point `RAGFLOW_IMAGE` at a mirror:

- `swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow`
- `registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow`

Set a HuggingFace mirror if needed: `HF_ENDPOINT=https://hf-mirror.com` in `docker/.env`.
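Both the image mirror and the HuggingFace endpoint are plain `key=value` lines in `docker/.env`. A hypothetical helper (not part of RAGFlow) that sets or replaces such a variable in dotenv-style text:

```python
def set_env_var(env_text: str, key: str, value: str) -> str:
    """Set key=value in dotenv-style text: replace an existing line or append."""
    lines = env_text.splitlines()
    replaced = False
    for i, line in enumerate(lines):
        # Match only the exact key on the left of the first '='.
        if line.split("=", 1)[0].strip() == key:
            lines[i] = f"{key}={value}"
            replaced = True
    if not replaced:
        lines.append(f"{key}={value}")
    return "\n".join(lines) + "\n"
```

For example, `set_env_var(text, "HF_ENDPOINT", "https://hf-mirror.com")` adds the mirror line if it is missing and overwrites it if present.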
| File | Scope |
|---|---|
| `docker/.env` | Environment variables: `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, `MINIO_PASSWORD`, `DOC_ENGINE`, `RAGFLOW_IMAGE`, `HF_ENDPOINT` |
| `docker/service_conf.yaml.template` | Backend services: LLM factory, API keys, embedding/rerank/ASR/TTS models |
| `docker/docker-compose.yml` | Full stack orchestration |
| `docker/docker-compose-base.yml` | Infrastructure services only (dev mode) |
In `docker/service_conf.yaml.template`:

```yaml
user_default_llm:
  factory: "OpenAI"  # or "DeepSeek", "Gemini", etc.
  api_key: "sk-..."
  base_url: "https://api.openai.com/v1/"
```

Run `docker compose -f docker-compose.yml up -d` to apply.

All CLI commands end with `;`. Full reference: `references/cli-reference.md`.
```
# Datasets
LIST DATASETS;
CREATE DATASET 'my_kb' WITH EMBEDDING 'text-embedding-ada-002' PARSER 'pdf';
DROP DATASET 'my_kb';
LIST FILES OF DATASET 'my_kb';

# Documents
IMPORT '/path/to/doc.pdf' INTO DATASET 'my_kb';
PARSE DATASET 'my_kb' SYNC;
PARSE DATASET 'my_kb' ASYNC;

# Search
SEARCH 'What is RAG?' ON DATASETS 'my_kb';

# Models
CREATE MODEL PROVIDER 'openai' 'sk-...';
SET DEFAULT LLM 'gpt-4';
LIST MODEL PROVIDERS;
LIST DEFAULT MODELS;

# Agents & Chats
LIST AGENTS;
LIST CHATS;
CREATE CHAT 'my_session';
DROP CHAT 'my_session';

# Connection
PING;
SHOW CURRENT USER;
```
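When driving the CLI from a script, the required trailing `;` is easy to forget. A hypothetical normalizer (not part of RAGFlow) that cleans up a statement before sending it:

```python
def normalize_stmt(stmt: str) -> str:
    """Trim whitespace and ensure the statement ends with the required ';'."""
    stmt = stmt.strip()
    return stmt if stmt.endswith(";") else stmt + ";"
```

For example, `normalize_stmt("LIST DATASETS")` yields `"LIST DATASETS;"`, ready to pass to the CLI.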
Switching the document engine to Infinity:

```
docker compose -f docker/docker-compose.yml down -v  # WARNING: clears data
# Edit docker/.env: set DOC_ENGINE=infinity
docker compose -f docker-compose.yml up -d
```

Infinity is lighter weight, but Linux/arm64 is not officially supported.
```
Web UI (React + TS + Vite + shadcn) → Flask API (/api/) → RAG Core (/rag/) + Agent (/agent/)
                                            ↓
            Infrastructure: MySQL + Elasticsearch/Infinity + Redis + MinIO
```

- API (`/api/`): Flask blueprints — kb, dialog, document, canvas, file, user
- RAG core (`/rag/`): DeepDoc parsing, LLM/embedding/rerank abstractions, chunking, GraphRAG
- Agent (`/agent/`): canvas-based workflow builder with components (LLM, Retrieval, Code Executor, MCP, Search, SQL)
- Web (`/web/`): React 18 + TypeScript + Vite

See `references/architecture.md` for a detailed component breakdown.
Dev-mode setup (backend and frontend outside Docker, infrastructure in Docker):

```
git clone https://github.com/infiniflow/ragflow.git && cd ragflow
uv sync --python 3.12 && uv run python3 download_deps.py
docker compose -f docker/docker-compose-base.yml up -d
# Add to /etc/hosts: 127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
source .venv/bin/activate && export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
# Separate terminal:
cd web && npm install && npm run dev
```
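The `/etc/hosts` step maps the compose service hostnames to localhost so the backend running outside Docker can reach them. A small sketch that renders that entry from the service list:

```python
# Service names from docker-compose-base.yml that the backend resolves by hostname.
DEV_SERVICES = ["es01", "infinity", "mysql", "minio", "redis", "sandbox-executor-manager"]

def hosts_line(ip: str = "127.0.0.1", services: list[str] = DEV_SERVICES) -> str:
    """Render the /etc/hosts entry mapping infrastructure hostnames to one IP."""
    return f"{ip} " + " ".join(services)
```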
| Problem | Fix |
|---|---|
| `network abnormal` browser error | Wait for Docker logs to show the RAGFlow banner — server is initializing |
| Docker pull timeout in China | Use `RAGFLOW_IMAGE` mirrors (Huawei Cloud / Alibaba Cloud) |
| HuggingFace unreachable | `export HF_ENDPOINT=https://hf-mirror.com` |
| ARM64 platform | Build the Docker image from source (no official ARM64 image) |
| Port conflict | Change `80:80` to `<PORT>:80` in `docker-compose.yml` |
| Elasticsearch exits with code 137 | Increase Docker memory allocation |
| `vm.max_map_count` too low | `sudo sysctl -w vm.max_map_count=262144` |
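The `vm.max_map_count` prerequisite can be checked before deploying rather than discovered when Elasticsearch crashes. A sketch that validates a `/proc/sys/vm/max_map_count` reading (assumes a Linux host):

```python
MIN_MAP_COUNT = 262144  # minimum required by Elasticsearch

def max_map_count_ok(procfs_value: str, minimum: int = MIN_MAP_COUNT) -> bool:
    """Check a /proc/sys/vm/max_map_count reading against the requirement."""
    return int(procfs_value.strip()) >= minimum

def check_host() -> bool:
    """Read the live value on a Linux host."""
    with open("/proc/sys/vm/max_map_count") as f:
        return max_map_count_ok(f.read())
```

If `check_host()` returns False, apply the `sysctl` fix from the table above (and persist it in `/etc/sysctl.conf`).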
- HTTP API: `http://SERVER_IP/api/` (Swagger docs at `/api/docs`)
- Python SDK: `sdk/python/`
- Admin CLI: `python admin/client/ragflow_cli.py <command>`