Install
openclaw skills install ragflow-runbook

End-to-end runbook for deploying, operating, troubleshooting, and monitoring RAGFlow (runtime ops only).

A practical runbook for deploying, operating, troubleshooting, and calling RAGFlow (Retrieval-Augmented Generation).
Goal: any agent should be able to bring RAGFlow up, diagnose failures, and call the API safely even without knowing the deployment details up front.
Before running any commands, confirm the following (missing any of these often leads to wrong assumptions):
- Host OS: Windows / WSL2 / Linux / macOS (client only)
- Deployment location (the directory containing docker-compose.yml)
- RAGFLOW_BASE_URL (e.g. http://localhost:9380 or an internal/Tailscale address)
- Security: never store or share API keys / DB passwords in plaintext (docs, repo, or chat).
Use environment variables so all agents can run the same commands:
- RAGFLOW_BASE_URL: prefer an internal/Tailscale URL, e.g. http://100.x.y.z:9380
- RAGFLOW_API_KEY: Bearer token (created in the RAGFlow Web UI)

Quick verification (separate liveness / readiness / auth; tolerate path differences across versions):
- GET $RAGFLOW_BASE_URL/openapi.json
- GET $RAGFLOW_BASE_URL/api/v1/openapi.json
- GET $RAGFLOW_BASE_URL/v1/system/ping
- GET $RAGFLOW_BASE_URL/v1/system/status

If these do not match your deployment, treat the returned openapi.json as the source of truth.
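A minimal shell sketch of setting these variables and running the checks (the host, port, and key values are placeholders for your deployment):

export RAGFLOW_BASE_URL="http://100.x.y.z:9380"   # prefer an internal/Tailscale address
export RAGFLOW_API_KEY="ragflow-..."              # from the Web UI; keep out of docs/repos/chat
# liveness (no auth)
curl -sS -o /dev/null -w "openapi.json     %{http_code}\n" "$RAGFLOW_BASE_URL/openapi.json"
curl -sS -o /dev/null -w "v1/system/ping   %{http_code}\n" "$RAGFLOW_BASE_URL/v1/system/ping"
# readiness / auth
curl -sS -o /dev/null -w "v1/system/status %{http_code}\n" \
  -H "Authorization: Bearer $RAGFLOW_API_KEY" "$RAGFLOW_BASE_URL/v1/system/status"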
This skill ships with its own ops helpers under scripts/:
- scripts/ragflow_ping.py: liveness + readiness
- scripts/ragflow_smoke.py: auth + API smoke (system-level only)
- scripts/ragflow_status.py: compact status summary
- scripts/ragflow_alert.py: send an ops alert via OpenClaw messaging

This skill is intentionally decoupled from any workspace-specific application content. It focuses only on RAGFlow runtime operations.
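Once the environment variables above are set, a quick way to exercise these helpers (a sketch; python3 on PATH and running from the skill directory are assumptions):

python3 scripts/ragflow_ping.py;   echo "ping exit=$?"
python3 scripts/ragflow_smoke.py;  echo "smoke exit=$?"
python3 scripts/ragflow_status.py; echo "status exit=$?"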
This section targets a brand-new machine. Goal: get to a working UI + API quickly: clone upstream docker bundle -> start -> create API key in UI -> validate via curl/scripts.
WSL2 (recommended: store files on a Windows drive like D:; run commands inside WSL2):
# WSL2
cd /mnt/d
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/docker
# Common requirement for some document engine profiles
sudo sysctl -w vm.max_map_count=262144 || true
# Default .env = elasticsearch + cpu
# To change ports/passwords/image versions: edit docker/.env
docker compose up -d
docker compose ps
Linux:
# Linux
sudo mkdir -p /opt && cd /opt
sudo chown -R "$USER" /opt
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/docker
sudo sysctl -w vm.max_map_count=262144 || true
docker compose up -d
docker compose ps
Next: open the Web UI (default http://<host>:80), finish initialization, create an API key, then validate using ## 3 and ## 8.
To avoid missing files or mismatched versions, use git clone and run from the upstream docker/ directory:
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/docker
# Optional: pin to a tag/commit for production
# git checkout <tag-or-commit>
The upstream docker/ folder typically includes:
- docker-compose.yml (often pulls in ./docker-compose-base.yml via include:)
- docker-compose-base.yml (backend services: database + cache + object storage + document engine)
- .env (default ports/passwords; change for production)
- service_conf.yaml.template (used to generate service_conf.yaml at container startup)
- entrypoint.sh (commonly started with flags like --enable-adminserver / --enable-mcpserver)
- nginx/ (for the built-in Web UI / reverse proxy)
- README.md (docker-specific docs)

Note: upstream explicitly warns that some compose variants (e.g. docker-compose-macos.yml) are not actively maintained. Do not use them unless you know why.
Profiles (COMPOSE_PROFILES): in the upstream .env defaults, COMPOSE_PROFILES is derived from the selected backend profiles (e.g. document engine + compute device), so you typically do not need to pass --profile manually; docker compose up -d will pick profiles from .env.
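For orientation, a hypothetical docker/.env excerpt showing the default-style profile selection (illustrative only; the authoritative keys and values are in the upstream file):

# docker/.env (illustrative only)
COMPOSE_PROFILES=elasticsearch,cpu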
Before starting (Linux/WSL2, for some document engine profiles):
cat /proc/sys/vm/max_map_count || true
sudo sysctl -w vm.max_map_count=262144 || true
Start:
# In ragflow/docker
# Optional: explicit profiles if you do not want to rely on COMPOSE_PROFILES
# docker compose --profile elasticsearch --profile cpu up -d
docker compose up -d
docker compose ps
Switch CPU/GPU (examples):
# Option 1: edit docker/.env
# DEVICE=gpu
# Option 2: override temporarily (do not modify files)
DEVICE=gpu docker compose up -d
Enable embeddings service (TEI): upstream suggests adding a tei profile to COMPOSE_PROFILES:
# Example:
# COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-cpu
# or:
# COMPOSE_PROFILES=${COMPOSE_PROFILES},tei-gpu
docker compose up -d
Validation: wait for key services to be running/healthy in docker compose ps, then run liveness/readiness (## 3) and API prefix detection (## 8).
Ports (.env): in upstream docker/.env (main branch), the exposed ports typically mean:

- Web UI: SVR_WEB_HTTP_PORT (default 80), SVR_WEB_HTTPS_PORT (default 443)
- API: SVR_HTTP_PORT (default 9380)
- Admin server: ADMIN_SVR_HTTP_PORT (default 9381)
- MCP server: SVR_MCP_PORT (default 9382)

Shortest path to a usable setup:

- Open the Web UI at http://<host>:${SVR_WEB_HTTP_PORT} (default http://<host>:80)
- Set RAGFLOW_BASE_URL=http://<host>:${SVR_HTTP_PORT} (default http://<host>:9380)
- Set RAGFLOW_API_KEY=ragflow-... (Bearer token; do not paste secrets into chat)

Then validate with liveness/readiness in ## 3.
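A quick check that both ports answer, using the upstream defaults (substitute your own host and .env values):

curl -sS -o /dev/null -w "UI  %{http_code}\n" "http://<host>:80/"
curl -sS -o /dev/null -w "API %{http_code}\n" "http://<host>:9380/openapi.json"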
Production warning: upstream .env explicitly warns against using default passwords. At minimum change ELASTIC_PASSWORD, MYSQL_PASSWORD, MINIO_PASSWORD, and REDIS_PASSWORD.
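One way to generate replacement values before the first start (a sketch; assumes openssl is available; the variable names mirror the warning above):

# print candidate secrets, then copy them into docker/.env (never commit or paste them into chat)
for var in ELASTIC_PASSWORD MYSQL_PASSWORD MINIO_PASSWORD REDIS_PASSWORD; do
  echo "$var=$(openssl rand -hex 16)"
done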
Prerequisites:

- Docker and Docker Compose v2 (so docker compose ... works)
- vm.max_map_count >= 262144 (required by some document engine profiles)

Checks:
docker --version
docker compose version
# Linux/WSL2 only
cat /proc/sys/vm/max_map_count
Temporary fix (Linux/WSL2):
sudo sysctl -w vm.max_map_count=262144
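To make the setting survive reboots on a Linux host (the file name below is arbitrary):

echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-ragflow.conf
sudo sysctl --system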
Prereq: you are in the directory that contains docker-compose.yml.
docker compose up -d
docker compose ps
Tail logs:
docker compose logs -f
# Status
docker compose ps
# Start/stop
docker compose up -d
docker compose down
# Restart
docker compose restart
# Logs (all / last N lines / last 1h)
docker compose logs
docker compose logs --tail=200
docker compose logs --since=1h
# Resource usage
docker stats
Note: service names differ across compose versions. If you see "no such service", run docker compose ps and use the actual service name.
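When scanning logs for the first actionable error, a simple filter helps (the pattern is only a starting point):

docker compose logs --tail=500 2>&1 | grep -iE "error|fatal|exception" | tail -n 50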
Elasticsearch fails to start or is unhealthy. Common causes: vm.max_map_count too small, low RAM, disk full.
# Linux/WSL2
cat /proc/sys/vm/max_map_count
sudo sysctl -w vm.max_map_count=262144
docker compose ps
docker compose logs --tail=200 <es-service>
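If Elasticsearch is reachable from the host, its cluster health endpoint is often the fastest signal (a sketch; the ES_PORT host mapping, the 1200 fallback, and ELASTIC_PASSWORD are assumptions based on the upstream .env, so confirm in your file):

curl -sS -u "elastic:${ELASTIC_PASSWORD}" "http://localhost:${ES_PORT:-1200}/_cluster/health?pretty"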
MySQL fails to start or rejects connections:

docker compose ps
docker compose logs --tail=200 <mysql-service>
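To verify that MySQL accepts the configured credentials (a sketch; the service name comes from docker compose ps, and root + MYSQL_PASSWORD mirrors the upstream .env convention):

docker compose exec <mysql-service> mysql -uroot -p"${MYSQL_PASSWORD}" -e "SELECT 1;"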
Fix the relevant settings in .env / compose and restart.

RAGFlow often exposes:

- a Web UI port (default 80)
- an API port (default 9380)
Recommended convention:
- RAGFLOW_BASE_URL points to the API root, e.g. http://localhost:9380
- Auth header: Authorization: Bearer $RAGFLOW_API_KEY

API prefix (v1 vs api/v1): across versions/deployments, RAGFlow may have two API prefixes:
- v1/... (often system/user/token)
- api/v1/... (often application endpoints)

Use this template to auto-detect the prefix (prefer v1, fall back to api/v1).
# 0) Ensure you are hitting the API port/host (not the UI port)
# Any 200 is OK
curl -sS -o /dev/null -w "%{http_code}\n" "$RAGFLOW_BASE_URL/openapi.json"
curl -sS -o /dev/null -w "%{http_code}\n" "$RAGFLOW_BASE_URL/api/v1/openapi.json"
# 1) Auto-detect prefix (prefer v1)
RAGFLOW_API_PREFIX=""
if curl -sS -o /dev/null -w "%{http_code}" "$RAGFLOW_BASE_URL/v1/system/ping" | grep -q "200"; then
  RAGFLOW_API_PREFIX="v1"
elif curl -sS -o /dev/null -w "%{http_code}" "$RAGFLOW_BASE_URL/api/v1/openapi.json" | grep -q "200"; then
  RAGFLOW_API_PREFIX="api/v1"
else
  echo "Cannot detect API prefix. Check base URL / reverse proxy / firewall."
  exit 1
fi
echo "Detected prefix: $RAGFLOW_API_PREFIX"
Ops-only examples (no application-level endpoints):
Example 1: system ping (no secrets in output)
curl -sS -o /dev/null -w "%{http_code}\n" "$RAGFLOW_BASE_URL/v1/system/ping"
Example 2: system status (auth)
curl -sS -X GET "$RAGFLOW_BASE_URL/v1/system/status" \
-H "Authorization: Bearer $RAGFLOW_API_KEY" | head
Example 3: fetch openapi schema (liveness)
curl -sS "$RAGFLOW_BASE_URL/openapi.json" | head
Note: If your deployment uses different paths, openapi.json is the source of truth. Avoid calling application-level endpoints from ops runbooks.
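If jq is available, a compact way to inspect the status payload without dumping secrets (assumes the response is a JSON object):

curl -sS -H "Authorization: Bearer $RAGFLOW_API_KEY" \
  "$RAGFLOW_BASE_URL/v1/system/status" | jq 'keys'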
- Fetch openapi.json first to confirm real paths/fields/version differences.
- Always send the Authorization header with the Bearer prefix.

Principle: stop services first, then back up volumes, then back up compose configs.
Backup (example; volume names depend on your environment):
mkdir -p backup
# stop services before copying volume data (see the principle above)
docker compose down
docker run --rm \
  -v <mysql_volume>:/source \
  -v "$PWD/backup":/backup \
  alpine tar czf /backup/mysql-data.tar.gz -C /source .
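To find the actual volume name to substitute for <mysql_volume>, list the volumes created by the compose project (names usually carry the project prefix):

docker volume ls
docker volume ls | grep -i ragflow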
Restore:
docker compose down
docker run --rm \
  -v <mysql_volume>:/target \
  -v "$PWD/backup":/backup \
  alpine tar xzf /backup/mysql-data.tar.gz -C /target
docker compose up -d
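Per the principle above, also back up the compose configs next to the volume archives (file names vary by version; adjust as needed):

cp docker-compose*.yml .env service_conf.yaml.template backup/ 2>/dev/null || true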
Do not use latest in production; pin image versions.

When a user says "RAGFlow is not working", use this order to reduce back-and-forth:
1. docker compose ps (which containers are unhealthy/exited)
2. docker compose logs --tail=200 <unhealthy-service> (capture the first actionable errors)
3. docker stats, disk space, vm.max_map_count (Linux/WSL2)
4. curl $RAGFLOW_BASE_URL/openapi.json
5. ragflow_ping.py or GET /v1/system/status (with Bearer)
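A one-shot triage snapshot that walks this order (a sketch; assumes you are in ragflow/docker and the environment variables from earlier are set):

docker compose ps
docker compose logs --tail=200 > /tmp/ragflow-triage-logs.txt 2>&1
docker stats --no-stream
df -h .
cat /proc/sys/vm/max_map_count 2>/dev/null || true
curl -sS -o /dev/null -w "openapi.json  %{http_code}\n" "$RAGFLOW_BASE_URL/openapi.json"
curl -sS -o /dev/null -w "system/status %{http_code}\n" \
  -H "Authorization: Bearer $RAGFLOW_API_KEY" "$RAGFLOW_BASE_URL/v1/system/status"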
This section documents an end-to-end operations workflow for running RAGFlow with OpenClaw. It is intentionally decoupled from any application-layer usage and focuses only on RAGFlow runtime operations.

Included: environment configuration, health checks (liveness / readiness / smoke), helper scripts, scheduled monitoring, and escalation artifacts.

Excluded (by design): application-level endpoints and any workspace-specific application content.
On the machine running OpenClaw, set:
- RAGFLOW_BASE_URL (prefer an internal/Tailscale address)
- RAGFLOW_API_KEY (Bearer token; never commit; do not paste into chat)

Recommended ops endpoints:
- GET $RAGFLOW_BASE_URL/openapi.json
- GET $RAGFLOW_BASE_URL/v1/system/status

If paths differ in your deployment, use openapi.json as the source of truth.
This skill includes built-in helpers under scripts/.
They are designed to be:

- driven entirely by environment variables (RAGFLOW_BASE_URL, RAGFLOW_API_KEY)
- safe to run unattended (compact output, no secrets printed)
- easy to automate (distinct exit codes for each failure mode)
Helpers:
- scripts/ragflow_ping.py — liveness + readiness check.
- scripts/ragflow_smoke.py — auth + API smoke test (system-level endpoints only).
- scripts/ragflow_status.py — call /v1/system/status and print a compact key summary.
- scripts/ragflow_alert.py — send an ops alert via the openclaw message send CLI.

(Prefer the skill-local scripts so the runbook works in any environment.)
scripts/ragflow_ping.py

- Env: RAGFLOW_BASE_URL; RAGFLOW_API_KEY (if set, the readiness check is also performed)
- Calls: GET {base_url}/openapi.json (no auth); GET {base_url}/v1/system/status (Bearer auth)
- Output: OK_LIVE (no API key set), OK_READY keys=..., LIVENESS_FAIL ..., READINESS_FAIL ...
- Exit codes: 0 OK, 2 liveness failed, 3 readiness failed

scripts/ragflow_smoke.py

- Env: RAGFLOW_BASE_URL, RAGFLOW_API_KEY
- Calls: GET {base_url}/v1/system/status (auth); GET {base_url}/v1/system/ping (auth or no-auth depending on deployment)
- Output: OK smoke, FAIL system/status ..., FAIL system/ping ...
- Exit codes: 0 OK, 2 system/status failed, 3 system/ping failed

scripts/ragflow_status.py

- Env: RAGFLOW_BASE_URL, RAGFLOW_API_KEY
- Calls: GET {base_url}/v1/system/status
- Output: OK keys=key1,key2,... (compact, no secrets)
- Exit codes: 0 OK, 2 HTTP failure, 3 invalid JSON

scripts/ragflow_alert.py
- Args: --title (required), --details (optional)
- Env: OPENCLAW_PRIMARY_CHAT_ID (default target)
- Sends the alert via openclaw message send ...; do not put secrets in --details.

Monitoring checklist:

- Connectivity: the monitoring host can reach RAGFLOW_BASE_URL over the network.
- Liveness: openapi.json responds with HTTP 200.
- Readiness: v1/system/status responds with HTTP 200 when authenticated.
- Smoke: scripts/ragflow_smoke.py (system endpoints only).
- Escalation artifacts: docker compose ps and docker compose logs --tail=200 <ragflow-service>.

Goal: provide copy/paste recipes. An agent can create these tasks when needed.
Ping every 10 minutes and alert on failure:
*/10 * * * * RAGFLOW_BASE_URL="http://127.0.0.1:9380" RAGFLOW_API_KEY="${RAGFLOW_API_KEY}" /usr/bin/python3 /path/to/skills/ragflow-runbook/scripts/ragflow_ping.py || /usr/bin/python3 /path/to/skills/ragflow-runbook/scripts/ragflow_alert.py --title "ping failed" --details "ragflow_ping.py exit=$?"
Smoke once per day at 06:05 and alert on failure:
5 6 * * * RAGFLOW_BASE_URL="http://127.0.0.1:9380" RAGFLOW_API_KEY="${RAGFLOW_API_KEY}" /usr/bin/python3 /path/to/skills/ragflow-runbook/scripts/ragflow_smoke.py || /usr/bin/python3 /path/to/skills/ragflow-runbook/scripts/ragflow_alert.py --title "smoke failed" --details "ragflow_smoke.py exit=$?"
Notes:
- Replace /path/to/skills/... with the real absolute path.

macOS (launchd): create two plist files (one for ping, one for smoke) and load them with launchctl.
Ping (every 10 minutes):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.openclaw.ragflow.ping</string>
<key>ProgramArguments</key>
<array>
<string>/usr/bin/python3</string>
<string>/ABS/PATH/skills/ragflow-runbook/scripts/ragflow_ping.py</string>
</array>
<key>StartInterval</key>
<integer>600</integer>
<key>EnvironmentVariables</key>
<dict>
<key>RAGFLOW_BASE_URL</key>
<string>http://127.0.0.1:9380</string>
<key>RAGFLOW_API_KEY</key>
<string>${RAGFLOW_API_KEY}</string>
</dict>
<key>StandardOutPath</key>
<string>/tmp/ragflow-ping.out</string>
<key>StandardErrorPath</key>
<string>/tmp/ragflow-ping.err</string>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
Smoke (daily at 06:05):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.openclaw.ragflow.smoke</string>
<key>ProgramArguments</key>
<array>
<string>/usr/bin/python3</string>
<string>/ABS/PATH/skills/ragflow-runbook/scripts/ragflow_smoke.py</string>
</array>
<key>StartCalendarInterval</key>
<dict>
<key>Hour</key>
<integer>6</integer>
<key>Minute</key>
<integer>5</integer>
</dict>
<key>EnvironmentVariables</key>
<dict>
<key>RAGFLOW_BASE_URL</key>
<string>http://127.0.0.1:9380</string>
<key>RAGFLOW_API_KEY</key>
<string>${RAGFLOW_API_KEY}</string>
</dict>
<key>StandardOutPath</key>
<string>/tmp/ragflow-smoke.out</string>
<key>StandardErrorPath</key>
<string>/tmp/ragflow-smoke.err</string>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
Notes:
- Replace /ABS/PATH/... with the real absolute path.
- Do not expose SVR_HTTP_PORT to the public internet.
- Keep RAGFLOW_API_KEY in an env/secret manager only.
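A minimal sketch of loading these agents, assuming the plists are saved under ~/Library/LaunchAgents with the labels used above:

cp ai.openclaw.ragflow.ping.plist ai.openclaw.ragflow.smoke.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/ai.openclaw.ragflow.ping.plist
launchctl load ~/Library/LaunchAgents/ai.openclaw.ragflow.smoke.plist
launchctl list | grep ai.openclaw.ragflow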