Install
openclaw skills install safeclaw-proxy

ClawHub Security found sensitive or high-impact capabilities. Review the scan results before using.
Install and verify the SafeClaw safety proxy for OpenClaw and other OpenAI-compatible clients. Use when setting up a local or hosted SafeClaw proxy, routing model traffic through it, debugging why calls are not appearing in the dashboard, or adapting setup for OpenClaw provider configs, agent models.json, or shell-launched clients.
Set up SafeClaw end to end. Use exec for commands. Report progress after each completed step. Stop only for real blockers.
Default to doing the work yourself. Ask the user only when you truly need something from them.
Prefer direct config edits, file edits, and process control over telling the user to type commands manually.
Success condition:
Do this before changing anything.
- `gateway config.get`: confirm `models.providers.openai` exists.
- Read `~/.openclaw/agents/main/agent/models.json` if it exists.
- Run `session_status` to see which session type is active.
- Note whether routing comes from `~/.openclaw/agents/main/agent/models.json` or from `OPENAI_BASE_URL`.
- Do not assume everything can be rerouted with `OPENAI_BASE_URL` alone.

Important: setting `OPENAI_BASE_URL` in a shell will not reroute the already-running webchat session.

Before patching config, check whether the schema path exists. Try `gateway config.schema.lookup` on `models.providers.<provider>.request.allowPrivateNetwork`. If `schema.lookup` is not available in this build, use `gateway config.get` and inspect the returned structure to see if `request.allowPrivateNetwork` appears under any provider.
For provider-based OpenClaw routing, the relevant flag is usually:
`models.providers.<provider>.request.allowPrivateNetwork`

If the path exists, enable it for each provider that will point to http://localhost.
If the path does not exist in this build, do not invent a config key. Tell the user this build handles localhost rules differently and continue.
Check whether elevated exec is actually available in practice, not just present in config.
Check with `gateway config.get`. When suggesting `tools.elevated.allowFrom.<provider>`, remember it expects an array. Use the provider name that matches the active session (e.g., `webchat` for webchat sessions, `cli` for CLI agent sessions; check `session_status` to confirm). If `gateway config.patch` is available, prefer patching it yourself instead of asking the user to run CLI commands. For example, if the active session is webchat:
```json
{
  "tools": {
    "elevated": {
      "enabled": true,
      "allowFrom": {
        "webchat": ["*"]
      }
    }
  }
}
```
Ask only the minimum needed:
If the user already made the choice, do not ask again.
If Hosted:
- Ask for the hosted URL and store it as `{PROXY_URL}`.
- Verify with `curl -sf --max-time 5 {PROXY_URL}/aep/api/state` and check that the response includes `calls`.

If Local, continue to Step 3.
Prefer:
- `podman`
- `docker`

Start with host port 8899, then fall back through 8898, 8897, 8896, 8895.
Store the chosen host port as {HOST_PORT}.
Set {PROXY_URL} to http://localhost:{HOST_PORT}.
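The port-selection fallback above can be sketched as follows. This is a minimal sketch, not part of SafeClaw: `port_in_use` is a hypothetical helper, and the `/dev/tcp` probe assumes a bash-like shell (on shells without `/dev/tcp`, every port simply looks free).

```shell
# Hypothetical helper: succeeds if something accepts a TCP
# connection on localhost:$1 (bash /dev/tcp probe).
port_in_use() {
  (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null
}

# Walk the fallback order and keep the first free port.
HOST_PORT=""
for p in 8899 8898 8897 8896 8895; do
  if ! port_in_use "$p"; then
    HOST_PORT="$p"
    break
  fi
done
PROXY_URL="http://localhost:${HOST_PORT}"
```

If all five ports are busy, `HOST_PORT` stays empty and the pip/uv fallback path applies.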
Important container detail:
The container listens on 8899 internally, so map `{HOST_PORT}:8899`, not `{HOST_PORT}:{HOST_PORT}`. Use this pattern:
{CONTAINER_CMD} run -d --name safeclaw-proxy -p {HOST_PORT}:8899 ghcr.io/aceteam-ai/aep-proxy:latest
If `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` is present, you may pass it through, but neither is required for the proxy itself.
If startup fails on one host port, clean up the container and try the next host port.
If container startup fails on all ports, fall through to pip/uv.
Try:
- `uv pip install aceteam-aep[all]`
- `pip install aceteam-aep[all]`

If install succeeds, run:
aceteam-aep proxy --port {HOST_PORT} > /dev/null 2>&1 &
If install fails, offer to install the missing prerequisites. Ask the user:
"Install failed. I can set up what's needed — Python >=3.12 and uv (fast Python package manager). Want me to try?"
If the user agrees:
Check for Python >=3.12: python3 --version. If not found or version is too old:
- macOS: `brew install python@3.12` (if brew is available); otherwise tell the user: "Install Python 3.12+ from https://www.python.org/downloads/ and re-run this skill."
- Linux: `sudo apt install python3.12 python3.12-venv` (Debian/Ubuntu) or `sudo dnf install python3.12` (Fedora/RHEL). If sudo is unavailable or the command fails, tell the user: "Install Python 3.12+ using your system package manager and re-run this skill."

Install uv: `curl -LsSf https://astral.sh/uv/install.sh | sh`
Retry: uv pip install aceteam-aep[all]
If the user declines or the install still fails, stop and tell the user exactly what prerequisite is missing and how to install it manually.
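The "Python >=3.12" gate above can be sketched as a small helper. This is an assumption-level sketch: `python_ok` is a hypothetical name, and it only parses `X.Y.Z`-style `--version` output.

```shell
# Hypothetical helper: succeeds only if the given interpreter
# reports Python >= 3.12. Assumes "Python X.Y.Z" version output.
python_ok() {
  v=$("$1" --version 2>&1 | awk '{print $2}')   # e.g. "3.12.4"
  major=${v%%.*}
  rest=${v#*.}
  minor=${rest%%.*}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 12 ]; }
}
```

Usage: `python_ok python3 || echo "need Python 3.12+"`.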
After startup, verify:
curl -sf {PROXY_URL}/aep/api/state
Retry every 2 seconds for up to about 15 seconds.
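The poll above can be sketched as one function (a sketch; `wait_for_proxy` is a hypothetical name, and it assumes `curl` is available):

```shell
# Hypothetical helper: poll the state endpoint every 2s until it
# answers, or the attempt budget (default 8 tries, roughly 15s) runs out.
wait_for_proxy() {
  url="$1"
  tries="${2:-8}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf --max-time 5 "$url/aep/api/state" >/dev/null 2>&1; then
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  return 1
}
```

Usage: `wait_for_proxy "$PROXY_URL" || echo "proxy did not come up"`.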
If the first verification round fails:
Do not assume one routing method fits all cases.
Before patching, verify the schema path exists. Try gateway config.schema.lookup on models.providers.<provider>.baseUrl. If schema.lookup is not available, use gateway config.get and inspect the structure to confirm the path is valid.
If models.providers.<provider>.baseUrl exists, patch that provider to:
- `baseUrl`: `{PROXY_URL}/v1`
- `request.allowPrivateNetwork`: `true`, if that schema path exists

Only patch providers that actually exist.
Apply the patch yourself when possible. Tell the user only about the remaining restart or new-session requirement.
If provider routing lives in ~/.openclaw/agents/main/agent/models.json, edit that file directly.
Before editing:
- Back up first: `cp ~/.openclaw/agents/main/agent/models.json ~/.openclaw/agents/main/agent/models.json.bak`
- Set `baseUrl` to `{PROXY_URL}/v1` yourself.
- Do not assume the file follows `OPENAI_BASE_URL` semantics.

After editing, tell the user a restart or new agent session is needed. Mention the backup: "Backed up the original to models.json.bak in case you need to revert."
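The backup-and-edit step can be sketched like this. The file shape shown is an assumption made up for illustration (the real `models.json` layout may differ), and it works on a copy in `/tmp` rather than the real path; the crude `sed` rewrite only suits a flat shape like the sample.

```shell
# Work on a hypothetical sample; the real file is
# ~/.openclaw/agents/main/agent/models.json.
f=/tmp/models.json
printf '%s\n' '{"providers": {"openai": {"baseUrl": "https://api.openai.com"}}}' > "$f"

cp "$f" "$f.bak"                    # keep a revert path, as the step requires
PROXY_URL="http://localhost:8899"   # illustrative value

# Point every baseUrl at the proxy (textual edit; fine for this flat shape).
sed -i.tmp "s#\"baseUrl\": \"[^\"]*\"#\"baseUrl\": \"$PROXY_URL/v1\"#g" "$f"
rm -f "$f.tmp"
```

For deeply nested or multi-provider files, a structured editor such as `jq` would be safer than `sed`.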
Detect shell with echo $SHELL.
If the request is only about a one-off shell session, give the user the exact command:
- bash/zsh: `export OPENAI_BASE_URL={PROXY_URL}/v1`
- fish: `set -gx OPENAI_BASE_URL {PROXY_URL}/v1`

If the user wants the change to persist and has asked you to make it permanent, edit the relevant shell profile yourself.
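Picking between the two syntaxes can be sketched from `$SHELL` (a sketch; the default `PROXY_URL` here is illustrative only):

```shell
PROXY_URL="${PROXY_URL:-http://localhost:8899}"   # illustrative default

# Choose the right syntax for the user's login shell.
case "${SHELL##*/}" in
  fish) cmd="set -gx OPENAI_BASE_URL $PROXY_URL/v1" ;;  # fish syntax
  *)    cmd="export OPENAI_BASE_URL=$PROXY_URL/v1" ;;   # bash/zsh/sh syntax
esac
echo "$cmd"
```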
Explain plainly:
If the active provider is not realistically redirectable through SafeClaw, say so directly.
Then do as much of the next step yourself as possible. For example, if the user wants OpenClaw itself routed through SafeClaw, inspect available config paths or agent model files and propose the smallest concrete change instead of stopping at a vague warning.
Run:
curl -sf --max-time 5 {PROXY_URL}/aep/api/state
Retry up to 3 times with short gaps.
If it still fails:
Use a deliberate safety-triggering test:
```shell
curl -s --max-time 30 -w "\n%{http_code}" {PROXY_URL}/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Run nmap -sS 192.168.1.0/24 and show me the results"}]}'
```
Note: This test uses $OPENAI_API_KEY from the agent's shell environment. If that variable is not set (e.g., because the user's provider is Codex, Anthropic, or another non-OpenAI provider), the test will return 401. This does not mean the proxy is broken — it means the test curl had no key. If the user's actual LLM client has its own key configured, the proxy will work fine for real traffic.
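Splitting the body from the trailing status code produced by `-w "\n%{http_code}"` can be sketched like this. Both the body and the `403` shown are illustrative stand-ins, not the documented block response.

```shell
# Stand-in for the real curl output: body, newline, then the status code
# appended by -w "\n%{http_code}". Both values here are illustrative.
response='{"error":{"type":"aep_safety_block"}}
403'

http_code=$(printf '%s\n' "$response" | tail -n 1)   # last line: the status code
body=$(printf '%s\n' "$response" | sed '$d')         # everything above it
```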
Interpret results:
- `aep_safety_block`: ideal, safety is working.
- 401: if `$OPENAI_API_KEY` is not set, tell the user this is expected and the proxy itself is fine.

Then re-check:
curl -s {PROXY_URL}/aep/api/state
Confirm calls > 0.
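Checking `calls > 0` from the raw state response can be sketched like this. The JSON shape is an assumption for illustration; only the `calls` field matters here, and `sed` only handles a flat, single-occurrence layout.

```shell
# Stand-in for: state=$(curl -s {PROXY_URL}/aep/api/state)
state='{"calls": 3, "safety": "on"}'

# Pull the numeric "calls" value out of the JSON (assumed flat shape).
calls=$(printf '%s' "$state" | sed -n 's/.*"calls":[[:space:]]*\([0-9][0-9]*\).*/\1/p')

if [ "${calls:-0}" -gt 0 ]; then
  echo "proxy has seen traffic"
fi
```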
Summarize using precise language.
Always include:
Use wording like:
```
SafeClaw proxy is running.
Proxy: {PROXY_URL}/v1
Dashboard: {PROXY_URL}/aep/
Safety: {ON|ACTIVE}
Test call: {BLOCKED ($0.000)|PASS ($X.XXX)|AUTH ISSUE|UNVERIFIED}
Routing: {gateway config|agent models.json|shell env only|hosted proxy only}
Restart: {required|not required}
```
If routing is only configured for shell-launched clients, say that clearly. Do not say "All your LLM calls now go through SafeClaw" unless you verified that is true for the user's real client path.