AI守门人 (AI Gatekeeper)

Audited by ClawScan on May 10, 2026.

Overview

The skill is mostly a disclosed local LLM proxy, but its start/stop script can forcibly kill unrelated processes that happen to be using the configured port.

Use this only if you are comfortable running a local background LLM proxy. Before starting or restarting, confirm port 18888 is not used by another service, use scoped provider API keys, review the filtering rules and logs, and stop the proxy when finished.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Starting, stopping, or restarting the skill could terminate an unrelated local service if it is using the same port.

Why it was flagged

The default control flow can forcibly terminate every process listening on the configured port, without verifying that it is the llm-proxy process.

Skill content
kill_by_port "$PROXY_PORT" ... pids=$(lsof -ti ":$port" ...); ... kill -9 "$pid"
Recommendation

Check the port and PID before running start/stop/restart. The maintainer should verify process ownership, ask before killing, and prefer graceful termination.
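As a sketch of that recommendation, a replacement for kill_by_port could check each PID's command name before signalling it, and escalate from SIGTERM to SIGKILL only if the process survives. The function below and the expected process name (llm-proxy) are illustrative assumptions, not the skill's actual code.

```shell
# Illustrative sketch only: verify ownership before killing, and prefer
# graceful termination. "llm-proxy" is an assumed process name.
safe_kill_by_port() {
  port="$1"
  expected="$2"
  for pid in $(lsof -ti ":$port" 2>/dev/null); do
    cmd=$(ps -p "$pid" -o comm= 2>/dev/null)
    case "$cmd" in
      *"$expected"*)
        kill -TERM "$pid" 2>/dev/null   # graceful shutdown first
        sleep 1
        # escalate only if the process is still alive
        kill -0 "$pid" 2>/dev/null && kill -9 "$pid" 2>/dev/null
        ;;
      *)
        echo "refusing to kill pid $pid ($cmd): not $expected" >&2
        ;;
    esac
  done
}

safe_kill_by_port 18888 llm-proxy
```

Unlike the flagged `kill -9` loop, this version leaves unrelated services on the port untouched and gives the proxy a chance to exit cleanly.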

What this means

Model output may be changed or interrupted by the proxy’s safety rules.

Why it was flagged

The proxy can inject warning chunks into streaming responses and block responses that match critical content rules. This is aligned with the stated safety-filtering purpose.

Skill content
warning_chunk = inject_warning_chunk(quick_alerts) ... if critical: ... return json.dumps(error_response, ensure_ascii=False).encode('utf-8'), True
Recommendation

Review and test the filtering rules before relying on the proxy for production or sensitive workflows.
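One way to act on this is a smoke test: send a benign request through the proxy and check whether the filter left it untouched. The endpoint path, payload shape, and the "warning" marker below are assumptions made for illustration; adapt them to the proxy's actual API before relying on the result.

```shell
# Hedged smoke test; the endpoint path and "warning" marker are assumptions.
PROXY="http://127.0.0.1:18888"
body=$(curl -s --max-time 3 "$PROXY/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"hello"}]}' \
  || echo "proxy not reachable")
case "$body" in
  "proxy not reachable") echo "start the proxy before running this test" ;;
  *warning*)             echo "filter injected a warning into a benign request" ;;
  *)                     echo "benign request passed through unmodified" ;;
esac
```

Repeating this with prompts near your workflow's content boundaries shows where the critical-content rules actually trigger.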

What this means

Provider API keys pass through the local proxy when you use it.

Why it was flagged

The proxy is intended to receive and forward provider authorization headers. This is expected for an LLM API proxy, and the artifacts do not show unrelated credential collection.

Skill content
-H "Authorization: Bearer $OPENAI_API_KEY"
Recommendation

Use provider keys with appropriate scope and billing controls, and only send requests from trusted local clients.

What this means

The skill may fail or behave differently on systems that lack its undeclared command-line dependencies.

Why it was flagged

The scripts depend on local tools such as curl, python3, and lsof, while the registry metadata lists no required binaries. This is under-declared but visible and purpose-aligned.

Skill content
curl -s --max-time 3 "$PROXY_URL" ... python3 -u "$PROXY_SCRIPT" ... pids=$(lsof -ti ":$port"
Recommendation

Ensure python3, curl, and lsof are available; the maintainer should declare these runtime requirements in the registry metadata.
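A minimal preflight for those dependencies might look like the following. The tool list comes from the flagged snippets; the script itself is illustrative and not part of the skill.

```shell
# Verify the runtime tools the skill's scripts invoke before starting the proxy.
missing=""
for tool in python3 curl lsof; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing required tools:$missing"
else
  echo "all runtime dependencies present"
fi
```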

What this means

Prompts, responses, and metadata sent through the proxy may leave the machine for the selected provider.

Why it was flagged

The local gateway routes requests to multiple external provider endpoints. This is the core disclosed purpose, and it is bound to localhost by default.

Skill content
"listen_host": "127.0.0.1" ... "/openai": { "url": "https://api.openai.com/v1" }
Recommendation

Verify the selected provider route before sending sensitive prompts, and understand each provider’s retention and billing policies.
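The binding and routes can be checked directly in the config before sending anything sensitive. The snippet below reconstructs a minimal config from the flagged values purely for illustration; the real file's path and full contents are not shown in the review artifacts.

```shell
# Illustrative config built from the flagged values; the skill's real
# config path and full contents are assumptions here.
cat > /tmp/proxy_config.json <<'EOF'
{
  "listen_host": "127.0.0.1",
  "routes": { "/openai": { "url": "https://api.openai.com/v1" } }
}
EOF

# Confirm the proxy binds only to localhost ...
if grep -q '"listen_host": "127.0.0.1"' /tmp/proxy_config.json; then
  echo "proxy binds to localhost only"
else
  echo "WARNING: proxy may be reachable from other hosts"
fi
# ... and list every external endpoint requests could be forwarded to.
grep -o 'https://[^"]*' /tmp/proxy_config.json
```

Any endpoint printed by the last command is a provider that may receive and retain your prompts.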

What this means

The proxy can continue handling local requests and writing logs after the start command returns.

Why it was flagged

The proxy starts as a background process and records a PID file. This is disclosed by the start/stop workflow and does not show hidden autostart behavior.

Skill content
python3 -u "$PROXY_SCRIPT" >> "$LOG_FILE" 2>&1 & ... echo $! > "$PID_FILE"
Recommendation

Stop the proxy when finished, and monitor the log directory if you route sensitive requests through it.
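A graceful stop matching the disclosed PID-file workflow could look like the sketch below; the PID file path is an assumption, since the review only shows that one is written.

```shell
# Illustrative graceful stop; the PID_FILE path is an assumption.
PID_FILE="${PID_FILE:-/tmp/llm-proxy.pid}"
if [ -f "$PID_FILE" ]; then
  pid=$(cat "$PID_FILE")
  if kill -0 "$pid" 2>/dev/null; then
    kill -TERM "$pid"   # let the proxy flush logs and exit cleanly
  else
    echo "stale PID file (pid $pid is not running)"
  fi
  rm -f "$PID_FILE"
else
  echo "no PID file found; proxy does not appear to be running"
fi
```

Signalling only the PID recorded at start avoids the port-wide kill behavior flagged in the first finding.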