API Logger
Warn · Audited by ClawScan on May 10, 2026.
Overview
This skill largely does what it advertises, but it persistently records complete AI prompts and responses and can route or export that sensitive data through insufficiently bounded components.
Review carefully before installing. Only use this on machines and workloads where full LLM conversation logging is acceptable. Set the upstream to your trusted HTTPS model endpoint before routing OpenClaw traffic through the proxy, protect or regularly delete the log directory, and avoid the Feishu export until the external helper and sharing permissions are verified.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private prompts, business data, credentials accidentally pasted into prompts, and model responses may be saved in local JSONL logs and later viewed or shared.
The skill explicitly stores full prompts, system prompts, message history, and model outputs. That is purpose-aligned for an API logger, but it is high-impact sensitive context with no clear retention, exclusion, or protection controls in the artifacts.
Transparent proxy intercepts all requests with zero-intrusion logging... records the full prompt/generation/token usage... `request_body` | full request (model, system, messages)
Use only for debugging sessions where full conversation capture is acceptable; secure the log directory, define deletion/retention rules, and avoid logging sensitive workloads.
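The retention recommendation above can be sketched as a small cleanup routine. This is a minimal sketch, not part of the skill: the log directory path and the 7-day retention window are assumptions, and it presumes the logs are plain `.jsonl` files in one directory.

```python
import time
from pathlib import Path

def prune_logs(log_dir: Path, retention_days: int = 7) -> int:
    """Tighten directory permissions and delete JSONL logs older than the
    retention window. Both the schedule and the layout are assumptions."""
    log_dir.mkdir(parents=True, exist_ok=True)
    log_dir.chmod(0o700)  # owner-only access to the log directory
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for log_file in log_dir.glob("*.jsonl"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()  # past retention: delete
            removed += 1
    return removed
```

Running something like this from cron or a periodic launchd job would put an upper bound on how long captured conversations sit on disk.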
If the upstream is not the user's intended trusted model endpoint, LLM traffic and authorization headers could be exposed to the wrong service.
The background proxy is installed with a hardcoded non-HTTPS upstream endpoint. Because the skill routes LLM API calls through this proxy, API credentials and request bodies could be sent to that endpoint unless the user changes it first.
UPSTREAM="http://model.mify.ai.srv/anthropic" ... <string>--upstream</string> ... <string>${UPSTREAM}</string>
Before changing OpenClaw's baseUrl, set the upstream to a trusted HTTPS endpoint and verify that no traffic is routed to the default address.
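A guard for the recommendation above can be sketched as a pre-flight check that refuses any upstream that is not an explicit HTTPS URL. The function name and its use are illustrative, not part of the skill; only the rejected default comes from the evidence.

```python
from urllib.parse import urlparse

def validate_upstream(upstream: str) -> str:
    """Reject any proxy upstream that is not an explicit HTTPS URL."""
    parsed = urlparse(upstream)
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError(f"refusing non-HTTPS upstream: {upstream!r}")
    return upstream

# The installer's hardcoded default would fail this check:
#   validate_upstream("http://model.mify.ai.srv/anthropic")  -> ValueError
```

Wiring a check like this in before the proxy starts would make the unsafe default fail loudly instead of silently forwarding credentials.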
The proxy may keep running and logging future LLM calls after the initial debugging task is finished.
The installer creates a LaunchAgent that starts at login and is kept alive. This is disclosed and aligned with continuous API logging, but users should recognize it as persistent background behavior.
<key>RunAtLoad</key> <true/> ... <key>KeepAlive</key> <true/> ... launchctl load "$PLIST_PATH"
Install only if continuous logging is desired, and document how to stop it, such as unloading and removing the LaunchAgent.
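The stop procedure recommended above can be sketched as follows. The plist filename is a placeholder assumption; check `~/Library/LaunchAgents` for the name the installer actually writes.

```python
import subprocess
from pathlib import Path

# Placeholder label: replace with the plist the installer actually created.
PLIST_PATH = Path.home() / "Library" / "LaunchAgents" / "com.example.api-logger.plist"

def stop_and_remove(plist: Path) -> bool:
    """Unload the LaunchAgent so the proxy stops now, then delete the plist
    so RunAtLoad/KeepAlive cannot restart it at next login."""
    if not plist.exists():
        return False
    subprocess.run(["launchctl", "unload", str(plist)], check=False)
    plist.unlink()
    return True
```

Because the agent is installed with `KeepAlive`, killing the proxy process alone is not enough; it must be unloaded via `launchctl` and the plist removed.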
Using the Feishu option could run unreviewed local code with access to sensitive prompt/response logs and any Feishu credentials configured for that helper.
The optional Feishu export executes an external helper script at a hardcoded absolute path outside the supplied skill files, passing it generated log content. That helper is not included in the manifest for review.
FEISHU_WRITE = Path("/Users/xm_plus/.openclaw/workspace/company/feishu_write.py") ... subprocess.run(["python3", str(FEISHU_WRITE), title, tmp.name],
Do not use --feishu until the helper script is reviewed, bundled or pinned, and its credential/data handling is clearly documented.
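One way to "pin" the external helper, as recommended above, is to record its SHA-256 hash at review time and refuse to execute it if the bytes have changed. This wrapper is a sketch, not part of the skill; the expected hash is a placeholder you would fill in after auditing the script.

```python
import hashlib
import subprocess
from pathlib import Path

def run_pinned_helper(helper: Path, expected_sha256: str, args: list[str]) -> None:
    """Execute the external helper only if its bytes match the hash
    recorded when the script was last reviewed."""
    digest = hashlib.sha256(helper.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"helper changed since review: {helper} ({digest})")
    subprocess.run(["python3", str(helper), *args], check=True)
```

This does not make an unreviewed script safe; it only guarantees that what runs is byte-identical to what was reviewed.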
Prompt and response logs may become visible to people or systems with access to the generated Feishu document.
The skill documents an explicit Feishu export mode. This is user-invoked rather than automatic, but it can move LLM logs into a third-party collaboration system.
python3 log_viewer.py --stats --feishu ... generates a Feishu document (detail rows auto-truncated beyond 300 entries)
Use Feishu export only for non-sensitive logs or after confirming the document destination, permissions, and sharing scope.
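If logs must be exported at all, a redaction pass before export limits what the shared document can leak. This is a sketch under assumptions: the `SAFE_FIELDS` names are hypothetical and must be matched to the logger's actual JSONL schema, and only `request_body` appears in the evidence.

```python
import json

# Assumed metadata field names; adjust to the logger's actual JSONL schema.
SAFE_FIELDS = {"timestamp", "model", "usage"}

def redact_record(record: dict) -> dict:
    """Drop raw prompt/response bodies (e.g. request_body), keeping only
    aggregate metadata suitable for a shared stats document."""
    return {k: v for k, v in record.items() if k in SAFE_FIELDS}

def redact_jsonl(lines):
    """Yield redacted JSONL lines from raw JSONL lines."""
    for line in lines:
        yield json.dumps(redact_record(json.loads(line)))
```

An allowlist is the safer direction here: unknown fields are dropped by default, so a schema change cannot silently reintroduce prompt content into the export.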
