高可靠性文本审核器
SuspiciousAudited by ClawScan on May 10, 2026.
Overview
The skill mostly matches a text-auditing purpose, but it needs review because it can expose Baidu credentials in errors, sends content to Baidu, overpromises “zero risk,” and references a missing runner script.
Review before installing. Use a dedicated, low-privilege Baidu API key if possible, avoid auditing highly sensitive text through the cloud API, verify why run.ps1 is missing, and treat the tool's output as a best-effort compliance review rather than a zero-risk guarantee.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If a Baidu request fails, API secrets or access tokens could be revealed in the agent transcript or logs and then reused against the user's Baidu account or quota.
The script reads Baidu API credentials from the user's OpenClaw config, places secrets/tokens in request URLs or query parameters, and prints raw exception strings. HTTP errors from requests can include the full URL, which may expose client_secret or access_token in agent output or logs.
config_path = Path.home() / '.openclaw' / 'config.json' ... "client_secret": self.secret_key ... url = f"{TEXT_CENSOR_URL}?access_token={self.access_token}" ... error_output = {"error": str(e)}Redact secrets and tokens before printing errors, avoid putting access tokens in logged URLs where possible, and declare the Baidu credential/config requirement in metadata.
The skill may fail, or an agent may end up executing a runner file that was not included in the reviewed package.
The declared source is a placeholder, and the skill says the agent should run a core run.ps1 script, but the supplied file manifest does not include run.ps1. That leaves the main executable path missing or unreviewed.
source: https://github.com/your-repo/robust-text-auditor ... 该技能的核心是 `run.ps1` 脚本
Include the referenced run.ps1 in the package, align registry and SKILL metadata, provide a real source/provenance link, and pin or declare runtime dependencies.
Users may overtrust the result and publish content believing it is guaranteed safe when the tool can only provide a risk assessment.
The prompt tells the model to guarantee that content will never trigger warnings, throttling, or penalties. That is an unsupported compliance guarantee, especially for platform moderation rules.
确保我提供的内容计划在发布后绝对不会触发任何平台的警告、限流或处罚...【零风险】方案
Replace absolute safety language with clear limitations, such as “best-effort compliance review,” and disclose that platform enforcement outcomes cannot be guaranteed.
Text or images submitted for audit are shared with Baidu, which may matter if the content is private, confidential, or regulated.
The script sends user-provided text, and also supports sending selected local images, to Baidu content-audit endpoints. This is purpose-aligned, but it is an external provider data flow.
data = {"text": text} ... response = requests.post(url, data=data, headers=headers, timeout=10) ... data = {"image": img_base64}Use the skill only for content you are comfortable sending to Baidu, disclose this data flow clearly, and consider a local-only mode for sensitive material.
Malicious or adversarial text being audited could try to steer the final LLM response away from the intended compliance review.
The audited text is inserted directly into the model prompt. This is expected for a text-auditing tool, but text being audited could contain prompt-like instructions unless the model is told to treat it only as data.
final_prompt = prompt_template.replace('{TEXT_TO_CHECK}', text_to_check)Wrap audited text in robust delimiters, escape delimiter characters, and add explicit instructions that any directives inside the audited text are data, not commands.
