抖音直播弹幕AI智能回复助手
Analysis
The skill mostly matches its stated Douyin live-chat AI reply purpose, but it runs a large obfuscated JavaScript signing helper through eval and disables WebSocket TLS certificate checks, so it should be reviewed carefully before use.
Findings (8)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
const signCode = fs.readFileSync(path.join(__dirname, 'sign.js'), 'utf8'); eval(signCode);
The Node wrapper reads and dynamically evaluates the bundled signing script at runtime. Because the evaluated script is large and obfuscated, the user cannot easily verify what code will run with local Node.js privileges.
var _0x3e0a6e = eval('var _0x13db80 = w_0x25f3;require(_0x13db80(628));')The static scan shows dynamic require/eval inside the bundled signing helper, and the file content is heavily obfuscated. The artifacts do not provide a source, homepage, checksum, or provenance for this helper.
self.ws.run_forever(sslopt={'cert_reqs': ssl.CERT_NONE})The WebSocket connection disables TLS certificate verification. This is not required by the stated purpose and weakens the integrity of data received from the live chat connection.
"content": f'观众"{user_name}"发了一条弹幕:"{user_message}"\n\n请帮我回复这条弹幕:'Untrusted live-chat text is inserted directly into the LLM prompt. A viewer could try to influence the assistant's generated reply through prompt-injection-style chat messages.
max_reconnects = 100 # 最大重连次数 ... while reconnect_count < max_reconnects:
The recommended entry point can keep reconnecting and running for a long live session. This behavior is disclosed and bounded, but it can continue making network/API calls until stopped or the retry limit is reached.
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY", "your_deepseek_api_key_here")The skill requires a DeepSeek API key to generate replies. This is expected for the stated AI integration, and the code supports using an environment variable.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
"content": f'观众"{user_name}"发了一条弹幕:"{user_message}"\n\n请帮我回复这条弹幕:'Viewer names/messages are included in the prompt payload sent to DeepSeek. This is purpose-aligned and disclosed, but it is still an external provider data flow.
self.cache[key] = {
'user_message': user_message,
'ai_reply': reply_data.get('reply'),
'timestamp': reply_data.get('timestamp'),The cache stores viewer messages, AI replies, and timestamps locally for reuse.
