v0.1.0

抖音直播弹幕AI智能回复助手 (Douyin Live-Chat AI Smart Reply Assistant)

Review

ClawScan verdict for this skill. Analyzed May 1, 2026, 7:53 AM.

Analysis

The skill mostly matches its stated Douyin live-chat AI reply purpose, but it runs a large obfuscated JavaScript signing helper through eval and disables WebSocket TLS certificate checks, so it should be reviewed carefully before use.

Guidance

Install only if you are comfortable running bundled obfuscated JavaScript locally and sharing live-chat content with DeepSeek. Prefer a version that removes eval, documents the source of the signing script, keeps TLS verification enabled, and clearly declares its API key and dependency requirements.

Findings (8)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Unexpected Code Execution
Severity: High · Confidence: High · Status: Concern
scripts/get_sign_wrapper.js
const signCode = fs.readFileSync(path.join(__dirname, 'sign.js'), 'utf8');
eval(signCode);

The Node wrapper reads and dynamically evaluates the bundled signing script at runtime. Because the evaluated script is large and obfuscated, the user cannot easily verify what code will run with local Node.js privileges.

User impact: Running the assistant executes opaque JavaScript on the user's machine; if the bundled signing script is unsafe or later modified, it could perform actions beyond generating a Douyin signature.
Recommendation: Avoid eval-based loading. Export a reviewed get_sign function as a normal module, document the provenance of the signing script, and use a minimized, deobfuscated, or pinned trusted implementation.
Agentic Supply Chain Vulnerabilities
Severity: Medium · Confidence: High · Status: Concern
scripts/sign.js
var _0x3e0a6e = eval('var _0x13db80 = w_0x25f3;require(_0x13db80(628));')

The static scan shows dynamic require/eval inside the bundled signing helper, and the file content is heavily obfuscated. The artifacts do not provide a source, homepage, checksum, or provenance for this helper.

User impact: A user would be trusting opaque third-party-style code as part of the skill's runtime path, which increases the chance of hidden or unintended behavior going unnoticed.
Recommendation: Provide provenance for sign.js, pin a known version or checksum, replace obfuscated code where possible, and document why any dynamic loading is necessary.
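One way to pin the helper, as the recommendation suggests, is to record a checksum of the reviewed file at packaging time and refuse to load anything else. A minimal Python sketch; the digest shown is the sha256 of an empty file, standing in for a real pinned value:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest: record the sha256 of the reviewed sign.js
# when packaging, and fail closed on any mismatch at runtime.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_helper(path: str, expected: str = PINNED_SHA256) -> bytes:
    """Return the helper's bytes only if its sha256 matches the pinned value."""
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected:
        raise RuntimeError(f"sign.js checksum mismatch: {digest}")
    return data
```

The wrapper would then hand the verified bytes to whatever loads the script, so a silently modified sign.js stops the program instead of running.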
Tool Misuse and Exploitation
Severity: Medium · Confidence: High · Status: Concern
scripts/douyinlive.py
self.ws.run_forever(sslopt={'cert_reqs': ssl.CERT_NONE})

The WebSocket connection disables TLS certificate verification. This is not required by the stated purpose and weakens the integrity of data received from the live chat connection.

User impact: On an untrusted network, an attacker could potentially tamper with live-chat data that is then displayed, cached, and sent to the AI provider for reply generation.
Recommendation: Keep certificate verification enabled and fix certificate or proxy issues explicitly instead of disabling TLS verification.
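Keeping verification on with websocket-client mostly means not passing insecure overrides; a sketch of an explicit sslopt that requests full verification, assuming the library's documented cert_reqs and check_hostname keys:

```python
import ssl

def secure_sslopt() -> dict:
    """Build an sslopt dict for websocket-client that keeps certificate
    and hostname verification enabled (also the library default when no
    insecure overrides are supplied)."""
    return {
        "cert_reqs": ssl.CERT_REQUIRED,  # verify the server certificate
        "check_hostname": True,          # and that it matches the host
    }

# Usage (assuming `ws` is a websocket.WebSocketApp instance):
# ws.run_forever(sslopt=secure_sslopt())
```

If a corporate proxy breaks verification, the fix is to add the proxy's CA to the trust store rather than passing ssl.CERT_NONE.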
Agent Goal Hijack
Severity: Low · Confidence: Medium · Status: Note
scripts/deepseek_ai.py
"content": f'观众"{user_name}"发了一条弹幕:"{user_message}"\n\n请帮我回复这条弹幕:'

Untrusted live-chat text is inserted directly into the LLM prompt. A viewer could try to influence the assistant's generated reply through prompt-injection-style chat messages.

User impact: The model could generate inappropriate or off-persona replies if a viewer crafts manipulative chat text, although the artifacts do not show automatic posting back to Douyin.
Recommendation: Add prompt-injection-resistant instructions, filter adversarial messages, and have the streamer review outputs before using them publicly.
Rogue Agents
Severity: Low · Confidence: High · Status: Note
scripts/main_with_reconnect.py
max_reconnects = 100  # maximum number of reconnect attempts
...
while reconnect_count < max_reconnects:

The recommended entry point can keep reconnecting and running for a long live session. This behavior is disclosed and bounded, but it can continue making network/API calls until stopped or the retry limit is reached.

User impact: A long-running session may continue consuming DeepSeek API quota and processing live-chat data if left unattended.
Recommendation: Run it only when intentionally monitoring a live stream, set API spending limits, and stop it with Ctrl+C when finished.
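A bounded reconnect loop is gentler when paired with exponential backoff, so a sustained outage does not turn into a hundred rapid retries against the server and the API. A sketch of a jittered backoff schedule (all parameters are illustrative):

```python
import random

def backoff_delays(max_reconnects: int = 100, base: float = 1.0,
                   cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule with jitter for a bounded reconnect
    loop; the caller sleeps each delay before the corresponding retry
    and stops for good once the schedule is exhausted."""
    delays = []
    for attempt in range(max_reconnects):
        delay = min(cap, base * (2 ** attempt))   # 1s, 2s, 4s, ... capped
        delays.append(delay + random.uniform(0, delay * 0.1))  # +10% jitter
    return delays
```

With a 60-second cap, even the full 100-attempt budget stretches over well more than an hour instead of a burst of immediate reconnects.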
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
Severity: Low · Confidence: High · Status: Note
scripts/config.py
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY", "your_deepseek_api_key_here")

The skill requires a DeepSeek API key to generate replies. This is expected for the stated AI integration, and the code supports using an environment variable.

User impact: The key can incur API usage charges and should be protected; the registry metadata does not declare this credential requirement.
Recommendation: Use an environment variable or secrets manager, avoid committing a real key into config.py, monitor API spending, and declare the credential requirement in metadata.
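A fail-fast loader avoids the placeholder-default pattern shown above, where a missing key only surfaces later as a confusing API error. A sketch assuming only the DEEPSEEK_API_KEY variable:

```python
import os

def load_deepseek_key() -> str:
    """Read the DeepSeek key from the environment and fail fast instead
    of silently falling back to a placeholder string in config.py."""
    key = os.environ.get("DEEPSEEK_API_KEY", "").strip()
    if not key or key == "your_deepseek_api_key_here":
        raise RuntimeError(
            "DEEPSEEK_API_KEY is not set; export it before starting the "
            "assistant instead of editing config.py"
        )
    return key
```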
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Insecure Inter-Agent Communication
Severity: Low · Confidence: High · Status: Note
scripts/deepseek_ai.py
"content": f'观众"{user_name}"发了一条弹幕:"{user_message}"\n\n请帮我回复这条弹幕:'

Viewer names/messages are included in the prompt payload sent to DeepSeek. This is purpose-aligned and disclosed, but it is still an external provider data flow.

User impact: Live-chat content and the configured host persona may be sent outside the local machine to DeepSeek for processing.
Recommendation: Do not include sensitive private information in the host profile or chat input, disclose AI-provider processing where appropriate, and review the provider's retention/privacy settings.
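A light redaction pass before the prompt leaves the machine can mask obvious identifiers in chat text or the host profile; the patterns below are illustrative, not exhaustive:

```python
import re

# Hypothetical patterns; extend for whatever the host profile may contain.
_PATTERNS = [
    (re.compile(r"\b\d{11}\b"), "[phone]"),           # CN-style mobile numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),
]

def redact(text: str) -> str:
    """Mask obvious personal identifiers before text is sent to the
    AI provider."""
    for pattern, repl in _PATTERNS:
        text = pattern.sub(repl, text)
    return text
```

Pattern-based redaction is best-effort; it reduces accidental leakage but does not replace keeping sensitive details out of the inputs in the first place.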
Memory and Context Poisoning
Severity: Low · Confidence: High · Status: Note
scripts/reply_cache.py
self.cache[key] = {
    'user_message': user_message,
    'ai_reply': reply_data.get('reply'),
    'timestamp': reply_data.get('timestamp'),

The cache stores viewer messages, AI replies, and timestamps locally for reuse.

User impact: Past live-chat content and replies may remain on disk and may be reused for later matching messages.
Recommendation: Protect or periodically delete cache files, avoid caching sensitive conversations, and make retention behavior clear to users.
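Retention can be bounded with a simple time-to-live; a sketch of an expiring in-memory cache (the class name and TTL are illustrative, not the skill's reply_cache implementation):

```python
import time

class ExpiringReplyCache:
    """Reply cache that drops entries after ttl seconds, so old chat
    content does not accumulate indefinitely."""

    def __init__(self, ttl: float = 3600.0):
        self.ttl = ttl
        self._store: dict[str, tuple[float, str]] = {}

    def put(self, key: str, reply: str) -> None:
        self._store[key] = (time.monotonic(), reply)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, reply = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expire stale chat content
            return None
        return reply
```

The same TTL idea applies to the on-disk cache: stamp entries when written and purge anything older than the retention window on load.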