twitter-ai-kol-fetcher

AdvisoryAudited by Static analysis on Apr 30, 2026.

Overview

No suspicious patterns detected.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

The skill could make Twitter API requests under an unknown or shared account instead of only using a user-provided key.

Why it was flagged

When the config lacks twitter_api_key, the fetcher falls back to an embedded Twitter API credential; the registry declares no primary credential or required env vars, so the account/billing boundary is unclear.

Skill content
API_KEY = CONFIG.get("twitter_api_key", "new1_7590bc837c4d4104ada0ef3419ab7d6c")  # 默认值供本地使用
Recommendation

Remove the embedded API key, require a user-supplied Twitter API key via config or environment, and declare the credential requirement in metadata.

What this means

Users may believe intermediate data is not retained, while cached tweet/topic data can remain on the local machine and Feishu delivery is ambiguous.

Why it was flagged

The documentation promises deletion/no local storage, but scripts write /tmp/kol_tweets_*.json and *_filtered.json without any deletion path shown. It also mentions Feishu delivery without a configured destination or approval boundary.

Skill content
生成内参 → Markdown 文本 → 发送到飞书 → 删除临时文件 ... 重要:不保存本地文件!
Recommendation

Make the retention behavior accurate, delete temporary files after use if promised, and require an explicit Feishu destination and user approval before sending.

NoteHigh Confidence
ASI01: Agent Goal Hijack
What this means

A malicious or misleading tweet could affect the analysis or wording of the generated internal report.

Why it was flagged

Untrusted tweet text is inserted directly into LLM prompts for report generation. This is central to the skill, but tweets containing prompt-like instructions could influence the generated report.

Skill content
@{t.get('username', 'unknown')}: {t.get('text', '')[:250]}...
Recommendation

Clearly delimit tweet text as untrusted source material and instruct the model not to follow commands embedded in tweets.

What this means

Report prompts and source tweet content leave the local environment and are processed by OpenRouter/model providers.

Why it was flagged

The generated prompts, including fetched tweet/topic material, are sent to OpenRouter. This is expected for the report-generation purpose, but it is an external provider data flow.

Skill content
requests.post("https://openrouter.ai/api/v1/chat/completions", ... "messages": [{"role": "user", "content": prompt}]
Recommendation

Disclose this provider data flow clearly and avoid adding private notes, internal strategy, or confidential material to prompts unless the user accepts the provider’s data handling.