twitter-ai-kol-fetcher
AdvisoryAudited by Static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill could make Twitter API requests under an unknown or shared account instead of only using a user-provided key.
When the config lacks twitter_api_key, the fetcher falls back to an embedded Twitter API credential; the registry declares no primary credential or required env vars, so the account/billing boundary is unclear.
API_KEY = CONFIG.get("twitter_api_key", "new1_7590bc837c4d4104ada0ef3419ab7d6c") # 默认值供本地使用Remove the embedded API key, require a user-supplied Twitter API key via config or environment, and declare the credential requirement in metadata.
Users may believe intermediate data is not retained, while cached tweet/topic data can remain on the local machine and Feishu delivery is ambiguous.
The documentation promises deletion/no local storage, but scripts write /tmp/kol_tweets_*.json and *_filtered.json without any deletion path shown. It also mentions Feishu delivery without a configured destination or approval boundary.
生成内参 → Markdown 文本 → 发送到飞书 → 删除临时文件 ... 重要:不保存本地文件!
Make the retention behavior accurate, delete temporary files after use if promised, and require an explicit Feishu destination and user approval before sending.
A malicious or misleading tweet could affect the analysis or wording of the generated internal report.
Untrusted tweet text is inserted directly into LLM prompts for report generation. This is central to the skill, but tweets containing prompt-like instructions could influence the generated report.
@{t.get('username', 'unknown')}: {t.get('text', '')[:250]}...Clearly delimit tweet text as untrusted source material and instruct the model not to follow commands embedded in tweets.
Report prompts and source tweet content leave the local environment and are processed by OpenRouter/model providers.
The generated prompts, including fetched tweet/topic material, are sent to OpenRouter. This is expected for the report-generation purpose, but it is an external provider data flow.
requests.post("https://openrouter.ai/api/v1/chat/completions", ... "messages": [{"role": "user", "content": prompt}]Disclose this provider data flow clearly and avoid adding private notes, internal strategy, or confidential material to prompts unless the user accepts the provider’s data handling.
