ai-topic-scout-feishu
AdvisoryAudited by Static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing and authorizing the skill may let the agent create tables and add or update records in the user’s Feishu workspace.
The skill requires delegated Feishu account access and writes records into a Feishu Bitable. This is central to the stated purpose, but it is still third-party workspace write authority.
自动写入飞书多维表格 ... **飞书授权**: 首次使用需完成用户 OAuth 授权
Authorize only the minimum Feishu scopes needed, use a dedicated Bitable or folder where possible, and review generated records periodically.
The skill may rely on local command-line tools such as yt-dlp or curl when fetching content.
The included script runs an external local binary to fetch YouTube data. It uses argument lists rather than shell interpolation and is aligned with the scraping purpose.
cmd = ["yt-dlp", ... f"https://www.youtube.com/{channel_id}/videos"] ... subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=60)Install command-line dependencies from trusted sources, keep them updated, and avoid using untrusted channel/account inputs outside the documented workflow.
A malicious or joking social-media post could distort the generated topic analysis or Feishu entries.
Fetched YouTube/Twitter content is inserted into an LLM prompt for analysis. Social posts and descriptions are untrusted text and could contain prompt-injection-like instructions.
analysis_prompt = """分析以下内容... 内容列表:\n{raw_contents}\n""" ... analysis_result = llm_analyze(analysis_prompt)Treat fetched content strictly as data, add prompt instructions to ignore commands inside posts, and review high-impact generated recommendations before acting on them.
If enabled, the skill can keep fetching and writing Feishu records on a schedule without a fresh manual prompt each time.
The skill documents an optional scheduled OpenClaw cron job that repeatedly invokes the scraping workflow. It is disclosed and user-configured, not hidden.
cron:\n jobs:\n - name: "AI选题追踪" ... expr: "0 9,21 * * *" ... message: "抓取选题"
Enable cron only if desired, keep the schedule limited, and document how to pause or remove the job.
The runtime behavior depends partly on locally installed third-party tools.
The README asks the user to install an unpinned external package. This is a normal setup step for this scraping use case, but it depends on package-source trust.
pip install yt-dlp
Install yt-dlp from a trusted package source, consider pinning a known-good version, and avoid running the skill in highly sensitive environments without review.
