Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

reddit-digest

v1.0.3

Fetches the top posts from a single specified subreddit over the past 24 hours, retrieves each post's details and comments, and generates a summary, key takeaways, actionable advice, inspiration, and social-media share copy, output as a daily-digest Markdown document. Use when the user says "fetch/summarize Reddit r/xxx", "generate a Reddit daily digest", or "reddit digest".

by Alex Redisread (@redisread)
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's purpose (fetch recent hot posts from a subreddit and produce a Markdown digest) aligns with the included script and templates. However, the runtime relies heavily on an external binary 'autocli' to fetch post details, yet the registry metadata lists no required binaries and no install instructions — a clear mismatch (the skill cannot run as described without autocli).
Instruction Scope
SKILL.md instructs the agent to create directories, run a one-line background pipeline that calls 'autocli' and the bundled Python script, and poll temp directories/logs. It also references environment variables (REDDIT_DIGEST_BASE_DIR, REDDIT_DIGEST_SUBREDDIT) that are not declared in the skill metadata and embeds a hard-coded default path (/Users/victor/...) that is user-specific. These instructions grant the skill broad discretion to read/write local files and to start subprocesses — expected for a fetcher but the undocumented dependencies and hard-coded paths are problematic.
Install Mechanism
There is no install spec (instruction-only plus a local script). That limits what the skill itself writes to disk during install. The risk is primarily from runtime execution of an external binary ('autocli'), not from an installer fetched by the skill.
Credentials
The skill declares no required env vars or credentials, yet SKILL.md uses two env variables for configuration and the script invokes 'autocli' which may rely on browser sessions or local credentials. The lack of declared required binaries/credentials hides the real runtime needs and potential access to browser-stored auth/session state.
Persistence & Privilege
The skill is not set to always:true and does not request to modify other skills or global agent settings. It performs local file I/O within configured paths only, which is consistent with its purpose.
What to consider before installing
This skill appears to do what it says (fetch Reddit posts and produce a Markdown digest), but there are red flags you should address before installing or running it:

- 'autocli' is a required runtime dependency (used by both the SKILL.md pipeline and the Python script) but it is not declared in the skill metadata. Verify you have a trusted 'autocli' binary installed and understand what permissions/sessions it uses (it is described as browser-based and may use your browser cookies/sessions).
- SKILL.md references environment variables (REDDIT_DIGEST_BASE_DIR, REDDIT_DIGEST_SUBREDDIT) and uses a hard-coded default path (/Users/victor/...). Update those defaults and provide explicit paths to avoid writing into unexpected locations.
- The skill runs a one-line background shell pipeline that writes logs to /tmp and creates temp directories. Review the exact command before running, and consider running it in an isolated environment (container or dedicated account) first.
- Inspect the bundled script (scripts/fetch_post_details.py) yourself: it uses subprocess.run to invoke 'autocli', so ensure there are no unexpected network endpoints or commands you don't want executed.

If you decide to proceed: install 'autocli' from a trusted source, set REDDIT_DIGEST_BASE_DIR to a safe directory you control, avoid running it while logged into sensitive browser accounts, and run the pipeline manually the first time to confirm behavior. If you cannot verify 'autocli', treat the skill as unsafe to run.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9790pr19v3djmwspvfc4te43984hrhp
102 downloads
0 stars
4 versions
Updated 1w ago
v1.0.3
MIT-0

Reddit Subreddit Daily Digest

Configuration

Precedence: command-line argument > environment variable > default value

| Argument | Environment variable | Default |
|---|---|---|
| --base-dir | REDDIT_DIGEST_BASE_DIR | /Users/victor/Desktop/resource/daily-info/reddit |
| --subreddit | REDDIT_DIGEST_SUBREDDIT | ClaudeAI |

Output path: {BASE_DIR}/{YYYYMMDD}/{subreddit_name}/{subreddit_name}-{YYYYMMDD}.md
Temp directory: {BASE_DIR}/{YYYYMMDD}/{subreddit_name}/temp/
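The path layout above can be sketched as a small helper. This is an illustrative sketch, not part of the bundled script; the function name `digest_paths` is hypothetical.

```python
from datetime import date
from pathlib import Path

def digest_paths(base_dir: str, subreddit: str, day: date):
    """Build the digest output file and temp directory for a given day."""
    ymd = day.strftime("%Y%m%d")
    root = Path(base_dir) / ymd / subreddit
    return root / f"{subreddit}-{ymd}.md", root / "temp"

out, temp = digest_paths("/data/reddit", "ClaudeAI", date(2024, 5, 1))
# out  -> /data/reddit/20240501/ClaudeAI/ClaudeAI-20240501.md
# temp -> /data/reddit/20240501/ClaudeAI/temp
```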

Dependencies

  • autocli: fetches the hot-post list and post details
  • scripts/fetch_post_details.py: fetches all post details serially in a batch and saves them to the temp directory

Execution Flow

Step 1: Initialization

Determine the BASE_DIR, SUBREDDIT, and DATE (YYYYMMDD) variables, then create the directory:

mkdir -p {BASE_DIR}/{DATE}/{SUBREDDIT}/temp

Step 2: Batch-Fetch the Post List and Details

Note: the following command must be written as a single line (no line breaks) and run in the terminal as a background job (is_background=true) to avoid the 3-minute timeout.

SKILL_DIR="/path/to/skills/reddit-digest" && TEMP_DIR="{BASE_DIR}/{DATE}/{SUBREDDIT}/temp" && autocli reddit subreddit {SUBREDDIT} --limit 20 --sort top --time day --format json | python3 "$SKILL_DIR/scripts/fetch_post_details.py" --temp-dir "$TEMP_DIR" > /tmp/reddit_fetch_{SUBREDDIT}.log 2>&1

SKILL_DIR is typically {workspace}/skills/reddit-digest (the directory where this skill lives).

The script serially invokes autocli reddit read {url} -f json to fetch posts one at a time (autocli is browser-based; concurrent fetches cause tab conflicts, so workers defaults to 1), saving each post as:

{TEMP_DIR}/{rank:02d}-{sanitized_title}.json
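A filename following this pattern could be produced as below. This is a sketch; the exact sanitization rules of the bundled script are not documented, so the `post_filename` helper and its collapse-to-dash rule are assumptions.

```python
import re

def post_filename(rank: int, title: str) -> str:
    """Build {rank:02d}-{sanitized_title}.json for one fetched post."""
    # Assumed sanitization: runs of non-alphanumerics collapse to one dash,
    # trimmed at the edges and capped at 60 characters.
    safe = re.sub(r"[^A-Za-z0-9_-]+", "-", title).strip("-")[:60]
    return f"{rank:02d}-{safe}.json"

post_filename(1, "Claude 4 is out!")  # "01-Claude-4-is-out.json"
```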

Each file has the structure:

{
  "rank": 1,
  "meta": { "author", "comments", "title", "upvotes", "url" },
  "content": [ { "author", "score", "text", "type" }, ... ],
  "error": null
}

The content field: type=POST is the original post body, type=L0 a top-level comment, and type=L1 a reply.
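Given that schema, the content entries can be separated by type like this. A minimal sketch; the `split_content` name is hypothetical and not part of the skill.

```python
def split_content(entries):
    """Group a post's content entries: POST body, L0 comments, L1 replies."""
    post_body = [e["text"] for e in entries if e["type"] == "POST"]
    top_level = [e for e in entries if e["type"] == "L0"]
    replies = [e for e in entries if e["type"] == "L1"]
    return post_body, top_level, replies
```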

Waiting for completion: after the background job starts, check ls {TEMP_DIR}/ or read the log /tmp/reddit_fetch_{SUBREDDIT}.log every 30 seconds to confirm progress (20 posts in total; the job is done once all of them have appeared).
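The waiting step could be automated with a small polling loop like the following. This is an illustrative sketch, not part of the skill; `wait_for_posts` is a hypothetical name.

```python
import glob
import time

def wait_for_posts(temp_dir: str, expected: int = 20, interval: int = 30):
    """Poll temp_dir until `expected` post JSON files have appeared."""
    while True:
        done = sorted(glob.glob(f"{temp_dir}/*.json"))
        if len(done) >= expected:
            return done
        time.sleep(interval)
```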

Fault tolerance: when a fetch fails, the error field records the reason; generate only a brief summary from the meta metadata.

Step 3: Analyze Each Post

Read {TEMP_DIR}/*.json one by one, analyze them in rank order, and filter out low-value posts.
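Loading the files in rank order, with the fault-tolerance rule applied, might look like this. A sketch under assumptions: `load_posts` is a hypothetical helper, and clearing `content` on error is one way to honor the "meta only" fallback described earlier.

```python
import json
from pathlib import Path

def load_posts(temp_dir: str):
    """Load per-post JSON files, sorted by rank; errored fetches keep meta only."""
    posts = []
    for path in sorted(Path(temp_dir).glob("*.json")):
        data = json.loads(path.read_text())
        if data.get("error"):
            data["content"] = []  # fetch failed: summarize from meta alone
        posts.append(data)
    return sorted(posts, key=lambda p: p["rank"])
```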

See references/output-template.md for the output document structure and filtering rules.

Step 4: Aggregate and Output

Merge all analysis results sorted by rank and write the final document. All content is written in Simplified Chinese.
