Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reach.Bak

Give your AI agent eyes to see the entire internet. 7500+ GitHub stars. Search and read 14 platforms: Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongS...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Reiy Leo (@reiy-leo)
fork of @panniantong/agent-reach (based on 1.1.0)
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The README-style SKILL.md describes broad web-reading capabilities across many platforms, but the skill declares no required binaries, no env vars, and no install spec. The instructions expect tools like agent-reach, mcporter, xreach, yt-dlp, gh, Python packages (miku_ai, feedparser), Camoufox code under ~/.agent-reach, and npm packages (undici) — none of which are declared. This is a capability/requirement mismatch.
Instruction Scope
Runtime instructions tell the agent to run arbitrary shell commands, call third-party fetch services (e.g., https://r.jina.ai/URL), run Python scripts from a persistent directory (~/.agent-reach/tools/...), and ask the user for cookies to log into platforms. Those steps allow the agent to fetch arbitrary URLs (potentially leaking sensitive content to third parties) and execute third‑party code on the host; the SKILL.md gives broad discretion to run many external tools.
Install Mechanism
There is no install spec despite claims of a 'one-command install' and an agent-reach doctor check. Because this skill is instruction-only, installation and code execution depend on actions the agent or user performs at runtime (cloning repos, running installers, npm/pip installs). The lack of a tracked, explicit install step increases risk: the user may be prompted to run remote install scripts or unverified code.
Credentials
requires.env is empty but the instructions explicitly expect the user to provide cookies and possibly other credentials (xsec_token, browser cookies) and to persist them under ~/.agent-reach. Sensitive data may be requested/stored even though the skill does not declare such needs. The skill also routes fetches through third‑party endpoints (r.jina.ai), which effectively sends user-supplied URLs/content to external services.
Persistence & Privilege
The guidance tells the agent to store persistent data under ~/.agent-reach and use /tmp for temp files. Persisting cookies or tokens in the user's home directory is normal for such tools but is a notable privilege: credentials and scraped content may be stored locally. The skill is not always-on, but autonomous invocation plus the ability to request/store credentials increases the blast radius.
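If you do test the skill, the persistence footprint described above is easy to audit. Below is a minimal sketch that walks ~/.agent-reach and lists every file with its size so you can spot stored cookies or tokens; the directory layout is whatever the tools happen to create, not a documented schema, so treat the path as an assumption.

```python
import os

def audit_dir(root):
    """List every file under root with its size in bytes."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            findings.append((path, os.path.getsize(path)))
    return findings

# Prints nothing if the directory does not exist yet.
for path, size in audit_dir(os.path.expanduser("~/.agent-reach")):
    print(f"{size:>8}  {path}")
```

Anything that looks like a cookie jar or token file here is a candidate for deletion once you stop using the skill.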
What to consider before installing
This skill appears to be a collection of command examples for many scrapers and connectors rather than a self-contained, audited plugin. Before installing or using it:

  • Do not paste account cookies or secrets unless you understand and trust the upstream project; prefer ephemeral/test accounts.
  • Review the upstream install docs and source code at the GitHub repo (the SKILL.md points to raw.install.md) before running any install commands or scripts.
  • Expect to manually install and inspect the required binaries (yt-dlp, gh, mcporter, xreach, Python packages, npm undici) rather than relying on the agent to auto-install them.
  • Be aware that some fetch commands send URLs and content to third-party services (r.jina.ai), which can leak sensitive page content; avoid sending private URLs.
  • Consider running in an isolated or disposable environment (container/VM) and restrict network access if you must test it.

The inconsistencies here (no declared requirements, yet many external dependencies and cookie use) are the reason for caution.
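Since none of the required binaries are declared, it helps to check up front which ones are actually on PATH before letting the agent attempt anything. A small sketch (the binary list is taken from the scan findings above, not from any declared requirements):

```python
import shutil

# Binaries the SKILL.md instructions assume but the skill never declares.
EXPECTED_BINARIES = ["agent-reach", "mcporter", "xreach", "yt-dlp", "gh"]

def missing_binaries(names):
    """Return the subset of names not found on PATH."""
    return [n for n in names if shutil.which(n) is None]

print("missing:", missing_binaries(EXPECTED_BINARIES))
```

Anything reported as missing would have to be installed by hand, which is exactly the step worth doing deliberately rather than delegating.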

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0 (latest: vk972c80frz6rkgyv2p0bxrm0dd83f7rx)


SKILL.md

Agent Reach — Usage Guide

Upstream tools for 13+ platforms. Call them directly.

Run agent-reach doctor to check which channels are available.

⚠️ Workspace Rules

Never create files in the agent workspace. Use /tmp/ for temporary output and ~/.agent-reach/ for persistent data.

Web — Any URL

curl -s "https://r.jina.ai/URL"

Web Search (Exa)

mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'
mcporter call 'exa.get_code_context_exa(query: "code question", tokensNum: 3000)'

Twitter/X (xreach)

xreach search "query" -n 10 --json          # search
xreach tweet URL_OR_ID --json                # read tweet (supports /status/ and /article/ URLs)
xreach tweets @username -n 20 --json         # user timeline
xreach thread URL_OR_ID --json               # full thread

YouTube (yt-dlp)

yt-dlp --dump-json "URL"                     # video metadata
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
                                             # download subtitles, then read the .vtt file
yt-dlp --dump-json "ytsearch5:query"         # search

Bilibili (yt-dlp)

yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

Server IPs may get 412. Use --cookies-from-browser chrome or configure proxy.

Reddit

curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"

Server IPs may get 403. Search via Exa instead, or configure proxy.
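The hot.json and search.json endpoints above return Reddit's standard listing shape. A minimal sketch of pulling titles and permalinks out of that payload, using an illustrative sample rather than a live fetch (the nesting shown is the commonly observed data/children/data structure, assumed here rather than guaranteed by any spec):

```python
import json

# Illustrative stand-in for a Reddit listing response (not real data).
sample = '''{"data": {"children": [
  {"data": {"title": "Example post", "permalink": "/r/example/comments/abc/"}}
]}}'''

def extract_posts(listing_json):
    """Pull (title, permalink) pairs out of a Reddit listing payload."""
    listing = json.loads(listing_json)
    return [(c["data"]["title"], c["data"]["permalink"])
            for c in listing["data"]["children"]]

print(extract_posts(sample))
```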

GitHub (gh CLI)

gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo

小红书 / XiaoHongShu (mcporter)

mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'

Requires login. Use Cookie-Editor to import cookies.

抖音 / Douyin (mcporter)

mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

No login needed.

微信公众号 / WeChat Articles

Search (miku_ai):

python3 -c "
import asyncio
from miku_ai import get_wexin_article
async def s():
    for a in await get_wexin_article('query', 5):
        print(f'{a[\"title\"]} | {a[\"url\"]}')
asyncio.run(s())
"

Read (Camoufox — bypasses WeChat anti-bot):

cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

WeChat articles cannot be read with Jina Reader or curl. Must use Camoufox.

LinkedIn (mcporter)

mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

RSS (feedparser)


python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

  • Channel not working? Run agent-reach doctor — shows status and fix instructions.
  • Twitter fetch failed? Ensure undici is installed: npm install -g undici. Configure proxy: agent-reach configure proxy URL.

Setting Up a Channel ("帮我配 XXX", i.e. "set up XXX for me")

If a channel needs setup (cookies, Docker, etc.), fetch the install guide: https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md

User only provides cookies. Everything else is your job.
