Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reach

Give your AI agent eyes to see the entire internet. 7500+ GitHub stars. Search and read 14 platforms: Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongS...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
6 · 4.8k · 341 current installs · 351 all-time installs
by Pnant (@panniantong)
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (multi-platform web reader) aligns with the commands and channels referenced (curl, yt-dlp, gh, mcporter, etc.). However, the skill does not declare that it needs authentication assets (cookies, xsec_token) or third-party proxies even though many platform integrations explicitly require them. Asking the user to provide cookies/tokens for channels like XiaoHongShu and WeChat is consistent with the feature set but is not declared in required credentials.
Instruction Scope
SKILL.md instructs the agent to run many shell commands and to hand off URLs/content to external services (e.g., https://r.jina.ai/URL and raw.githubusercontent.com). It explicitly instructs using browser cookies and tokens and to run local Python tools under ~/.agent-reach (Camoufox/wechat-article-for-ai). These instructions enable the agent to collect and transmit sensitive authentication cookies/tokens and to execute third-party code without an explicit install or trust step — a potential data-exfiltration and execution risk.
Install Mechanism
There is no install spec (instruction-only), which lowers direct install risk. But the guide presumes many external tools are present or will be installed (yt-dlp, gh, mcporter, xreach, miku_ai, feedparser, undici/npm packages). The lack of an install spec makes it unclear how those binaries get provisioned and whether the required code comes from trusted releases.
Credentials
Registry metadata lists no required env vars/credentials, yet the runtime instructions ask for cookies, xsec_token, proxy URLs, and possibly npm packages (undici) and other tokens. Requesting full browser cookies or session tokens is sensitive and not justified or gated in the skill metadata; this mismatch increases risk of accidental credential exposure.
Persistence & Privilege
The skill advises storing persistent data under ~/.agent-reach and temporary files in /tmp. While not marked always:true, this persistent storage location lets the skill (or tools it runs) retain cookies, tokens, or downloaded code across runs. The skill giving itself a home directory plus instructions to run local tools that may be downloaded later raises persistence and surprise-execution risk.
What to consider before installing
This skill appears to be a coherent multi-platform web reader, but it implicitly asks you for sensitive cookies/tokens and to run many external tools and third-party proxies without declaring those needs. Before installing or using it:

1. Inspect the upstream GitHub repo and the exact scripts the skill would run (especially anything under ~/.agent-reach).
2. Never paste full browser cookies or session tokens into chat; prefer short-lived, scoped API tokens or read-only methods.
3. Run these tools inside a sandbox or container, and review any code downloaded into ~/.agent-reach.
4. Be cautious about the r.jina.ai and raw.githubusercontent.com calls: they forward your URLs/content to third parties.
5. If you must use the skill, restrict network access, check what the agent will store persistently, and require the agent to ask for explicit approval before accepting any cookies/tokens.
6. If uncertain, decline to provide authentication data and use only public, read-only endpoints (or run the commands yourself).

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.1.0
latest: vk9788s492xcsjnqv6chtzc1jtx82ntz3

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Agent Reach — Usage Guide

Upstream tools for 13+ platforms. Call them directly.

Run agent-reach doctor to check which channels are available.

⚠️ Workspace Rules

Never create files in the agent workspace. Use /tmp/ for temporary output and ~/.agent-reach/ for persistent data.
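The workspace rule above can be captured in a small path helper. This is a minimal sketch, not part of the skill itself; the `output_path` function and its `persistent` flag are assumptions made for illustration.

```python
from pathlib import Path
import tempfile

# Persistent data lives under ~/.agent-reach/ (per the workspace rule above);
# temporary output goes to the system temp dir (normally /tmp on Linux).
PERSISTENT_DIR = Path.home() / ".agent-reach"

def output_path(name: str, persistent: bool = False) -> Path:
    """Return a path outside the agent workspace for the given file name."""
    base = PERSISTENT_DIR if persistent else Path(tempfile.gettempdir())
    base.mkdir(parents=True, exist_ok=True)
    return base / name
```

Any tool the agent runs can then write via `output_path("subs.vtt")` for scratch files or `output_path("cookies.json", persistent=True)` for data that survives across runs.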

Web — Any URL

curl -s "https://r.jina.ai/URL"
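The same call can be made from Python. Note that, as the scan above points out, this forwards the target URL and its content to a third party. A minimal sketch (the helper names are assumptions; Jina Reader itself only requires the target URL appended after its host):

```python
import urllib.request

READER_PREFIX = "https://r.jina.ai/"

def reader_url(url: str) -> str:
    """Jina Reader takes the target URL appended verbatim after its host."""
    return READER_PREFIX + url

def read_page(url: str, timeout: float = 30.0) -> str:
    """Fetch a page as LLM-friendly text via the Jina Reader proxy."""
    with urllib.request.urlopen(reader_url(url), timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")
```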

Web Search (Exa)

mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'
mcporter call 'exa.get_code_context_exa(query: "code question", tokensNum: 3000)'

Twitter/X (xreach)

xreach search "query" -n 10 --json          # search
xreach tweet URL_OR_ID --json                # read tweet (supports /status/ and /article/ URLs)
xreach tweets @username -n 20 --json         # user timeline
xreach thread URL_OR_ID --json               # full thread

YouTube (yt-dlp)

yt-dlp --dump-json "URL"                     # video metadata
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
                                             # download subtitles, then read the .vtt file
yt-dlp --dump-json "ytsearch5:query"         # search
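The subtitle step above says to download and then read the .vtt file. A minimal sketch of turning a WebVTT file into plain text, assuming standard cue formatting (the `vtt_to_text` helper is illustrative, not part of the skill):

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip WebVTT headers, cue timings, and inline tags, leaving plain text."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # Skip the WEBVTT header, blank lines, cue timing lines, and metadata.
        if (not line or line == "WEBVTT" or "-->" in line
                or line.startswith(("NOTE", "STYLE", "Kind:", "Language:"))):
            continue
        line = re.sub(r"<[^>]+>", "", line)  # drop <c>, <00:00:01.000>, etc.
        if not lines or lines[-1] != line:   # auto-subs often repeat lines
            lines.append(line)
    return "\n".join(lines)
```

Point it at the file yt-dlp wrote, e.g. `vtt_to_text(open("/tmp/VIDEO_ID.en.vtt").read())`.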

Bilibili (yt-dlp)

yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

Server IPs may get HTTP 412 errors. Use --cookies-from-browser chrome or configure a proxy.

Reddit

curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"

Server IPs may get HTTP 403 errors. Search via Exa instead, or configure a proxy.
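Reddit's public listing endpoints return JSON with posts nested under `data.children[*].data`. A minimal sketch of pulling out the useful fields from the parsed response (the `listing_posts` helper is an assumption for illustration; the JSON shape is Reddit's documented listing format):

```python
def listing_posts(listing: dict) -> list[dict]:
    """Extract title, score, and full URL from a Reddit listing payload."""
    return [
        {
            "title": child["data"]["title"],
            "score": child["data"].get("score", 0),
            "url": "https://www.reddit.com" + child["data"]["permalink"],
        }
        for child in listing["data"]["children"]
    ]
```

Feed it `json.loads()` of the curl output above, e.g. `listing_posts(json.loads(body))`.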

GitHub (gh CLI)

gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo

小红书 / XiaoHongShu (mcporter)

mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'

Requires login. Use Cookie-Editor to import cookies.

抖音 / Douyin (mcporter)

mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

No login needed.

微信公众号 / WeChat Articles

Search (miku_ai):

python3 -c "
import asyncio
from miku_ai import get_wexin_article
async def s():
    for a in await get_wexin_article('query', 5):
        print(f'{a[\"title\"]} | {a[\"url\"]}')
asyncio.run(s())
"

Read (Camoufox — bypasses WeChat anti-bot):

cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

WeChat articles cannot be read with Jina Reader or curl. Must use Camoufox.

LinkedIn (mcporter)

mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

RSS (feedparser)


python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

  • Channel not working? Run agent-reach doctor — shows status and fix instructions.
  • Twitter fetch failed? Ensure undici is installed: npm install -g undici. Configure proxy: agent-reach configure proxy URL.

Setting Up a Channel ("帮我配 XXX", i.e. "help me set up XXX")

If a channel needs setup (cookies, Docker, etc.), fetch the install guide: https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md

The user only provides cookies. Everything else is your job.

Files

1 total
