Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reach Local

v1.0.0

Give your AI agent eyes to see the entire internet. 7500+ GitHub stars. Search and read 14 platforms: Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongS...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for larry-at/agent-reach-local.

Prompt preview: Install & Setup
Install the skill "Agent Reach Local" (larry-at/agent-reach-local) from ClawHub.
Skill page: https://clawhub.ai/larry-at/agent-reach-local
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-reach-local

ClawHub CLI


npx clawhub@latest install agent-reach-local
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The skill claims 'one command install' and 'zero config for 8 channels' and declares no required binaries or env vars, but the SKILL.md repeatedly instructs using many external tools (yt-dlp, gh, mcporter, xreach, python packages like miku_ai, feedparser, undici npm, Camoufox) and browser cookies. These runtime dependencies and credential needs are not declared in the metadata and are disproportionate to the 'no requirements' claim.
Instruction Scope
Instructions tell the agent to fetch arbitrary URLs via r.jina.ai, run command-line tools, call mcporter commands, run Python scripts that bypass anti-bot (Camoufox), and prompt the user to provide cookies for login-capable channels. That means the agent would be asked to read/store credentials and send content to third-party proxies — behavior broader than a simple 'read web' skill and not explicitly limited or disclosed in metadata.
Install Mechanism
No install spec is provided in the registry, yet the SKILL.md references a 'one command install', links to a GitHub raw install.md, and instructs installing tools (npm undici, gh, yt-dlp, etc.). The absence of a packaged or reviewed install mechanism, combined with instructions to fetch and run upstream tools, is incoherent and increases risk: installation would require manual steps that execute external code.
Credentials
Metadata declares no required env vars or credentials, but the instructions explicitly require user cookies, may ask for proxy URLs, and involve tools that need authentication for many platforms. Asking for session cookies (browser export) is sensitive and not represented in requires.env; this is disproportionate and risky without clear justification or safeguards.
Persistence & Privilege
The skill does not set always:true and is not force-installed, but SKILL.md instructs using ~/.agent-reach for persistent data and warns against the agent workspace. That implies storing credentials and state on disk under the user's home directory — a legitimate design choice but one that raises persistence and credential storage concerns which the metadata does not disclose.
Scan Findings in Context
[no-code-scan] expected: The regex-based scanner found no code files to analyze because this is instruction-only (SKILL.md). That explains the lack of code findings but means the security surface is entirely the prose instructions.
What to consider before installing
This skill's instructions require many command-line tools, third-party proxies (r.jina.ai), and user-provided cookies/credentials even though the registry metadata claims no requirements. Before installing or using it:

  • Review the upstream GitHub install docs and any install scripts line by line.
  • Do not paste full browser session cookies or long-lived tokens into chat; prefer read-only API tokens or temporary credentials where possible.
  • Be aware that r.jina.ai and similar proxies receive the requested URLs, and potentially the scraped content, as a third party.
  • If you must test, run installs and commands in an isolated environment (container or VM) and inspect ~/.agent-reach before trusting it with credentials.
  • Consider disabling autonomous invocation for this skill until you have validated the install and credential handling.
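One way to act on this before installing is to grep the skill's instructions for sensitive asks. A rough triage sketch in Python — the patterns and the sample excerpt are illustrative stand-ins, not a complete audit:

```python
import re

# Illustrative excerpt standing in for the skill's SKILL.md content
skill_md = """Use Cookie-Editor to import cookies.
curl -s "https://r.jina.ai/URL"
npm install -g undici"""

# Each label maps to a pattern that suggests a sensitive instruction
checks = {
    "asks for credentials": r"cookie|token|password",
    "routes traffic via a third-party proxy": r"r\.jina\.ai",
    "installs external packages": r"npm install|pip install",
}

for label, pattern in checks.items():
    if re.search(pattern, skill_md, re.IGNORECASE):
        print("review:", label)
```

Anything flagged deserves a manual read of the surrounding instruction before you allow the skill to run it.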

Like a lobster shell, security has layers — review code before you run it.

latest · vk97akc5htq5wp1dyj14y7p76w983m1mz
197 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0

Agent Reach — Usage Guide

Upstream tools for 13+ platforms. Call them directly.

Run agent-reach doctor to check which channels are available.

⚠️ Workspace Rules

Never create files in the agent workspace. Use /tmp/ for temporary output and ~/.agent-reach/ for persistent data.

Web — Any URL

curl -s "https://r.jina.ai/URL"
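The reader works by prefixing the target URL, which also means the full URL (and the fetched page) passes through Jina's servers. A minimal sketch of the URL shape, assuming the target is appended verbatim after the proxy host:

```python
def jina_reader_url(target: str) -> str:
    # r.jina.ai proxies the page and returns it as readable text;
    # the full target URL is appended directly after the proxy host
    return "https://r.jina.ai/" + target

print(jina_reader_url("https://example.com/page"))
# https://r.jina.ai/https://example.com/page
```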

Web Search (Exa)

mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'
mcporter call 'exa.get_code_context_exa(query: "code question", tokensNum: 3000)'

Twitter/X (xreach)

xreach search "query" -n 10 --json          # search
xreach tweet URL_OR_ID --json                # read tweet (supports /status/ and /article/ URLs)
xreach tweets @username -n 20 --json         # user timeline
xreach thread URL_OR_ID --json               # full thread

YouTube (yt-dlp)

yt-dlp --dump-json "URL"                     # video metadata
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
                                             # download subtitles, then read the .vtt file
yt-dlp --dump-json "ytsearch5:query"         # search
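The subtitle step above leaves a .vtt file to read. A stdlib sketch that keeps only the spoken lines — the cue text is a made-up sample, and real files may also contain cue numbers and styling blocks that would need extra filtering:

```python
def vtt_text(vtt: str) -> str:
    # Keep only cue payload lines: skip the WEBVTT header,
    # blank lines, and "start --> end" timing lines.
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or "-->" in line:
            continue
        kept.append(line)
    return " ".join(kept)

sample = ("WEBVTT\n\n"
          "00:00:00.000 --> 00:00:02.000\nHello world\n\n"
          "00:00:02.000 --> 00:00:04.000\nSecond line\n")
print(vtt_text(sample))  # Hello world Second line
```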

Bilibili (yt-dlp)

yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

Server IPs may get HTTP 412. Use --cookies-from-browser chrome or configure a proxy.

Reddit

curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"

Server IPs may get HTTP 403. Search via Exa instead, or configure a proxy.
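When the JSON endpoints do respond, the listing nests posts under data.children. A parsing sketch against a made-up fragment in that shape:

```python
import json

# Made-up fragment mimicking the shape of a Reddit listing response
sample = json.dumps({
    "data": {"children": [
        {"data": {"title": "Show /r/foo: a thing", "permalink": "/r/foo/comments/1/"}},
        {"data": {"title": "Question", "permalink": "/r/foo/comments/2/"}},
    ]}
})

for child in json.loads(sample)["data"]["children"]:
    post = child["data"]
    # permalink is site-relative, so prepend the host to get a usable URL
    print(post["title"], "->", "https://www.reddit.com" + post["permalink"])
```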

GitHub (gh CLI)

gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo

小红书 / XiaoHongShu (mcporter)

mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'

Requires login. Use Cookie-Editor to import cookies.
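Cookie-Editor exports cookies as a JSON array of objects with at least name and value fields. A sketch of turning such an export into a Cookie header string — the cookie names and values below are invented:

```python
import json

# Invented sample in the shape of a Cookie-Editor JSON export
export = json.dumps([
    {"name": "web_session", "value": "abc123", "domain": ".xiaohongshu.com"},
    {"name": "a1", "value": "xyz789", "domain": ".xiaohongshu.com"},
])

cookies = json.loads(export)
# A Cookie header is "name=value" pairs joined with "; "
header = "; ".join(f"{c['name']}={c['value']}" for c in cookies)
print(header)  # web_session=abc123; a1=xyz789
```

Session cookies grant full account access; treat the export file like a password and delete it after importing.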

抖音 / Douyin (mcporter)

mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

No login needed.

微信公众号 / WeChat Articles

Search (miku_ai):

python3 -c "
import asyncio
from miku_ai import get_wexin_article
async def s():
    for a in await get_wexin_article('query', 5):
        print(f'{a[\"title\"]} | {a[\"url\"]}')
asyncio.run(s())
"

Read (Camoufox — bypasses WeChat anti-bot):

cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

WeChat articles cannot be read with Jina Reader or curl. Must use Camoufox.

LinkedIn (mcporter)

mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

RSS (feedparser)


python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

  • Channel not working? Run agent-reach doctor — shows status and fix instructions.
  • Twitter fetch failed? Ensure undici is installed: npm install -g undici. Configure proxy: agent-reach configure proxy URL.

Setting Up a Channel ("帮我配 XXX", i.e. "help me set up XXX")

If a channel needs setup (cookies, Docker, etc.), fetch the install guide: https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md

The user only provides cookies; everything else is your job.
