Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reach

v0.1.0

Give your AI agent eyes to see the entire internet. Install and configure upstream tools for Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongShu, Douyi...

8 stars · 1.9k downloads · 11 current · 14 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ma-star/skill-9.

Prompt preview (Install & Setup):
Install the skill "Agent Reach" (ma-star/skill-9) from ClawHub.
Skill page: https://clawhub.ai/ma-star/skill-9
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install ma-star/skill-9

ClawHub CLI


npx clawhub@latest install skill-9
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (install/configure upstream platform tools) matches the SKILL.md actions: installing an 'agent-reach' installer, tooling like xreach, mcporter, yt-dlp, and guiding cookie/proxy configuration. However, the skill's metadata declares no required env vars or config paths even though the instructions will store tokens/config under ~/.agent-reach and read browser cookies — a proportionality mismatch.
Instruction Scope
SKILL.md instructs the agent to collect and accept raw authentication cookies (paste 'Header String') and to optionally auto-extract cookies from a local browser ('--from-browser chrome'), which implies reading local browser storage. It also directs installing and running upstream CLIs and writing persistent config under ~/.agent-reach. These are sensitive operations (cookie/token collection, local file access) not declared in the skill metadata and grant broad access to user accounts.
Install Mechanism
There is no install spec in the registry, but the runtime instructions tell users to run pip install against a GitHub archive URL (main.zip). Installing from an unpinned branch/archive pulls arbitrary code that may change; the installer then pulls/sets up many third-party tools. This is higher risk than using a pinned release or reviewing the package beforehand.
Credentials
The registry lists no required credentials, yet the instructions require sensitive data (session cookies, proxy credentials, 'API Key' for third-party services) and store them under ~/.agent-reach. Asking users to paste cookie header strings or enabling browser cookie extraction is a direct request for secrets that is not reflected in metadata and increases the chance of accidental credential exposure.
Persistence & Privilege
The skill will create files and persistent configs under ~/.agent-reach and /tmp per the instructions. always:false (default) is appropriate, but persistent storage of credentials combined with autonomous agent invocation (default allowed) raises risk: stored secrets could be reused or accessed later. The skill does not declare these config paths in metadata.
What to consider before installing
This skill appears to do what it claims (set up access tools for many platforms), but it asks for high-risk actions that you should not take lightly. Before installing or running it:

  1. Do not paste real primary-account cookies; use a dedicated throwaway account if you must test.
  2. Avoid the '--from-browser' auto-extract option unless you run the installer locally and trust the codebase.
  3. Prefer minimally scoped OAuth/API tokens over raw session cookies.
  4. Do not pip install unpinned archives from main branches without reviewing the repository; ask the author for a pinned release or a reproducible install spec, and review the code at https://github.com/Panniantong/agent-reach if possible.
  5. Expect the tool to write persistent credentials under ~/.agent-reach; inspect them, and securely delete or revoke them when no longer needed.
  6. Consider running this in an isolated VM/container, and revoke any cookies/tokens after use.

The registry metadata not declaring required credentials or config paths is a red flag; request clarification from the publisher before proceeding.

Like a lobster shell, security has layers — review code before you run it.

latest: vk973s7y9g65a9t2wf23ksk8w6982ajgk
1.9k downloads · 8 stars · 1 version
Updated 10h ago · v0.1.0 · MIT-0

Agent Reach

Install and configure upstream tools for 13+ platforms. After setup, call them directly — no wrapper layer.

⚠️ Workspace Rules

Never create files, clone repos, or write output in the agent workspace. Use these directories instead:

| Purpose | Directory |
| --- | --- |
| Temporary output (subtitles, downloads) | /tmp/ |
| Upstream tool repos | ~/.agent-reach/tools/ |
| Config & tokens | ~/.agent-reach/ |

Violating this will pollute the user's workspace and degrade their agent experience over time.

Setup

pip install https://github.com/Panniantong/agent-reach/archive/main.zip
agent-reach install --env=auto
agent-reach doctor

agent-reach install auto-detects your environment and installs the core dependencies (Node.js, mcporter, the xreach CLI, the gh CLI, yt-dlp, feedparser). Run agent-reach doctor to see what's active.

Management

agent-reach doctor        # channel status overview
agent-reach watch         # quick health + update check
agent-reach check-update  # check for new versions

Configure channels

agent-reach configure twitter-cookies "auth_token=xxx; ct0=yyy"
agent-reach configure proxy http://user:pass@ip:port
agent-reach configure --from-browser chrome    # auto-extract cookies from local browser

Configuring a channel ("help me set up XXX")

When a user asks to configure/enable any channel:

  1. Run agent-reach doctor
  2. Find the channel — it shows status (✅/⚠️/⬜) and what to do next
  3. Execute what you can automatically (install packages, start services)
  4. For human-required steps (paste cookies), tell the user what to do
  5. Run agent-reach doctor again to verify

Do NOT memorize per-channel steps. Always rely on doctor output.
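The doctor-driven loop above can be sketched as a tiny status parser. The line format assumed here (status icon, then channel name, then an optional hint) is purely hypothetical for illustration; the real doctor output is the only authority.

```python
# Hypothetical sketch: classify channels from `agent-reach doctor`-style
# status lines. The format is an assumption, not the tool's real output.
STATUS = {"✅": "ready", "⚠️": "needs-attention", "⬜": "not-configured"}

def classify(doctor_output: str) -> dict:
    """Map each channel name to its status, based on the leading icon."""
    channels = {}
    for line in doctor_output.splitlines():
        line = line.strip()
        for icon, state in STATUS.items():
            if line.startswith(icon):
                # channel name is the first word after the icon
                channels[line[len(icon):].strip().split()[0]] = state
    return channels

sample = """\
✅ github
⚠️ twitter  (cookies expired: paste a new Header String)
⬜ reddit
"""
print(classify(sample))
```

An agent following steps 1–5 would act only on the `needs-attention` and `not-configured` entries, then re-run doctor to confirm the state changed.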

Cookie import (applies to all platforms that require login)

⚠️ Important: logging in with cookies carries a ban risk on these platforms. Remind the user to use a dedicated throwaway account.

For all platforms that need cookies (Twitter, XiaoHongShu, etc.), prefer importing via Cookie-Editor:

  1. The user logs in to the platform in their own browser
  2. Install the Cookie-Editor Chrome extension
  3. Click the extension → Export → Header String
  4. Send the exported string to the agent

Users on a local machine can also run agent-reach configure --from-browser chrome to extract cookies automatically in one step.

QR-code login is a fallback (only when a local browser is available); Cookie-Editor is simpler and more reliable.
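Before handing a Header String to `agent-reach configure`, it can help to check that the expected tokens are actually present in the export. This helper and the required-token set for Twitter (`auth_token`, `ct0`) are illustrative assumptions, not part of agent-reach:

```python
# Hedged sketch: split a Cookie-Editor "Header String" export into
# name/value pairs and check for the tokens Twitter login needs.
def parse_cookie_header(header: str) -> dict:
    """Parse 'k1=v1; k2=v2; ...' into a dict of cookie names to values."""
    pairs = (p.strip() for p in header.split(";") if "=" in p)
    return dict(p.split("=", 1) for p in pairs)

cookies = parse_cookie_header("auth_token=xxx; ct0=yyy; lang=en")
missing = {"auth_token", "ct0"} - cookies.keys()
print("ready" if not missing else f"missing: {sorted(missing)}")
```

Treat the parsed values as secrets: never log them, and revoke the session afterwards if you pasted cookies from a real account.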

Other human actions

  • Proxy: Reddit/Bilibili/XiaoHongShu may block server IPs — suggest a residential proxy if on a server

Using Upstream Tools Directly

After agent-reach install, call the upstream tools directly.

Note: agent-reach is an installer and config tool — it does NOT have read, search, or content-fetching commands. Use the upstream tools below instead.

Twitter/X (xreach CLI)

# Search tweets
xreach search "query" --json -n 10

# Read a specific tweet
xreach tweet https://x.com/user/status/123 --json

# Read a user's timeline
xreach tweets @username --json -n 20

YouTube (yt-dlp)

⚠️ yt-dlp needs a JS runtime to download from YouTube; agent-reach install configures Node.js as that runtime automatically. If you hit "Sign in to confirm you're not a bot", YouTube is blocking your IP as a scraper: switch proxies or add cookies.

# Get video metadata
yt-dlp --dump-json "https://www.youtube.com/watch?v=xxx"

# Download subtitles only
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
# Then read the .vtt file

# Search (yt-dlp ytsearch)
yt-dlp --dump-json "ytsearch5:query"

# If "no JS runtime" warning: ensure Node.js is installed, then run:
#   mkdir -p ~/.config/yt-dlp && echo "--js-runtimes node" >> ~/.config/yt-dlp/config
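Once the subtitles are on disk, the .vtt file still contains timestamps and inline cue tags. A minimal sketch of reducing one to plain text, assuming the common WEBVTT shape that yt-dlp writes (header, timing lines, repeated auto-sub cues):

```python
# Sketch: strip a WebVTT subtitle file (e.g. /tmp/<id>.<lang>.vtt from the
# yt-dlp commands above) down to its plain text.
import re

def vtt_to_text(vtt: str) -> str:
    """Drop headers, timings, and cue tags; de-duplicate repeated lines."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT" or "-->" in line
                or line.isdigit() or line.startswith(("Kind:", "Language:"))):
            continue
        lines.append(re.sub(r"<[^>]+>", "", line))  # remove inline cue tags
    # auto-generated subs repeat each line across overlapping cues
    deduped = [l for i, l in enumerate(lines) if i == 0 or l != lines[i - 1]]
    return " ".join(deduped)

sample = """WEBVTT

00:00:00.000 --> 00:00:02.000
Hello <c>world</c>

00:00:02.000 --> 00:00:04.000
Hello world
"""
print(vtt_to_text(sample))
```

Real auto-sub files can be messier (positioning attributes, per-word timestamps), so treat this as a starting point rather than a full parser.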

Bilibili (yt-dlp)

⚠️ Server IPs may be blocked by Bilibili (HTTP 412). Access through a proxy, or add --cookies-from-browser chrome.

# Get video metadata
yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"

# Download subtitles
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

# If blocked (412 / login required):
yt-dlp --cookies-from-browser chrome --dump-json "URL"

Reddit (JSON API)

# Read a subreddit
curl -s "https://www.reddit.com/r/python/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"

# Read a post with comments
curl -s "https://www.reddit.com/r/python/comments/POST_ID.json" -H "User-Agent: agent-reach/1.0"

# Search
curl -s "https://www.reddit.com/search.json?q=query&limit=10" -H "User-Agent: agent-reach/1.0"

Note: on servers, Reddit may block your IP; use a proxy, or search via Exa instead.
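The listing JSON these endpoints return nests posts under data.children. A small offline sketch of pulling out titles, scores, and permalinks; the sample payload is a hand-made stand-in shaped like a real response, not captured API output:

```python
# Sketch: extract posts from a Reddit listing JSON string, as returned by
# the hot.json / search.json endpoints above.
import json

sample = json.dumps({
    "kind": "Listing",
    "data": {"children": [
        {"kind": "t3", "data": {"title": "Post A", "score": 42,
                                "permalink": "/r/python/comments/abc/post_a/"}},
        {"kind": "t3", "data": {"title": "Post B", "score": 7,
                                "permalink": "/r/python/comments/def/post_b/"}},
    ]},
})

def extract_posts(raw: str):
    """Return (title, score, full_url) tuples from a listing JSON string."""
    listing = json.loads(raw)
    return [
        (c["data"]["title"], c["data"]["score"],
         "https://www.reddit.com" + c["data"]["permalink"])
        for c in listing["data"]["children"]
    ]

for title, score, url in extract_posts(sample):
    print(f"{score:>4}  {title}  {url}")
```

Comment threads (the POST_ID.json endpoint) return a two-element array instead of a single listing, so they need slightly different handling.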

小红书 / XiaoHongShu (mcporter + xiaohongshu-mcp)

⚠️ Login required. Import cookies via Cookie-Editor or log in with a QR code.

# Search notes
mcporter call 'xiaohongshu.search_feeds(keyword: "query")'

# Get note details (including comments)
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'

# Fetch all comments
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'

# Publish an image/text note
mcporter call 'xiaohongshu.publish_content(title: "title", content: "body", images: ["/path/to/img.jpg"], tags: ["food"])'

# Publish a video note
mcporter call 'xiaohongshu.publish_with_video(title: "title", content: "body", video: "/path/to/video.mp4", tags: ["vlog"])'

Other features (likes, favorites, comments, user profiles, etc.): npx mcporter list xiaohongshu

抖音 / Douyin (mcporter + douyin-mcp-server)

# Parse Douyin video info (share link → title, author, watermark-free video URL, etc.)
mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'

# Get a watermark-free video download link
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

# AI-extract the video's speech transcript (requires a SiliconFlow API key)
mcporter call 'douyin.extract_douyin_text(share_link: "https://v.douyin.com/xxx/")'

No login is needed to parse videos. Both Douyin share links and direct links are supported.

GitHub (gh CLI)

# Search repos
gh search repos "query" --sort stars --limit 10

# View a repo
gh repo view owner/repo

# Search code
gh search code "query" --language python

# List issues
gh issue list -R owner/repo --state open

# View a specific issue/PR
gh issue view 123 -R owner/repo

Web — Any URL (Jina Reader)

# Read any webpage as markdown
curl -s "https://r.jina.ai/URL" -H "Accept: text/markdown"

# Search the web
curl -s "https://s.jina.ai/query" -H "Accept: text/markdown"

Exa Search (mcporter + exa MCP)

# Web search
mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'

# Code search (GitHub, StackOverflow, docs)
mcporter call 'exa.get_code_context_exa(query: "how to parse JSON in Python", tokensNum: 3000)'

# Company research
mcporter call 'exa.company_research_exa(companyName: "OpenAI")'

LinkedIn (mcporter + linkedin-scraper-mcp)

# View a profile
mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'

# Search people
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

# View company
mcporter call 'linkedin.get_company_profile(linkedin_url: "https://linkedin.com/company/xxx")'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

Boss直聘 (mcporter + mcp-bosszp)

# Browse recommended jobs
mcporter call 'bosszhipin.get_recommend_jobs_tool(page: 1)'

# Search jobs
mcporter call 'bosszhipin.search_jobs_tool(keyword: "Python", city: "北京", page: 1)'

# View job details
mcporter call 'bosszhipin.get_job_detail_tool(job_url: "https://www.zhipin.com/job_detail/xxx")'

Fallback: curl -s "https://r.jina.ai/https://www.zhipin.com/job_detail/xxx"

WeChat Official Accounts / 微信公众号 (wechat-article-for-ai + miku_ai)

Search (miku_ai — Sogou WeChat search):

# Search WeChat articles by keyword
python3 -c "
import asyncio
from miku_ai import get_wexin_article

async def search():
    articles = await get_wexin_article('AI Agent', 5)
    for a in articles:
        print(f'{a[\"title\"]} | {a[\"source\"]} | {a[\"date\"]}')
        print(f'  {a[\"url\"]}')

asyncio.run(search())
"

Read (Camoufox — stealth Firefox, bypasses WeChat anti-bot):

# Read a WeChat article (returns Markdown with images)
cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

# Run as MCP server (for AI agent integration)
python3 mcp_server.py

Typical agent workflow: search → get URLs → immediately read full content.

Note: WeChat articles require a real browser to render. Jina Reader and curl cannot read them.

RSS (feedparser)

python3 -c "
import feedparser
d = feedparser.parse('https://example.com/feed')
for e in d.entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

Twitter "fetch failed"

xreach CLI uses Node.js undici, which doesn't respect HTTP_PROXY. Solutions:

  1. Ensure undici is installed: npm install -g undici
  2. Configure proxy: agent-reach configure proxy http://user:pass@ip:port
  3. If still failing, use transparent proxy (Clash TUN, Proxifier)

Channel broken?

Run agent-reach doctor — it shows what's wrong and how to fix it.
