Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Skill 9 0.1.0

v1.0.0

Give your AI agent eyes to see the entire internet. Install and configure upstream tools for Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongShu, Douyi...

0 stars · 337 downloads · 1 version current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yiyi-9/skill-9-0-1-0.

Prompt Preview: Install & Setup
Install the skill "Skill 9 0.1.0" (yiyi-9/skill-9-0-1-0) from ClawHub.
Skill page: https://clawhub.ai/yiyi-9/skill-9-0-1-0
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install skill-9-0-1-0

ClawHub CLI

Package manager switcher

npx clawhub@latest install skill-9-0-1-0
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description claim to provide access to many platforms, and SKILL.md indeed contains commands and workflows to install and configure those platform clients (xreach, yt-dlp, mcporter, etc.). That is coherent. However, the registry metadata declares no required credentials or env vars, while the instructions explicitly require pasting cookies and, optionally, API keys and proxy credentials: a mismatch between declared requirements and actual runtime needs.
Instruction Scope
Runtime instructions tell the agent to install software, extract cookies from a local browser (`--from-browser chrome`) and/or ask the user to paste full cookie header strings (session tokens), and to configure proxies (including user:pass). Those actions access highly-sensitive local secrets (browser cookies) and network credentials which are outside what an installer-only metadata declaration suggested. The instructions also ask the agent to write config and tokens into ~/.agent-reach, enabling long-lived credential storage.
Install Mechanism
There is no platform install spec in the registry, but SKILL.md instructs `pip install https://github.com/.../archive/main.zip`. Installing from a repository's 'main' archive is riskier than a pinned release (the main branch can change), and agent-reach will auto-install many dependencies (Node.js, CLIs, yt-dlp, mcporter, etc.) from unspecified sources. This increases the attack surface and supply-chain risk.
Credentials
The skill declares no required env vars or primary credential, yet the prose repeatedly requires cookies, proxy credentials, and sometimes API keys (e.g., references to an API Key in the truncated section). Requesting raw browser cookies or full proxy credentials is powerful and sensitive; such secrets are proportionate to the goal only if the user knowingly provides them for those specific platform accounts (preferably throwaway/test accounts). The mismatch between declared and actual credential needs is concerning.
Persistence & Privilege
always:false (no forced inclusion). The instructions explicitly persist configuration and tokens to ~/.agent-reach and advise using that directory for tools and tokens. Persisting credentials locally is expected for this functionality, but it amplifies risk when combined with installing unpinned code and the agent's ability to act autonomously; consider whether you want long-lived credentials stored there.
What to consider before installing
This skill appears to do what it says (install and wire up many platform CLIs), but it asks you to: (1) pip-install a GitHub 'main' zip (un-pinned release), (2) paste or let the tool extract browser cookies (session tokens), and (3) supply and store proxy credentials and possibly other API keys in ~/.agent-reach. Before installing:

  • review the agent-reach repository source and prefer a pinned release tag rather than archive/main.zip;
  • avoid pasting cookies from your primary accounts; use dedicated throwaway/test accounts if possible;
  • if you must provide credentials, run the installation in an isolated environment (VM or container) and inspect what files are written to ~/.agent-reach;
  • consider performing login steps manually rather than giving full cookie strings to the agent;
  • if you want lower risk, do not install this on a machine containing sensitive accounts or corporate data.

The mismatch between metadata and instructions (no credentials declared vs. instructions requesting many secrets) is another reason to be cautious. If you want, I can list specific lines in SKILL.md that request sensitive inputs and suggest safer alternatives (pinned install command, limited-scope tokens, containerized install).

Like a lobster shell, security has layers — review code before you run it.

latest: vk970a65pyt81kcpjpyrejptv9h82ha1j
337 downloads
0 stars
1 version
Updated 4h ago
v1.0.0
MIT-0

Agent Reach

Install and configure upstream tools for 13+ platforms. After setup, call them directly — no wrapper layer.

⚠️ Workspace Rules

Never create files, clone repos, or write output in the agent workspace. Use these directories instead:

  Purpose                                  Directory
  Temporary output (subtitles, downloads)  /tmp/
  Upstream tool repos                      ~/.agent-reach/tools/
  Config & tokens                          ~/.agent-reach/

Violating this will pollute the user's workspace and degrade their agent experience over time.

Setup

pip install https://github.com/Panniantong/agent-reach/archive/main.zip
agent-reach install --env=auto
agent-reach doctor

install auto-detects your environment and installs core dependencies (Node.js, mcporter, xreach CLI, gh CLI, yt-dlp, feedparser). Run doctor to see what's active.

Management

agent-reach doctor        # channel status overview
agent-reach watch         # quick health + update check
agent-reach check-update  # check for new versions

Configure channels

agent-reach configure twitter-cookies "auth_token=xxx; ct0=yyy"
agent-reach configure proxy http://user:pass@ip:port
agent-reach configure --from-browser chrome    # auto-extract cookies from local browser

Configuring a channel ("help me set up XXX")

When a user asks to configure/enable any channel:

  1. Run agent-reach doctor
  2. Find the channel — it shows status (✅/⚠️/⬜) and what to do next
  3. Execute what you can automatically (install packages, start services)
  4. For human-required steps (paste cookies), tell the user what to do
  5. Run agent-reach doctor again to verify

Do NOT memorize per-channel steps. Always rely on doctor output.

Cookie import (common to all platforms that require login)

⚠️ Important: logging in with cookies carries a risk of account bans; remind the user to use a dedicated throwaway account.

For all platforms that need cookies (Twitter, XiaoHongShu, etc.), prefer importing via Cookie-Editor:

  1. The user logs in to the platform in their own browser
  2. Install the Cookie-Editor Chrome extension
  3. Click the extension → Export → Header String
  4. Send the exported string to the agent

Users on a local machine can instead run agent-reach configure --from-browser chrome to extract cookies automatically in one step.

QR-code login is a fallback (only when a local browser is available); Cookie-Editor is simpler and more reliable.
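Before handing an exported Header String to `agent-reach configure`, a quick sanity check helps catch truncated exports. A minimal Python sketch (hypothetical helpers, not part of agent-reach; the required `auth_token`/`ct0` names come from the Twitter configure example above, and other platforms need different cookies):

```python
def parse_cookie_header(header: str) -> dict:
    """Parse a Cookie-Editor 'Header String' export (k=v pairs joined by '; ')."""
    pairs = {}
    for part in header.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

def has_twitter_session(cookies: dict) -> bool:
    # Twitter/X needs both auth_token and ct0 (per the configure example above).
    return bool(cookies.get("auth_token")) and bool(cookies.get("ct0"))
```

If `has_twitter_session` is false, ask the user to re-export rather than configuring a half-broken channel.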

Other human actions

  • Proxy: Reddit/Bilibili/XiaoHongShu may block server IPs — suggest a residential proxy if on a server

Using Upstream Tools Directly

After agent-reach install, call the upstream tools directly.

Note: agent-reach is an installer and config tool — it does NOT have read, search, or content-fetching commands. Use the upstream tools below instead.

Twitter/X (xreach CLI)

# Search tweets
xreach search "query" --json -n 10

# Read a specific tweet
xreach tweet https://x.com/user/status/123 --json

# Read a user's timeline
xreach tweets @username --json -n 20

YouTube (yt-dlp)

⚠️ yt-dlp needs a JS runtime to download from YouTube; agent-reach install automatically configures Node.js as that runtime. If you hit "Sign in to confirm you're not a bot", YouTube's anti-scraping has flagged your IP: switch proxies or add cookies.

# Get video metadata
yt-dlp --dump-json "https://www.youtube.com/watch?v=xxx"

# Download subtitles only
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
# Then read the .vtt file

# Search (yt-dlp ytsearch)
yt-dlp --dump-json "ytsearch5:query"

# If "no JS runtime" warning: ensure Node.js is installed, then run:
#   mkdir -p ~/.config/yt-dlp && echo "--js-runtimes node" >> ~/.config/yt-dlp/config
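After the subtitle download, the agent still has to turn the `.vtt` file into readable text. A minimal Python sketch (a hypothetical helper, not part of yt-dlp or agent-reach) that strips the WEBVTT header, cue timings, inline tags, and the consecutive duplicate lines typical of auto-generated subtitles:

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip WEBVTT header, cue numbers/timings, and inline tags; keep plain lines."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line.startswith("WEBVTT") or "-->" in line:
            continue  # header, blank line, or cue timing
        if line.isdigit():
            continue  # bare cue number
        line = re.sub(r"<[^>]+>", "", line)  # inline tags like <c> or <00:00:01.000>
        if line and (not lines or lines[-1] != line):  # drop consecutive duplicates
            lines.append(line)
    return "\n".join(lines)
```

Read the `/tmp/<id>.*.vtt` file yt-dlp wrote and pass its contents through this before summarizing.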

Bilibili (yt-dlp)

⚠️ Server IPs may be blocked by Bilibili (412 error). Go through a proxy, or add --cookies-from-browser chrome

# Get video metadata
yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"

# Download subtitles
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

# If blocked (412 / login required):
yt-dlp --cookies-from-browser chrome --dump-json "URL"

Reddit (JSON API)

# Read a subreddit
curl -s "https://www.reddit.com/r/python/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"

# Read a post with comments
curl -s "https://www.reddit.com/r/python/comments/POST_ID.json" -H "User-Agent: agent-reach/1.0"

# Search
curl -s "https://www.reddit.com/search.json?q=query&limit=10" -H "User-Agent: agent-reach/1.0"

Note: On servers, Reddit may block your IP. Use proxy or search via Exa instead.
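These endpoints return Reddit's standard listing JSON, so plain parsing is enough once the response is fetched. A minimal sketch of pulling post titles out of a listing (field names follow Reddit's listing shape; `listing_titles` is a hypothetical helper):

```python
def listing_titles(listing: dict) -> list:
    """Extract post titles from a Reddit listing JSON object (hot.json, search.json)."""
    children = listing.get("data", {}).get("children", [])
    # Skip non-post entries (e.g. "more" stubs) that carry no title.
    return [c["data"]["title"] for c in children if "title" in c.get("data", {})]
```

Feed it the parsed body of the curl calls above, e.g. `listing_titles(json.loads(body))`.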

小红书 / XiaoHongShu (mcporter + xiaohongshu-mcp)

⚠️ Login required. Import cookies with Cookie-Editor or log in by QR code.

# Search notes
mcporter call 'xiaohongshu.search_feeds(keyword: "query")'

# Get note details (including comments)
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'

# Fetch all comments
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'

# Publish an image-and-text note
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/to/img.jpg"], tags: ["美食"])'

# Publish a video note
mcporter call 'xiaohongshu.publish_with_video(title: "标题", content: "正文", video: "/path/to/video.mp4", tags: ["vlog"])'

Other features (likes, favorites, comments, user profiles, etc.): npx mcporter list xiaohongshu

抖音 / Douyin (mcporter + douyin-mcp-server)

# Parse Douyin video info (share link → title, author, watermark-free video URL, etc.)
mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'

# Get a watermark-free video download link
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

# Extract the video's speech transcript with AI (requires a SiliconFlow / 硅基流动 API Key)
mcporter call 'douyin.extract_douyin_text(share_link: "https://v.douyin.com/xxx/")'

No login is needed to parse videos. Both Douyin share links and direct links are supported.

GitHub (gh CLI)

# Search repos
gh search repos "query" --sort stars --limit 10

# View a repo
gh repo view owner/repo

# Search code
gh search code "query" --language python

# List issues
gh issue list -R owner/repo --state open

# View a specific issue/PR
gh issue view 123 -R owner/repo

Web — Any URL (Jina Reader)

# Read any webpage as markdown
curl -s "https://r.jina.ai/URL" -H "Accept: text/markdown"

# Search the web
curl -s "https://s.jina.ai/query" -H "Accept: text/markdown"
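Jina Reader works by simple URL prefixing, so a thin helper keeps agent code from mangling targets. A minimal sketch (hypothetical helpers; percent-encoding the search query is an assumption on top of the raw curl usage above):

```python
from urllib.parse import quote

def jina_reader_url(url: str) -> str:
    """Prefix any page URL with the r.jina.ai reader endpoint."""
    return "https://r.jina.ai/" + url

def jina_search_url(query: str) -> str:
    """Percent-encode a query for the s.jina.ai search endpoint."""
    return "https://s.jina.ai/" + quote(query)
```

Request the resulting URL with `Accept: text/markdown`, as in the curl examples above.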

Exa Search (mcporter + exa MCP)

# Web search
mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'

# Code search (GitHub, StackOverflow, docs)
mcporter call 'exa.get_code_context_exa(query: "how to parse JSON in Python", tokensNum: 3000)'

# Company research
mcporter call 'exa.company_research_exa(companyName: "OpenAI")'

LinkedIn (mcporter + linkedin-scraper-mcp)

# View a profile
mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'

# Search people
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

# View company
mcporter call 'linkedin.get_company_profile(linkedin_url: "https://linkedin.com/company/xxx")'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

Boss直聘 (mcporter + mcp-bosszp)

# Browse recommended jobs
mcporter call 'bosszhipin.get_recommend_jobs_tool(page: 1)'

# Search jobs
mcporter call 'bosszhipin.search_jobs_tool(keyword: "Python", city: "北京", page: 1)'

# View job details
mcporter call 'bosszhipin.get_job_detail_tool(job_url: "https://www.zhipin.com/job_detail/xxx")'

Fallback: curl -s "https://r.jina.ai/https://www.zhipin.com/job_detail/xxx"

WeChat Official Accounts / 微信公众号 (wechat-article-for-ai + miku_ai)

Search (miku_ai — Sogou WeChat search):

# Search WeChat articles by keyword
python3 -c "
import asyncio
from miku_ai import get_wexin_article

async def search():
    articles = await get_wexin_article('AI Agent', 5)
    for a in articles:
        print(f'{a[\"title\"]} | {a[\"source\"]} | {a[\"date\"]}')
        print(f'  {a[\"url\"]}')

asyncio.run(search())
"

Read (Camoufox — stealth Firefox, bypasses WeChat anti-bot):

# Read a WeChat article (returns Markdown with images)
cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

# Run as MCP server (for AI agent integration)
python3 mcp_server.py

Typical agent workflow: search → get URLs → immediately read full content.

Note: WeChat articles require a real browser to render. Jina Reader and curl cannot read them.

RSS (feedparser)

python3 -c "
import feedparser
d = feedparser.parse('https://example.com/feed')
for e in d.entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

Twitter "fetch failed"

xreach CLI uses Node.js undici, which doesn't respect HTTP_PROXY. Solutions:

  1. Ensure undici is installed: npm install -g undici
  2. Configure proxy: agent-reach configure proxy http://user:pass@ip:port
  3. If still failing, use transparent proxy (Clash TUN, Proxifier)

Channel broken?

Run agent-reach doctor — it shows what's wrong and how to fix it.
