Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Reach.Skip

v1.0.0

Use the internet: search, read, and interact with 13+ platforms including Twitter/X, Reddit, YouTube, GitHub, Bilibili, XiaoHongShu (小红书), Douyin (抖音), WeCha...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lulu-owo/agent-reach-skip.

Prompt Preview: Install & Setup
Install the skill "Agent Reach.Skip" (lulu-owo/agent-reach-skip) from ClawHub.
Skill page: https://clawhub.ai/lulu-owo/agent-reach-skip
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agent-reach-skip

ClawHub CLI

Package manager switcher

npx clawhub@latest install agent-reach-skip
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
⚠️ Purpose & Capability
The skill advertises multi-platform web access and does contain channel checks and guides for many upstream tools (yt-dlp, xreach, mcporter, gh, Playwright, Docker, etc.). However, the registry metadata lists no required env vars or binaries, even though the code and SKILL.md clearly expect the user to install and provide multiple external tools, cookies, API keys, and proxies. The absence of declared requirements is an inconsistency: the skill legitimately needs those external tools and credentials to function fully, so they should be declared.
⚠️ Instruction Scope
SKILL.md and cli.py instruct automatic actions that go beyond simple 'search and read': the installer attempts to auto-import browser cookies on local installs (agent_reach.cookie_extract.configure_from_browser), and the docs instruct users to hand over cookies, API keys, and proxies. cookie_extract reads local browser cookie stores (via browser_cookie3) and writes the Twitter/X auth_token/ct0, the XiaoHongShu cookie string, and the Bilibili SESSDATA into persistent config (~/.agent-reach/config.yaml). This is sensitive data collection with persistent storage; users may not expect the installer to extract browser cookies automatically unless they explicitly opt in. The instructions also reference executing docker/pip/npm commands and configuring MCP endpoints, which is reasonable for the advertised capability but broadens the agent's runtime actions.
Install Mechanism
No formal install spec is declared in the registry (instruction-only), but the package includes a CLI that will run system-level install commands (pip, npm, docker) if the user runs 'agent-reach install'. It also contains code that can copy files into agent skill directories (~/.openclaw/skills, ~/.claude/skills, ~/.agents/skills). The code links to GitHub repos and remote MCP endpoints (e.g. mcp.exa.ai) and references pulling docker images. These are expected for the functionality, but the lack of an explicit install manifest in registry metadata reduces transparency about what will be fetched or executed.
⚠️ Credentials
The registry shows 'required env vars: none', yet the code expects and manages many sensitive items: twitter_auth_token/ct0, xhs_cookie, bilibili_sessdata/bili_jct, groq_api_key, github_token, reddit_proxy, exa_api_key, etc. Config.FEATURE_REQUIREMENTS and cookie_extract explicitly read and write these values into ~/.agent-reach/config.yaml, and the CLI auto-imports cookies from the local browser unless safe mode or dry-run is enabled. Requiring full browser cookie access (via browser_cookie3) is high-privilege and disproportionate for an automatic install of an untrusted package without a clear, explicit opt-in.
Persistence & Privilege
The skill persists configuration and credentials to ~/.agent-reach/config.yaml (Config.save sets file permissions to 0o600 where possible). The CLI also attempts to install the skill into agent skill directories (copying files into ~/.openclaw/skills etc.) and can run installers for upstream tooling. It does not set always:true and does not request system-wide configuration edits beyond its own directories. The main concern is the default installer behavior that attempts to auto-import cookies on local installs without an explicit one-time consent prompt in the presented SKILL.md (though safe-mode/dry-run options exist).
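The finding above describes Config.save writing ~/.agent-reach/config.yaml with 0o600 permissions where possible. A minimal sketch of that pattern (field names and path taken from the scan report; this is an illustration, not the skill's actual code):

```python
# Persist key/value config with owner-only file permissions, as the scan
# report describes Config.save doing. Hypothetical, simplified version.
import os
from pathlib import Path

def save_config(values: dict, path: Path) -> None:
    """Write a flat YAML-style config and restrict it to the owner."""
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"{k}: {v}" for k, v in sorted(values.items())]
    path.write_text("\n".join(lines) + "\n")
    os.chmod(path, 0o600)  # owner read/write only; no group/other access

# Example target path from the report:
cfg_path = Path.home() / ".agent-reach" / "config.yaml"
# save_config({"twitter_auth_token": "..."}, cfg_path)
```

The 0o600 chmod limits exposure on multi-user machines, but the credentials still sit on disk in plaintext, which is why the auto-import behavior matters.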

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c35gj9ns4vhfpcnv89tbpp984a7ct
74 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0
MIT-0

Agent Reach — Usage Guide

Upstream tools for 13+ platforms. Call them directly.

Run agent-reach doctor to check which channels are available.
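A doctor-style availability check can be sketched as a PATH probe. The tool names below are the ones this guide calls out; the real `agent-reach doctor` presumably checks more than this (cookies, proxies, MCP endpoints):

```python
# Hypothetical sketch of a channel availability check: probe PATH for the
# upstream binaries this guide relies on.
import shutil

GUIDE_TOOLS = ["yt-dlp", "gh", "xreach", "mcporter", "curl"]

def check_channels(tools=GUIDE_TOOLS):
    """Return {tool: bool} indicating which binaries are on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

if __name__ == "__main__":
    for tool, ok in check_channels().items():
        print(f"{tool:10s} {'OK' if ok else 'MISSING'}")
```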

⚠️ Workspace Rules

Never create files in the agent workspace. Use /tmp/ for temporary output and ~/.agent-reach/ for persistent data.

Web — Any URL

curl -s "https://r.jina.ai/URL"
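The same Jina Reader call from Python, assuming (as the curl line implies) that r.jina.ai takes the raw target URL appended directly after the host:

```python
# Build the Jina Reader URL used by the curl example above.
def jina_reader_url(target: str) -> str:
    """Prefix a target URL with the r.jina.ai reader endpoint."""
    return f"https://r.jina.ai/{target}"

# Stdlib equivalent of the curl call (network required):
# from urllib.request import urlopen
# text = urlopen(jina_reader_url("https://example.com")).read().decode()
```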

Web Search (Exa)

mcporter call 'exa.web_search_exa(query: "query", numResults: 5)'
mcporter call 'exa.get_code_context_exa(query: "code question", tokensNum: 3000)'

Twitter/X (xreach)

xreach search "query" -n 10 --json          # search
xreach tweet URL_OR_ID --json                # read tweet (supports /status/ and /article/ URLs)
xreach tweets @username -n 20 --json         # user timeline
xreach thread URL_OR_ID --json               # full thread

YouTube (yt-dlp)

yt-dlp --dump-json "URL"                     # video metadata
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
                                             # download subtitles, then read the .vtt file
yt-dlp --dump-json "ytsearch5:query"         # search
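The subtitle command above leaves a .vtt file in /tmp that still contains WEBVTT headers, cue timestamps, and styling tags. A minimal sketch for reducing it to plain text (handles the common cue layout only, not every WebVTT feature):

```python
# Strip WEBVTT headers, cue numbers, timestamp lines, and inline tags,
# keeping only the caption text. Simplified; not a full WebVTT parser.
import re

def vtt_to_text(vtt: str) -> str:
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line.startswith("WEBVTT")
                or line.startswith(("Kind:", "Language:", "NOTE"))
                or "-->" in line or line.isdigit()):
            continue  # skip header, metadata, cue-number, and timing lines
        lines.append(re.sub(r"<[^>]+>", "", line))  # drop <c>-style tags
    return "\n".join(lines)

# text = vtt_to_text(open("/tmp/VIDEO_ID.en.vtt").read())
```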

Bilibili (yt-dlp)

yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"

Server IPs may get HTTP 412. Use --cookies-from-browser chrome or configure a proxy.

Reddit

curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"

Server IPs may get HTTP 403. Search via Exa instead, or configure a proxy.
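The endpoints above return Reddit's listing JSON. A sketch of pulling titles and permalinks out of the `data.children` structure (field names as commonly returned by these endpoints; verify against a live response):

```python
# Extract (title, url) pairs from a Reddit listing response such as
# /r/SUBREDDIT/hot.json or /search.json.
def extract_posts(listing: dict, limit: int = 10) -> list:
    posts = []
    for child in listing.get("data", {}).get("children", [])[:limit]:
        d = child.get("data", {})
        posts.append((d.get("title", ""),
                      "https://www.reddit.com" + d.get("permalink", "")))
    return posts

# Usage (network required), matching the curl call's User-Agent:
# import json, urllib.request
# req = urllib.request.Request(
#     "https://www.reddit.com/r/python/hot.json?limit=10",
#     headers={"User-Agent": "agent-reach/1.0"})
# print(extract_posts(json.load(urllib.request.urlopen(req))))
```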

GitHub (gh CLI)

gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo

小红书 / XiaoHongShu (mcporter)

mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'

Requires login. Use Cookie-Editor to import cookies.

抖音 / Douyin (mcporter)

mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'

No login needed.

微信公众号 / WeChat Articles

Search (miku_ai):

python3 -c "
import asyncio
from miku_ai import get_wexin_article
async def s():
    for a in await get_wexin_article('query', 5):
        print(f'{a[\"title\"]} | {a[\"url\"]}')
asyncio.run(s())
"

Read (Camoufox — bypasses WeChat anti-bot):

cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"

WeChat articles cannot be read with Jina Reader or curl. Must use Camoufox.

LinkedIn (mcporter)

mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'

Fallback: curl -s "https://r.jina.ai/https://linkedin.com/in/username"

Boss直聘 (mcporter)

mcporter call 'bosszhipin.get_recommend_jobs_tool(page: 1)'
mcporter call 'bosszhipin.search_jobs_tool(keyword: "Python", city: "北京")'

Fallback: curl -s "https://r.jina.ai/https://www.zhipin.com/job_detail/xxx"

RSS

python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(f'{e.title} — {e.link}')
"

Troubleshooting

  • Channel not working? Run agent-reach doctor — shows status and fix instructions.
  • Twitter fetch failed? Ensure undici is installed: npm install -g undici. Configure proxy: agent-reach configure proxy URL.

Setting Up a Channel ("帮我配 XXX", i.e. "help me set up XXX")

If a channel needs setup (cookies, Docker, etc.), fetch the install guide: https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md

The user only provides cookies. Everything else is your job.
