Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Dating 约会助手 (Dating Assistant)

v1.1.0

Pre-date research assistant — scan someone's public social media profiles to understand their interests, personality, lifestyle and values before meeting the...

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The declared purpose is 'public social media scanning only', and the top-level SKILL.md repeatedly states that the skill will not log in or collect private content. However, the bundle contains multiple 'deep-profile-collect' submodules (xiaohongshu, douyin, bilibili, douban, weibo) that are explicitly written to operate on the currently logged-in user: they use fetch({credentials: 'include'}), inject XHR interceptors, and collect likes/collections/follow lists. Those capabilities are disproportionate to a pure 'public-only' pre-date research assistant and create a meaningful mismatch between the stated purpose and the actual capabilities.
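The distinction the scan draws can be illustrated with a small sketch (illustrative only; the function and variable names are assumptions, not the skill's actual code). A public-only scraper would build requests that omit credentials, whereas passing credentials: 'include' to fetch() attaches the browser's session cookies:

```javascript
// Illustrative sketch, NOT the skill's code: the request options a
// public-only scraper would use versus the credentialed form the
// deep-profile-collect submodules reportedly pass to fetch().
function buildProfileRequest(url, { includeSession = false } = {}) {
  return {
    url,
    options: {
      method: "GET",
      // "include" sends session cookies with the request, so the
      // response reflects the logged-in account; "omit" never does.
      credentials: includeSession ? "include" : "omit",
    },
  };
}

const publicReq = buildProfileRequest("https://example.com/u/123");
const loggedInReq = buildProfileRequest("https://example.com/u/123", {
  includeSession: true,
});
```

Seeing 'include' in a scraper that claims to read only public pages is exactly the mismatch the scanner is pointing at.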
Instruction Scope
The runtime instructions direct the agent to: search for another skill's SKILL.md, automatically git-clone the ManoBrowser repo if it is absent, check for and require a ManoBrowser MCP endpoint/API key (via TOOLS.md), execute chrome_navigate and chrome_execute_script workflows that run JavaScript in the user's browser context (including XHR interception and DOM scraping), and write full raw platform JSON into clawcap-data/{name}/{platform}.json. The SKILL.md tries to limit which submodule steps run, but the included scripts can access login-only data when run in a logged-in browser, and the instructions also insist on persisting 'full raw data', which may include sensitive information.
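The XHR-interception pattern described above looks roughly like the following generic sketch (an assumed shape over a stub class, not the skill's actual injected script): wrapping XMLHttpRequest.prototype.open lets injected JavaScript observe every request the logged-in page makes.

```javascript
// Generic XHR-interception sketch (assumed shape, not the skill's code).
const captured = [];

function installInterceptor(XHRClass) {
  const originalOpen = XHRClass.prototype.open;
  XHRClass.prototype.open = function (method, url, ...rest) {
    captured.push({ method, url }); // record the call before passing it on
    return originalOpen.apply(this, [method, url, ...rest]);
  };
}

// Stub standing in for the browser's real XMLHttpRequest in this sketch.
class StubXHR {
  open(method, url) {
    this.method = method;
    this.url = url;
  }
}

installInterceptor(StubXHR);
new StubXHR().open("GET", "/api/user/likes");
```

Because the wrapper still delegates to the original open(), the page works normally while the interceptor silently accumulates every endpoint the account touches.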
Install Mechanism
The registry contains no formal install spec (instruction-only), but the SKILL.md will automatically git-clone ManoBrowser from GitHub at runtime if it is not found. Cloning from GitHub is a common practice, but it is a remote code fetch executed by the agent at runtime, not a vetted package install. The only included script (check_manobrowser.sh) also issues curl requests to the MCP endpoint. The skill does not extract arbitrary binary archives, and the GitHub URL points to a standard host, but the implicit runtime clone increases the attack surface.
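The decision the SKILL.md reportedly encodes can be sketched as follows (the repo URL and return shape are illustrative assumptions):

```javascript
// Sketch of the runtime-clone behavior described above. A `git clone`
// performed by the agent at runtime is a remote code fetch, not a
// vetted package install, which is why the scan flags it.
function planManoBrowserSetup(repoPresent) {
  if (repoPresent) {
    return { action: "use-existing" };
  }
  return {
    action: "git-clone",
    // Illustrative placeholder; the actual URL is in the SKILL.md.
    source: "https://github.com/example/ManoBrowser",
  };
}
```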
Credentials
The skill declares no required environment variables or primary credential, yet it depends on ManoBrowser MCP configuration (endpoint and API key) and on the user's browser login sessions to function. Those credentials/configs are not declared in requires.env; the SKILL.md expects the agent to read other skill files and TOOLS.md for connection info. The skill will read and reuse browser cookies/session state (via chrome_execute_script and fetch with credentials) — access to logged-in sessions is disproportionate to a stated 'public-only' scraper.
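For contrast, an explicit declaration in the skill manifest might look like this (the field and environment-variable names are hypothetical; the scanned skill declares none of them):

```javascript
// Hypothetical manifest fragment. MANOBROWSER_MCP_ENDPOINT and
// MANOBROWSER_API_KEY are invented names for illustration; the point
// is that dependencies like these should be declared up front, not
// discovered via another skill's TOOLS.md at runtime.
const requires = {
  env: ["MANOBROWSER_MCP_ENDPOINT", "MANOBROWSER_API_KEY"],
  browserSessions: ["xiaohongshu", "douyin", "bilibili", "douban", "weibo"],
};
```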
Persistence & Privilege
The instructions require the skill to persist full raw scraped data into a shared directory (clawcap-data/{nickname}/{platform}.json) and to write reports into clawcap-data/reports/. Data is held locally and reused if younger than 7 days. While local storage is not the same as exfiltration, automatic persistent storage of raw scraped data (including any login-visible lists) increases risk and lifetime of potentially sensitive information. The skill also scans for other skills' SKILL.md files (reads other skills' artifacts) and will auto-clone dependencies — these behaviors expand its system footprint.
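The 7-day reuse rule amounts to a simple freshness check, sketched here (file layout taken from the scan findings; the function name is an assumption):

```javascript
// Sketch of the described cache policy: raw JSON written under
// clawcap-data/{nickname}/{platform}.json is reused if it is less
// than 7 days old, so scraped data lives on disk at least that long.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function shouldReuseCache(fileMtimeMs, nowMs = Date.now()) {
  return nowMs - fileMtimeMs < SEVEN_DAYS_MS;
}
```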
What to consider before installing
- Main mismatch: the skill says it only reads public content, but the bundle contains several platform submodules designed to run against a logged-in browser (they use fetch(..., {credentials: 'include'}), inject XHR interceptors, and scrape likes/collections/follow lists). If your browser is logged into these services, those submodules, if run, can capture non-public or account-linked data.
- Data persists locally: the skill insists on saving full raw JSON into clawcap-data and reusing it for 7 days. Scraped data (possibly including login-visible lists) will be written to disk and remain there until you remove it.
- ManoBrowser dependency: the skill will try to locate or git-clone ManoBrowser and expects a ManoBrowser MCP endpoint/API key to be configured (TOOLS.md). Only allow a repo clone and a browser-control MCP if you trust the repo and understand how the MCP endpoint and API key are provisioned.
- If you will accept this skill only on the condition that it truly limits itself to public content:
  - Inspect and remove or disable the deep-collect submodules that require logged-in access (or ensure the agent's runtime will never run their login-only steps).
  - Ensure ManoBrowser is not auto-installed, or that you control the exact ManoBrowser release source.
  - Restrict file permissions on clawcap-data and review/delete stored JSON reports. Consider running the skill in an isolated environment or VM.
  - Audit the ManoBrowser MCP configuration and API key storage (TOOLS.md) before granting access.
- If you want higher assurance, ask the developer to: provide a minimal build containing only public-profile scraping code; explicitly declare the required credentials/configs; remove any XHR-interception or likes/collections code; and add an explicit opt-in step before writing raw data to disk.
Confidence: high — the behavioral mismatch between "public-only" claims and the included logged-in scraping modules and persistence requirements is explicit in the provided files.

Like a lobster shell, security has layers — review code before you run it.

latest · vk979dzbxmb9jg8mfez1s6h6h2s84knvy

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
