Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

月老 Matchmaker

v1.0.0

AI Matchmaker powered by real social media data. Two people scan their accounts — AI cross-analyzes interests, values, lifestyle, aesthetics, and social habits.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Pending · View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (matchmaking from social media) align with the provided code and SKILL.md: the skill includes per‑platform collectors (Weibo, Douban, Bilibili, Xiaohongshu, Douyin) and instructions to extract profiles, posts, likes, collections, timestamps, etc. Accessing logged‑in browser state via a browser automation connector (ManoBrowser) is a reasonable technical requirement for this purpose.
Instruction Scope
The SKILL.md and submodule files instruct the agent to execute large, exact JS payloads inside the user's browser: fetch(..., credentials:'include'), DOM scraping, XHR interception, opening new tabs and reading page JS globals (e.g., window.__INITIAL_STATE__, window.$CONFIG.user). These actions read cookies, session data and private content and can collect sensitive personal information. The README states 'data stored locally', but the workflow requires configuring an MCP endpoint/API key (used to communicate with ManoBrowser) — if that endpoint is remote/untrusted it could forward collected data off‑device. The instruction to 'must copy and execute the JS scripts exactly' gives the agent broad capability to run arbitrary in‑browser code.
Install Mechanism
No formal install spec (instruction-only), which is the lowest installer risk. The skill will automatically clone ManoBrowser from GitHub if it is not found (git clone https://github.com/ClawCap/ManoBrowser.git). Cloning a public GitHub repo is typical and expected here. There are no downloads from obscure hosts in the skill itself, but the scripts reference an example MCP endpoint (https://datasaver.deepminingai.com/...), an external host shown as an example for MCP configuration, which merits caution if actually used.
Credentials
The skill requests no env vars itself, but it depends on the ManoBrowser MCP endpoint and API key being configured (the check script and SKILL.md rely on them). That connector grants the agent the ability to execute JS in your logged‑in browser context and to fetch authenticated pages (fetch with credentials:'include'). For the intended functionality this is proportionate — but only if the MCP endpoint is truly local/trusted. If the endpoint is set to a remote third‑party, collected personal data (including private posts, likes, followers) could be routed off‑device. The README's 'data local' assurance depends on the user's MCP configuration.
Persistence & Privilege
always:false and user‑invocable:true. The skill does not request permanent inclusion or attempt to modify other skills. It will create local data files (matchmaker-data/) per its described workflow; storing and deleting those is under user control.
Scan Findings in Context
[fetch_credentials_include] expected: Multiple platform collectors use fetch(..., {credentials:'include'}) to retrieve pages/APIs while preserving the user's session cookies. This is required to gather private/logged‑in data but also means the scripts can read content only available to the logged‑in user.
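For readers unfamiliar with the pattern, a minimal sketch of what such a collector call looks like follows. The URL, field names, and response shape here are illustrative assumptions, not the skill's actual collector code.

```javascript
// Illustrative sketch of a credentialed fetch (URL and response shape are
// made up for this example; this is not the skill's actual code).
async function fetchLoggedInPage(url, fetchFn = fetch) {
  // credentials: 'include' attaches the user's session cookies, so the
  // request returns whatever the logged-in user is allowed to see.
  const res = await fetchFn(url, { credentials: 'include' });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}
```

Because the cookies ride along automatically, any page or API the logged-in user can reach, this call can reach too — which is exactly why the finding is flagged.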
[xhr_intercept_override] expected: XMLHttpRequest.prototype.open/send are overridden to intercept API responses (Xiaohongshu) so the skill can collect data that virtual scrolling would otherwise discard from the DOM. This is a scraping technique needed for full data capture, but it is powerful and can capture any intercepted response.
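The interception pattern can be sketched as follows. This is an illustrative reconstruction of the general technique, not the skill's actual code; the function and variable names are invented for the example.

```javascript
// Sketch: wrap an XHR-like constructor so every response body that passes
// through it is captured into a sink array (illustrative, not the skill's code).
function installXhrInterceptor(XHRClass, sink) {
  const origOpen = XHRClass.prototype.open;
  const origSend = XHRClass.prototype.send;
  XHRClass.prototype.open = function (method, url, ...rest) {
    this._interceptedUrl = url; // remember which API endpoint was called
    return origOpen.call(this, method, url, ...rest);
  };
  XHRClass.prototype.send = function (...args) {
    this.addEventListener('load', () => {
      // Every response is now visible to the script, including data for
      // items that virtual scrolling has already removed from the DOM.
      sink.push({ url: this._interceptedUrl, body: this.responseText });
    });
    return origSend.apply(this, args);
  };
}
// In the browser the skill would call something like:
// installXhrInterceptor(XMLHttpRequest, collectedResponses);
```

Note the asymmetry: the page's own code is unaware of the wrapper, so nothing the site does can distinguish an intercepted request from a normal one.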
[read_window_initial_state] expected: Scripts read page JS globals like window.__INITIAL_STATE__ and window.$CONFIG.user (Weibo/XHS), which provide structured user data. This is effective for scraping but implies access to in‑page JavaScript objects and potentially sensitive metadata.
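Reading these globals is trivial once a script runs in the page, which is what makes the capability notable. A sketch, with the global names taken from the finding and the returned shape purely illustrative:

```javascript
// Sketch of reading structured data a page exposes via JS globals.
// The global names (window.__INITIAL_STATE__, window.$CONFIG.user) come
// from the finding; the data shapes are assumptions for illustration.
function readPageGlobals(win) {
  const weiboUser = win.$CONFIG ? win.$CONFIG.user : null; // Weibo profile object
  const initialState = win.__INITIAL_STATE__ || null;      // XHS and similar SPAs
  return { weiboUser, initialState };
}
```

These objects often contain more metadata than the rendered page shows (internal IDs, timestamps, counters), which is why access to them is flagged as potentially sensitive.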
[git_clone_github] expected: The skill will auto clone ManoBrowser from GitHub if not present. Pulling dependencies from a public GitHub repo is expected, but users should verify the repo before running.
[curl_mcp_endpoint] expected: The included check_manobrowser.sh uses curl to POST to an MCP endpoint for connectivity checking. This is expected for validating ManoBrowser configuration, but the example endpoint points to a third‑party host — ensure the configured endpoint is trusted (ideally local).
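If you want to verify your own configuration, the connectivity check is roughly equivalent to the following Node.js sketch. The header names and JSON-RPC ping payload are assumptions; verify them against the actual check_manobrowser.sh before relying on this.

```javascript
// Rough Node.js equivalent of the curl-based connectivity check
// (header names and payload are assumptions, not taken from the script).
async function checkMcpEndpoint(endpoint, apiKey, fetchFn = fetch) {
  const res = await fetchFn(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    // Minimal JSON-RPC-style ping; the real payload may differ.
    body: JSON.stringify({ jsonrpc: '2.0', method: 'ping', id: 1 }),
  });
  return res.ok; // endpoint reachable and accepting the key
}
```

Pointing this at the endpoint you actually configured is a quick way to confirm it resolves where you expect; an endpoint on 127.0.0.1 keeps collected data on-device, while a remote hostname means the connector can see everything the collectors gather.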
What to consider before installing
This skill works by running JavaScript inside your logged-in Chrome session (via a ManoBrowser connector) to scrape private profile data (posts, likes, collections, followers, timestamps). That is necessary to produce the 'compatibility report', but it is powerful: the scripts read cookies and session data, intercept XHR responses, and open tabs to extract in-page JS objects. Before installing or running it:

1. Verify the ManoBrowser/MCP endpoint is local or a service you trust; if you point the MCP at a third-party server, it could receive all collected data.
2. Inspect the ManoBrowser repo the skill will clone (https://github.com/ClawCap/ManoBrowser) yourself; don't run the git clone blindly.
3. Only run the collectors when both people explicitly consent and use their own logged-in sessions; collecting someone else's private account without consent is a privacy and legal risk.
4. Consider a trial run on non-sensitive or public accounts first, and inspect the generated matchmaker-data/ directory; delete it after use if you want to remove local traces.
5. If you are uncomfortable granting in-browser access, do not install the skill. For higher assurance, run the scraping steps manually under your control rather than letting the agent execute them autonomously.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97dtw1jvbv8z6q68pvdwn86cn84jy16
