Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Multi-Source Feed

v1.0.1

Set up and manage an AI-curated daily tech brief from customizable sources. Use when user says "set up multi-source-feed", "configure my daily brief", or "ms...

Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The overall capabilities (scraping X, RSS, HN, GitHub, and Product Hunt; running Tavily searches; generating LLM memos) align with the name and description. However, the registry metadata declares only TAVILY_API_KEY, while the SKILL.md and code also require a Product Hunt token and local browser session cookies for X scraping; those additional requirements are not reflected in the declared env/config.
Instruction Scope
Runtime instructions tell the agent to clone a GitHub repo, pip-install dependencies, run Playwright, open a Chrome remote-debugging port, capture the user's logged-in X session via CDP (saved to x_session.json), and add crontab entries. Capturing and storing a browser session and its cookies is sensitive (the file contains auth tokens), and the SKILL.md instructs the agent to perform these actions automatically; this access goes well beyond a simple 'fetch RSS' skill and should be explicitly consented to and audited.
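Playwright serializes a captured session in its storage_state JSON layout (a list of cookies plus per-origin localStorage entries). If you want to see what a file like x_session.json holds about you without printing secret values, a minimal audit sketch (the function name and output format are illustrative, not part of the skill):

```python
import json

def summarize_storage_state(path: str) -> list[str]:
    """Summarize a Playwright storage_state file (the x_session.json
    format) without exposing secrets: cookie names and domains only."""
    with open(path) as f:
        state = json.load(f)
    lines = []
    for c in state.get("cookies", []):
        # Report which cookie exists and for which domain, never its value.
        lines.append(f"cookie {c['name']} for {c['domain']}")
    for o in state.get("origins", []):
        keys = [item["name"] for item in o.get("localStorage", [])]
        lines.append(f"localStorage at {o['origin']}: {keys}")
    return lines
```

Seeing an `auth_token` cookie for `.x.com` in that summary is exactly why the report treats the file as equivalent to your login.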
Install Mechanism
No formal install spec in the registry (instruction-only), but SKILL.md instructs cloning the GitHub repo and running pip install and playwright install. These steps are standard but high-impact: they write files, install packages and browser drivers, and execute local Python scripts. The code files are included in the bundle, yet the instructions still clone from upstream, an odd duplication that merits caution (verify the GitHub repo/tag first).
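One way to reconcile the bundled code with the upstream clone before running anything is to hash-compare the Python files on both sides. A hedged sketch (directory paths are placeholders you supply):

```python
import hashlib
from pathlib import Path

def hash_tree(root: str, pattern: str = "*.py") -> dict[str, str]:
    """Map each matching file's relative path to its SHA-256 digest."""
    root_path = Path(root)
    return {
        str(p.relative_to(root_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root_path.rglob(pattern))
    }

def diff_trees(bundle_dir: str, clone_dir: str) -> list[str]:
    """List files that differ, or exist on only one side, between the
    bundled skill code and a fresh clone of the upstream repo."""
    a, b = hash_tree(bundle_dir), hash_tree(clone_dir)
    return sorted(path for path in set(a) | set(b) if a.get(path) != b.get(path))
```

An empty result means the bundle and the clone agree; any listed file deserves a manual read before you run it.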
Credentials
Registry declares a single required env var (TAVILY_API_KEY) and marks it as primary, but SKILL.md also requires PRODUCTHUNT_API_TOKEN and instructs writing both keys to a .env file. Messaging channel credentials (Telegram/Discord/Feishu) are not declared in the metadata but the system will send memos to the user's configured channel — it's ambiguous whether OpenClaw supplies those or the skill expects local tokens. The skill also requests access to local Chrome session cookies (x_session.json) which are highly sensitive; these data accesses are not represented in the metadata.
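If you do decide to provide both keys, writing the .env yourself with owner-only permissions limits casual exposure to other local users. A small sketch (the filename and variable names come from the scan; the helper itself is assumed, not part of the skill):

```python
import os
import stat

def write_env(path: str, secrets: dict[str, str]) -> None:
    """Write KEY=value lines, then restrict the file to the owner (0600)."""
    with open(path, "w") as f:
        for key, value in secrets.items():
            f.write(f"{key}={value}\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # rw for owner only

# Example (placeholder values):
# write_env(".env", {"TAVILY_API_KEY": "...", "PRODUCTHUNT_API_TOKEN": "..."})
```

The same owner-only treatment is worth applying to x_session.json if you let the skill create it.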
Persistence & Privilege
The skill is not set to always: true. It writes files (x_session.json, feed files, memo files), and instructs adding cron jobs (system crontab and OpenClaw cron). Persisting cookies and scheduling recurring jobs are within the intended scope but increase long-term exposure and blast radius if misused; this is expected for a daily-scraping pipeline but should be acknowledged by the user.
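Because the skill schedules recurring jobs, it helps to know how to find them again later. A sketch that filters the output of `crontab -l` for entries referencing the skill (the keyword list is a guess at how the entries might be named):

```python
def find_skill_cron_lines(
    crontab_text: str,
    keywords: tuple[str, ...] = ("multi-source-feed", "multi_source_feed"),
) -> list[str]:
    """Return non-comment crontab lines mentioning any keyword.
    Feed it the text produced by `crontab -l`."""
    hits = []
    for line in crontab_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#") and any(
            k in stripped for k in keywords
        ):
            hits.append(stripped)
    return hits
```

Removing the matched lines (and any OpenClaw-side cron entries) is how you fully retire the pipeline after uninstalling.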
What to consider before installing
- Missing/unclear declared secrets: the registry only lists TAVILY_API_KEY, but SKILL.md asks for PRODUCTHUNT_API_TOKEN as well. Confirm you want to provide both and add PRODUCTHUNT_API_TOKEN to the .env. Ask the author why the registry metadata omits it.
- X/Twitter session access: the skill requires you to open Chrome with --remote-debugging-port and then runs login_save_session.py to extract your browser storage_state (cookies/auth) into x_session.json. That file contains your logged-in session and can be used to act as you on X. Only proceed if you trust the code; consider inspecting login_save_session.py and the scraping scripts yourself. Do not upload or share x_session.json.
- Review the code and origin: the SKILL.md clones https://github.com/zidooong/multi-source-feed; verify that repository, its commit/tag, and its README match the bundle you received. The bundle contains code but still instructs cloning upstream; reconcile this before running. Prefer cloning a specific release/tag rather than HEAD.
- Run in an isolated environment: install into a disposable VM/container or a dedicated user account to limit blast radius (cron jobs and stored sessions will be local to that environment).
- Inspect what the agent will send: confirm how the skill will deliver memos (OpenClaw gateway vs local integrations) and whether any messaging channel tokens are required or will be requested. If channel tokens are needed, ensure they are provided consciously and least-privileged.
- If you want higher assurance: ask the publisher for a signed release, a small security note explaining cookie handling and retention policies, and a minimal install option that skips X scraping (so you can use only RSS/HN/GitHub/Product Hunt/Tavily).
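For the code-review step, a quick first pass is to list which hosts the bundled scripts contact. A rough sketch that greps the Python files for URL literals (the regex is deliberately simple, a starting point rather than a guarantee; dynamically built URLs will not show up):

```python
import re
from pathlib import Path

# Matches the scheme and host of an http(s) URL literal; stops at the path.
URL_RE = re.compile(r"https?://[\w.-]+")

def endpoints_in_tree(root: str) -> set[str]:
    """Collect every http(s) host literal found in *.py files under root."""
    hosts = set()
    for p in Path(root).rglob("*.py"):
        hosts.update(URL_RE.findall(p.read_text(errors="ignore")))
    return hosts
```

Anything outside the expected set (X, HN, GitHub, Product Hunt, Tavily, your messaging channel) is worth asking the author about before installation.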
Confidence notes: the assessment is 'suspicious' (medium confidence) because much of the behavior is coherent with the stated goal (aggregating many public sources), but the metadata/instruction mismatches (undeclared env vars, sensitive cookie capture, and the duplicated clone-vs-included-code path) create unexplained risks that should be resolved before trusting automatic installation.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e8ky39z8kx44jwm4a36rn2d82ybjh

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

📡 Clawdis
Bins: python3
Env: TAVILY_API_KEY
Primary env: TAVILY_API_KEY
