Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

finviz-crawler

v3.0.0

Continuous financial news crawler for finviz.com with SQLite storage, article extraction, and query tool. Use when monitoring financial markets, building new...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and SKILL.md implement a local crawler + SQLite query tool as advertised (crawler, article files, DB, service support). Required binary (python3) is appropriate. Minor inconsistencies exist in defaults (SKILL.md mentions ~/workspace/finviz while scripts use ~/Downloads/Finviz), but overall the requested capabilities (pip packages, Playwright browsers) align with the crawler purpose.
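Because SKILL.md and the scripts disagree on default paths, it is safest to pass explicit locations. A minimal sketch of resolving the --db and --articles-dir options mentioned in the report (the flag handling here is an assumption, not the project's actual code):

```python
import argparse
from pathlib import Path

# Hypothetical sketch: resolve explicit paths so data never silently lands in
# a default the docs and scripts disagree about (~/workspace/finviz vs
# ~/Downloads/Finviz). Flag names match those mentioned in the report.
parser = argparse.ArgumentParser()
parser.add_argument("--db", default="~/Downloads/Finviz/finviz.db")
parser.add_argument("--articles-dir", default="~/Downloads/Finviz/articles")
args = parser.parse_args(["--db", "/tmp/finviz/finviz.db"])

db_path = Path(args.db).expanduser().resolve()       # explicit override wins
articles_dir = Path(args.articles_dir).expanduser()  # default under $HOME
```

Passing both flags explicitly on every invocation removes any dependence on which default the installed version actually uses.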
Instruction Scope
Runtime instructions (and the included install.py) direct the agent or user to run scripts that install packages, download Playwright browsers, create user-level systemd/launchd service files, and read/write files under the user's home directory. The code also reads environment variables (FINVIZ_EXPIRY_DAYS, FINVIZ_TZ, FINVIZ_TICKERS) that are not declared in the skill metadata. The crawler and query tools delete local files and DB rows (remove-ticker); this is expected for management but worth highlighting as a destructive operation.
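The undeclared environment variables are worth understanding before running the crawler. A sketch of the kind of handling the scan describes; the variable names come from the report, but the defaults and parsing here are assumptions:

```python
import os

# Simulate a user setting the variables the scan found (names from the
# report; values and fallback defaults are illustrative assumptions).
os.environ["FINVIZ_EXPIRY_DAYS"] = "14"
os.environ["FINVIZ_TICKERS"] = "AAPL,MSFT"

expiry_days = int(os.environ.get("FINVIZ_EXPIRY_DAYS", "30"))
tz = os.environ.get("FINVIZ_TZ", os.environ.get("TZ", "UTC"))  # FINVIZ_TZ wins over TZ
tickers = [t for t in os.environ.get("FINVIZ_TICKERS", "").split(",") if t]
```

Because none of these are declared in the skill metadata, values already present in your shell environment will silently change crawler behaviour.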
Install Mechanism
There is no packaged install spec; installation relies on running scripts/install.py, which uses pip to install packages and runs the crawl4ai/playwright installers. This is a typical pattern for Python projects and does not download arbitrary archives from unknown URLs, but the Playwright browser install is a heavier operation (it downloads Chromium). The install script also runs systemctl/launchctl commands to enable services.
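One way to contain the installer's side effects is to run it inside a dedicated virtual environment. A sketch using only the standard-library venv module (the temp-dir target and the install.py invocation shown in the comment are illustrative, not the project's documented procedure):

```python
import sys
import tempfile
import venv
from pathlib import Path

# Create an isolated environment so pip installs and Playwright's Chromium
# download never touch the system Python. with_pip=False keeps this sketch
# fast and offline; use with_pip=True for a real install.
target = Path(tempfile.mkdtemp()) / "finviz-venv"
venv.EnvBuilder(with_pip=False).create(target)

bin_dir = target / ("Scripts" if sys.platform == "win32" else "bin")
python = bin_dir / ("python.exe" if sys.platform == "win32" else "python")
# Real usage (network-heavy, so not executed here):
#   subprocess.run([str(python), "scripts/install.py"], check=True)
```

Running the installer through the venv's interpreter also makes it easy to delete everything later: removing the venv directory removes the installed packages with it.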
Credentials
The skill does not request credentials or declare env vars, yet the code reads and uses several environment variables (FINVIZ_EXPIRY_DAYS, FINVIZ_TZ, TZ, FINVIZ_TICKERS) without listing them in metadata. More importantly, the crawler uses the third-party crawl4ai library/SDK: depending on that library's design, fetched pages or page metadata may be transmitted to remote services (a privacy/exfiltration risk). No secrets are requested, but network transmission of scraped content to a third party is a meaningful proportionality/privacy concern for a local crawler.
Persistence & Privilege
The installer creates user-level persistent services (systemd user unit or launchd plist) and attempts to enable them (systemctl --user enable). This grants persistent background execution of the crawler under the user account. The skill does not request 'always: true' and does not modify other skills' configs, but creating and enabling a persistent service is a notable privilege and operational effect the user should be aware of.
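To make the persistence concrete, here is a hypothetical reconstruction of the kind of user-level systemd unit the installer writes; the unit name, paths, and contents are assumptions, not taken from install.py, and the sketch writes to a temp directory rather than ~/.config/systemd/user/ so it has no persistent side effects:

```python
import tempfile
from pathlib import Path

# Illustrative unit file contents; real names/paths may differ.
unit = """\
[Unit]
Description=Finviz news crawler

[Service]
ExecStart=%h/Downloads/Finviz/venv/bin/python %h/Downloads/Finviz/crawler.py
Restart=on-failure

[Install]
WantedBy=default.target
"""
# The real installer would place this under ~/.config/systemd/user/ and run
# `systemctl --user enable` on it; we write to a temp dir instead.
unit_dir = Path(tempfile.mkdtemp())
unit_path = unit_dir / "finviz-crawler.service"
unit_path.write_text(unit)
```

If you later want to remove the persistence, `systemctl --user disable --now finviz-crawler.service` (with the actual unit name) stops and disables the service, after which the unit file can be deleted.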
What to consider before installing
- Review crawl4ai behaviour: the crawler imports and uses the crawl4ai SDK. That library may perform remote requests or forward scraped HTML to a hosted scraping service. If you need scraped content to remain local, inspect crawl4ai's docs/source, or replace the crawl4ai usage with a local-only Playwright solution.
- Run the install in a controlled environment: the installer runs pip installs and downloads Playwright browsers. Use a Python virtual environment (venv) or container to avoid polluting your system Python and to contain network activity.
- Inspect scripts before running: scripts/install.py writes service files, enables a user-level systemd/launchd service, and creates files under ~/Downloads/Finviz (or other configured paths). The query script can delete article files and DB rows when removing tickers; back up anything important.
- Check defaults and paths: SKILL.md and the scripts disagree on default paths (~/workspace/finviz vs ~/Downloads/Finviz). Set explicit --db / --articles-dir options to ensure data goes where you expect.
- If you need stronger privacy control: consider disabling automatic service creation, run the crawler manually on demand, or modify the code to use only local Playwright/browser automation (and confirm no third-party endpoints are contacted).
- Network and legal considerations: scraping sites may violate their terms of service. The code attempts to respect robots.txt and rate limits, but you should ensure compliance with any site you scrape.

If you want, I can: (1) point to the code locations that call crawl4ai for you to inspect further, (2) suggest a safe set of edits to force local-only crawling, or (3) produce commands to run the installer inside a venv and verify network activity during a test run.
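The remove-ticker operation deserves the "back up first" warning because SQLite deletions are not recoverable. A sketch of the destructive pattern using an in-memory database; the schema and table name are assumptions, but the shape (DELETE plus article-file removal) matches what the report describes:

```python
import sqlite3

# Illustrative schema; the real skill's table layout may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (ticker TEXT, title TEXT, path TEXT)")
db.executemany(
    "INSERT INTO articles VALUES (?, ?, ?)",
    [("AAPL", "a", "/tmp/a"), ("MSFT", "b", "/tmp/b"), ("AAPL", "c", "/tmp/c")],
)

# remove-ticker style deletion: rows are gone immediately, and the real tool
# would also unlink the article files at each row's `path`.
cur = db.execute("DELETE FROM articles WHERE ticker = ?", ("AAPL",))
removed = cur.rowcount
remaining = db.execute("SELECT COUNT(*) FROM articles").fetchone()[0]
```

Copying the .db file (and the articles directory) before any removal is the only undo mechanism.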

Like a lobster shell, security has layers — review code before you run it.

latest: vk977rhe5xn6n7x1a6t8xgrcr1x81mane


Runtime requirements

Bins: python3
