Skill v3.0.0

ClawScan security

finviz-crawler · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Feb 22, 2026, 10:09 PM
Verdict: suspicious
Confidence: medium
Model: gpt-5-mini
Summary
The skill largely matches its stated purpose (a local Finviz news crawler) but raises several mismatches and privacy/persistence concerns: most notably its use of the crawl4ai library (which may send scraped pages to a third-party service), undeclared environment variables, and installer behavior that writes and enables background services and modifies user files.
Guidance
What to consider before installing:

- Review crawl4ai behaviour: the crawler imports and uses the 'crawl4ai' SDK. That library may perform remote requests or forward scraped HTML to a hosted scraping service. If you need scraped content to remain local, inspect crawl4ai's docs/source, or replace the crawl4ai usage with a local-only Playwright solution before running the project.
- Run the install in a controlled environment: the installer runs pip installs and downloads Playwright browsers. Use a Python virtual environment (venv) or container to avoid polluting your system Python and to contain network activity.
- Inspect scripts before running: scripts/install.py writes service files, enables a user-level systemd/launchd service, and creates files under ~/Downloads/Finviz (or other configured paths). The query script can delete article files and DB rows when removing tickers; back up anything important first.
- Check defaults and paths: SKILL.md and the scripts disagree on default paths (~/workspace/finviz vs ~/Downloads/Finviz). Set explicit --db / --articles-dir options to ensure data goes where you expect.
- If you need stronger privacy control: disable automatic service creation, run the crawler manually on demand, or modify the code to use only local Playwright/browser automation (and confirm no third-party endpoints are contacted).
- Network and legal considerations: scraping sites may violate their terms of service. The code attempts to respect robots.txt and rate limits, but you should confirm compliance for any sites you scrape.

If you want, I can: (1) point to the code locations that call crawl4ai for you to inspect further, (2) suggest a safe set of edits to force local-only crawling, or (3) produce commands to run the installer inside a venv and verify network activity during a test run.
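Since the guidance notes the code "attempts to respect robots.txt and rate limits", that behaviour can be verified locally with the standard library alone, with no network access. This is a minimal sketch using `urllib.robotparser`; the robots.txt rules, user-agent string, and URLs below are illustrative, not taken from finviz.com:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt body, parsed offline so no request is made.
robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks each URL before fetching it.
print(parser.can_fetch("my-crawler", "https://example.com/news"))       # True
print(parser.can_fetch("my-crawler", "https://example.com/private/x"))  # False
print(parser.crawl_delay("my-crawler"))                                 # 5
```

Running the project's own fetch paths through a check like this is one way to confirm the compliance claim without trusting the review summary alone.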

Review Dimensions

Purpose & Capability
Note: The code and SKILL.md implement a local crawler + SQLite query tool as advertised (crawler, article files, DB, service support). The required binary (python3) is appropriate. Minor inconsistencies exist in defaults (SKILL.md mentions ~/workspace/finviz while scripts use ~/Downloads/Finviz), but overall the requested capabilities (pip packages, Playwright browsers) align with the crawler purpose.
Instruction Scope
Concern: Runtime instructions (and the included install.py) instruct the agent/user to run scripts that install packages, download Playwright browsers, create user systemd/launchd service files, and read/write files under the user's home. The code also reads environment variables (FINVIZ_EXPIRY_DAYS, FINVIZ_TZ, FINVIZ_TICKERS) that are not declared in the skill metadata. The crawler and query tools perform deletion of local files/DB rows (remove-ticker), which is expected for management but worth highlighting as a destructive operation.
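Because remove-ticker deletes both article files and DB rows, a cautious user can wrap the destructive step with a copy of the database first. A minimal sketch of that pattern, assuming a hypothetical articles(ticker, path) schema (the skill's real schema may differ):

```python
import shutil
import sqlite3
import time
from pathlib import Path

def backup_then_remove(db_path: Path, ticker: str) -> Path:
    """Copy the DB aside, then delete one ticker's rows.

    The articles(ticker TEXT, path TEXT) schema is assumed for
    illustration only.
    """
    backup = db_path.with_suffix(f".bak-{int(time.time())}")
    shutil.copy2(db_path, backup)  # cheap insurance before deleting
    with sqlite3.connect(db_path) as conn:
        conn.execute("DELETE FROM articles WHERE ticker = ?", (ticker,))
    return backup
```

The same idea applies to the article files themselves: copy the ticker's directory aside before invoking the query script's removal path.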
Install Mechanism
Note: There is no packaged install spec; installation relies on running scripts/install.py, which uses pip to install packages and runs crawl4ai/playwright installers. This is a typical pattern for Python projects; it does not download arbitrary archives from unknown URLs. Playwright/browser installs are heavier operations (they download Chromium). The install script also runs systemctl/launchctl commands to enable services.
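The sandboxing advice above can be followed with the standard library's venv module. A minimal sketch (the directory name is arbitrary; pass with_pip=True in real use, it is omitted here only to keep the sketch fast):

```python
import venv
from pathlib import Path

def make_sandbox(env_dir: Path) -> Path:
    """Create an isolated venv so pip installs and Playwright browser
    downloads stay out of the system Python. Returns the marker file
    that proves the environment was created."""
    venv.EnvBuilder(with_pip=False, clear=True).create(env_dir)
    return env_dir / "pyvenv.cfg"

# After creation, the installer would be run with the venv's interpreter:
#   <env_dir>/bin/python -m pip install <packages>
#   <env_dir>/bin/python scripts/install.py
```

A container gives stronger network containment than a venv; the venv only isolates the Python package state.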
Credentials
Concern: The skill does not request credentials or declare env vars, yet the code reads and uses several environment variables (FINVIZ_EXPIRY_DAYS, FINVIZ_TZ, TZ, FINVIZ_TICKERS) without listing them in metadata. More importantly, the crawler uses the third-party 'crawl4ai' library/SDK: depending on that library's design, fetched pages or page metadata may be transmitted to remote services (a privacy/exfiltration risk). No secrets are requested, but network transmission of scraped content to a third party is a meaningful proportionality/privacy concern for a local crawler.
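One way to neutralize the undeclared-variable concern is to pin every variable explicitly before running the crawler, rather than letting ambient shell state leak in. A sketch of that audit step; the default values here are illustrative guesses, not the skill's actual defaults:

```python
import os

def read_finviz_env(env=os.environ):
    """Resolve the env vars this review says the code reads
    (FINVIZ_EXPIRY_DAYS, FINVIZ_TZ, TZ, FINVIZ_TICKERS) into one
    explicit dict. Defaults are hypothetical."""
    return {
        "expiry_days": int(env.get("FINVIZ_EXPIRY_DAYS", "30")),
        "tz": env.get("FINVIZ_TZ", env.get("TZ", "UTC")),
        "tickers": [t for t in env.get("FINVIZ_TICKERS", "").split(",") if t],
    }
```

Printing this dict before a run makes the crawler's effective configuration visible, which is exactly what declared metadata would otherwise provide.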
Persistence & Privilege
Note: The installer creates user-level persistent services (a systemd user unit or launchd plist) and attempts to enable them (systemctl --user enable). This grants persistent background execution of the crawler under the user account. The skill does not request 'always: true' and does not modify other skills' configs, but creating and enabling a persistent service is a notable privilege and operational effect the user should be aware of.
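On Linux, the units the installer writes are plain files, so they can be audited before (or after) being enabled. A small sketch that lists user-level service files; the directory parameter would normally be ~/.config/systemd/user, and the unit names shown in the comment are hypothetical:

```python
from pathlib import Path

def list_user_units(unit_dir: Path) -> list[str]:
    """List user-level systemd service files so any unit written by an
    installer (e.g. a finviz-crawler.service) can be read and vetted
    before it runs in the background."""
    return sorted(p.name for p in unit_dir.glob("*.service"))

# To undo what the installer enabled, one would then run:
#   systemctl --user disable --now <name>.service
# (or, on macOS, launchctl unload the corresponding plist).
```

Reading the unit's ExecStart line also reveals exactly which script and arguments the background service would execute.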