suspicious.insecure_tls_verification
- Location: scripts/news_digest_v2/fetcher.py:266
- Finding: HTTPS certificate verification is disabled.
Advisory
Audited by static analysis on May 10, 2026.
Detected: suspicious.insecure_tls_verification
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
- Impact: A manipulated network response could lead to inaccurate or malicious content being included in the digest.
- Finding: After an SSL error, the scraper retries with TLS certificate verification disabled, which can let a network attacker alter fetched news pages.
- Evidence: `response = requests.get(url, headers=HEADERS, timeout=timeout, verify=False)`
- Recommendation: Prefer sources with valid HTTPS, remove or gate the verify=False fallback, and treat scraped content as untrusted (see the sketch below).
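One way to gate the fallback is to keep certificate verification on by default and allow the insecure retry only behind an explicit opt-in. The sketch below is illustrative: the `fetch_page` helper, `HEADERS` value, and `NEWS_DIGEST_ALLOW_INSECURE_TLS` variable are assumptions, not the skill's actual code.

```python
import os

import requests

# Illustrative values; the skill's real HEADERS and helper names may differ.
HEADERS = {"User-Agent": "news-digest/2.0"}

# Insecure retries stay off unless the operator explicitly opts in.
ALLOW_INSECURE_RETRY = os.environ.get("NEWS_DIGEST_ALLOW_INSECURE_TLS") == "1"


def fetch_page(url: str, timeout: int = 15) -> str:
    """Fetch a page, keeping TLS certificate verification on by default."""
    try:
        response = requests.get(url, headers=HEADERS, timeout=timeout)
    except requests.exceptions.SSLError:
        if not ALLOW_INSECURE_RETRY:
            # Surface the TLS problem instead of silently downgrading.
            raise
        # Explicit opt-in only; content fetched this way is still untrusted.
        response = requests.get(url, headers=HEADERS, timeout=timeout, verify=False)
    response.raise_for_status()
    return response.text
```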
- Impact: Future package changes or a compromised dependency source could affect the scripts the user runs.
- Finding: The documented setup installs unpinned Python packages; this is common and purpose-aligned, but package versions and provenance are not locked.
- Evidence: `pip install requests beautifulsoup4`
- Recommendation: Install in a virtual environment and consider pinning known-good dependency versions (see the sketch below).
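As a sketch of the pinning idea, a startup check can verify that installed packages match the versions the user has reviewed. The pinned versions and the check itself are assumptions for illustration; the real pins belong in a requirements file installed inside the virtual environment.

```python
from importlib.metadata import PackageNotFoundError, version

# Illustrative pins only; record the versions you have actually reviewed.
PINNED = {
    "requests": "2.32.3",
    "beautifulsoup4": "4.12.3",
}


def check_pins() -> None:
    """Fail fast if installed dependency versions drift from the pinned set."""
    for name, wanted in PINNED.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            raise SystemExit(f"Missing dependency: {name}=={wanted}")
        if installed != wanted:
            raise SystemExit(f"{name} is {installed}, expected {wanted}")


if __name__ == "__main__":
    check_pins()
```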
- Impact: If summarization is enabled, the LLM provider key should be treated as sensitive account access.
- Finding: The skill can use an optional LLM API key for summarization; this credential use is disclosed and aligned with the optional LLM feature.
- Evidence: `NEWS_DIGEST_LLM_API_KEY` | (empty) | LLM API key for Stage 2.5 summarization
- Recommendation: Set the API key only when needed, store it securely, and use a key scoped to this purpose if possible (see the sketch below).
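A minimal sketch of keeping the key optional: read it from the environment at runtime and skip Stage 2.5 entirely when it is absent, so the credential only has to exist on machines that actually summarize. The helper names are illustrative.

```python
import os
from typing import Optional


def get_llm_api_key() -> Optional[str]:
    """Return the Stage 2.5 key from the environment, or None if unset."""
    key = os.environ.get("NEWS_DIGEST_LLM_API_KEY", "").strip()
    return key or None


def summarization_enabled() -> bool:
    # Stage 2.5 runs only when a key is configured; nothing is hard-coded.
    return get_llm_api_key() is not None
```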
- Impact: Public news content is normally low sensitivity, but customized private or internal sources could be shared with an external LLM provider.
- Finding: When the optional LLM stage is enabled, scraped article content may be sent to the configured LLM API/provider.
- Evidence: `Stage 2.5: LLM → Batch LLM summarization (optional, requires API key)`
- Recommendation: Use a trusted LLM endpoint and avoid enabling LLM summarization for private sources unless that data sharing is acceptable (see the sketch below).
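One way to make that data-sharing decision explicit is to mark sources that may carry private content and exclude their articles from the LLM stage. The source record and field names below are assumptions for illustration, not the skill's data model.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Source:
    name: str
    url: str
    private: bool = False  # mark internal or restricted feeds as private


def articles_for_llm(articles: List[dict], sources: Dict[str, Source]) -> List[dict]:
    """Keep only articles whose source permits sending content to the LLM provider."""
    return [a for a in articles if not sources[a["source"]].private]
```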
- Impact: If the user configures the cron job, the scraper can run unattended and produce or forward digests daily.
- Finding: The skill documents an optional daily scheduled run; this is disclosed automation rather than hidden persistence.
- Evidence: `schedule: "0 20 * * *" # Daily 20:00 ... run: python scripts/news_digest_v2/run_all_stages.py`
- Recommendation: Enable scheduling only intentionally, and review the destination and contents before automatic sharing (see the sketch below).
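A sketch of one way to keep scheduled runs from sharing output automatically: write the digest to a local file by default and forward it only when an explicit flag is set. The flag name, file path, and delivery stub are illustrative assumptions.

```python
import os
from pathlib import Path


def send_digest(digest_html: str) -> None:
    """Placeholder for whatever delivery step the skill actually uses."""
    raise NotImplementedError("wire this to the real delivery step")


def deliver_digest(digest_html: str) -> None:
    # Default to a local preview file so an unattended run never forwards
    # output unless the operator has explicitly opted in.
    if os.environ.get("NEWS_DIGEST_AUTOSEND") == "1":
        send_digest(digest_html)
    else:
        out = Path("digest_preview.html")
        out.write_text(digest_html, encoding="utf-8")
        print(f"Digest written to {out} for review; autosend is disabled.")
```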