Media News Digest
PassAudited by ClawScan on May 10, 2026.
Overview
This skill is a coherent news-digest generator, but users should notice that it can fetch from external services, use optional API keys, archive reports, and send messages or emails when configured.
This appears suitable for its stated purpose. Before installing, decide whether you want recurring scheduled delivery, verify the Discord channel or email recipients, provide only the API keys you actually need, and use a dedicated workspace archive path.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent may run the skill's local scripts to fetch, process, and save digest data.
The skill instructs the agent to run local Python pipeline scripts and write output files. This is expected for a local news-digest pipeline, but users should understand it executes local commands.
python3 <SKILL_DIR>/scripts/run-pipeline.py ... --archive-dir <WORKSPACE>/archive/media-news-digest/ ... --output /tmp/md-merged.json --verbose --force
Install only from the expected source and configure workspace paths deliberately; review command output if the pipeline fails or falls back to individual scripts.
If configured, the agent can make API calls using your Twitter/X, Brave, Tavily, or twitterapi.io credentials.
The skill can use optional provider API keys for Twitter/X and web search. This credential use is disclosed and directly related to the digest purpose.
Twitter and web search API keys are passed via environment variables and used only for outbound API calls. No credentials are written to disk by this skill.
Use least-privilege API keys where possible, avoid sharing unnecessary keys, and rotate keys if you later remove the skill.
Fetched article titles, snippets, or posts could contain misleading text, but the skill acknowledges untrusted content handling and gives safety constraints.
The skill processes untrusted news and social-media content for summarization, while also giving mitigation guidance against unsafe interpolation.
Use this output to select articles — do NOT write ad-hoc Python to parse the JSON ... Do not interpolate fetched/untrusted content into shell arguments or email subjects
Keep the provided summarization and sanitization workflow; do not let fetched article text control shell commands, subjects, recipients, or delivery targets.
If you set up a cron prompt, the agent may continue generating and delivering digests on that schedule.
The skill supports scheduled recurring digest generation. This is disclosed and aligned with a daily/weekly news digest, but it creates ongoing agent activity if the user enables it.
## Cron Integration Reference `references/digest-prompt.md` in cron prompts. ### Daily Digest
Confirm the schedule, destination channel or email, and how to disable the cron job before enabling recurring delivery.
Old generated digest files in the configured archive path may be removed automatically as part of normal operation.
The skill includes a retention cleanup instruction. It is scoped to the skill's workspace archive, so it appears proportionate, but it is still a deletion action.
Save to `<WORKSPACE>/archive/media-news-digest/<MODE>-YYYY-MM-DD.md`. Delete files older than 90 days.
Use a dedicated archive directory for this skill and back up any reports you want to keep longer than 90 days.
