Morning Briefing AI News

v0.1.0

AI industry news aggregation from curated RSS feeds. Catches model releases from major labs.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the prompt below, then paste it into OpenClaw to install justintieu/ai-news-skill.

Prompt preview: Install & Setup
Install the skill "Morning Briefing AI News" (justintieu/ai-news-skill) from ClawHub.
Skill page: https://clawhub.ai/justintieu/ai-news-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-news-skill

ClawHub CLI


npx clawhub@latest install ai-news-skill
Security Scan
VirusTotal
Pending
OpenClaw
Benign
high confidence
Purpose & Capability
The skill is an RSS-based news aggregator and only requires access to a curated list of feed URLs (feeds.json). It declares no binaries, credentials, or unrelated resources, which is appropriate for the stated purpose.
Instruction Scope
Runtime instructions are limited to reading feeds.json, fetching RSS/Atom feeds via web_fetch, parsing and filtering items, optional deduplication/categorization, and optionally writing briefing files when the user explicitly requests --save. There are no instructions to read unrelated system files, access credentials, or post data to unexpected endpoints. It will contact external feed URLs (including raw.githubusercontent.com for some feeds) — expected for an RSS aggregator.
Install Mechanism
There is no install spec and no code files to write or execute. Being instruction-only minimizes install-time risk.
Credentials
The skill declares no environment variables, credentials, or config paths. The only filesystem access is to feeds.json in the skill directory and optional user-specified save paths, which matches the documented save behavior.
Persistence & Privilege
The skill is not always-on and does not request system-wide persistence. Saving to disk is explicitly gated behind user request (--save or explicit phrasing). It does not modify other skills or global agent settings in the instructions.
Assessment
This skill looks coherent for an RSS-based AI-news briefing. Before installing or enabling it:

  • Review feeds.json and the listed feed URLs (some are proxied via raw.githubusercontent.com/Olshansk); those remote files can change and will be fetched by the agent.
  • Fetching feeds makes outbound requests to the feed hosts (visible in server logs/IPs), which is normal for aggregators.
  • The skill only writes briefing files when you explicitly request --save. If you automate it (cron/job), double-check the save path and delivery channel configuration (e.g., Telegram chat IDs) so you don't inadvertently publish private data.
  • For tighter control, restrict autonomous invocation for the agent, or review/approve runs before saving or announcing.
  • For higher assurance, inspect the upstream GitHub repo (https://github.com/tensakulabs/ai-news-skill) and the Olshansk feed sources to confirm they meet your trust criteria.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📰 Clawdis
Tags: ai news, tldr, hackernews, decoder, morning briefing
136 downloads
0 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

AI News Skill

Fetch and summarize AI industry news from curated RSS feeds. Designed to catch major model releases (GPT, Claude, Gemini, Grok, GLM, etc.) and important AI news.

Usage

Invoke with /ai-news or ask for "AI news", "morning briefing", "what's new in AI".

Feeds

This skill fetches from 15 curated sources:

Aggregators (daily/weekly digests):

  • TLDR AI — Daily AI industry digest
  • Hacker News — Breaking tech news
  • The Decoder — AI-focused site
  • Last Week in AI — Weekly roundup
  • Marktechpost — Research coverage

Lab Blogs (via Olshansk/rss-feeds):

  • Anthropic News — Company announcements
  • Claude Blog — Product updates and guides
  • Anthropic Red Team — Safety research
  • OpenAI Research — GPT releases (⚠️ currently empty)
  • xAI News — Grok releases (⚠️ stale since Sept 2025)
  • Google AI — Gemini/DeepMind
  • Claude Code Changelog — CLI updates

AI Coding Tools (via Olshansk/rss-feeds):

  • Cursor Blog — Cursor editor releases
  • Windsurf Blog — Windsurf/Codeium releases
  • Ollama Blog — Local model runner updates

Instructions

Step 0: Idempotency Check (Save Mode Only)

If --save was requested AND --regenerate was NOT specified:

  1. Determine today's briefing path (same logic as Step 7.5)
  2. Check if ai-news-YYYY-MM-DD.md already exists at that path
  3. If it exists → read and return its contents to the user with note: (Cached — run with --regenerate to refresh)
  4. If it does not exist → proceed to Step 1

If --save was NOT requested, skip this step entirely.

Step 1: Load Feed Configuration

Read the feed list from feeds.json in this skill directory.

Step 2: Fetch Feeds (Parallel)

For each feed in feeds.aggregators, feeds.labs, and feeds.tools, use web_fetch to retrieve the RSS content:

web_fetch(url: "<feed_url>", extractMode: "text")

Fetch priority 1 feeds first, then priority 2.

Step 3: Parse RSS Items

RSS feeds are XML. Extract items using these patterns:

<item>
  <title>...</title>
  <link>...</link>
  <pubDate>...</pubDate>
  <description>...</description>
</item>

For Atom feeds:

<entry>
  <title>...</title>
  <link href="..."/>
  <published>...</published>
  <summary>...</summary>
</entry>
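Both item shapes can be extracted with the standard library. A minimal sketch, assuming the fields shown above and no exotic namespaces:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom elements are namespaced

def parse_items(xml_text: str) -> list[dict]:
    """Extract title/link/date/summary from an RSS 2.0 or Atom document."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):  # RSS 2.0: <item> children are plain text
        items.append({
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "date": item.findtext("pubDate", ""),
            "summary": item.findtext("description", ""),
        })
    for entry in root.iter(f"{ATOM}entry"):  # Atom: link is an href attribute
        link = entry.find(f"{ATOM}link")
        items.append({
            "title": entry.findtext(f"{ATOM}title", ""),
            "link": link.get("href", "") if link is not None else "",
            "date": entry.findtext(f"{ATOM}published", ""),
            "summary": entry.findtext(f"{ATOM}summary", ""),
        })
    return items
```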

Step 4: Filter by Date

Compute cutoff = current datetime - exactly 86400 seconds (24 hours). Discard any item where pubDate or published is earlier than cutoff. Do not round or approximate — if an item is 25 hours old, it is excluded.

Weekly mode (user says "week" or "last 7 days"): use 604800 seconds instead.
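A sketch of the exact-cutoff rule, assuming RSS dates are RFC 822 (`pubDate`) and Atom dates are ISO 8601 (`published`), which are the common cases:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def filter_recent(items: list[dict], window_seconds: int = 86400) -> list[dict]:
    """Keep items published within the window; drop undated or older items."""
    cutoff = datetime.now(timezone.utc) - timedelta(seconds=window_seconds)
    kept = []
    for item in items:
        raw = item.get("date", "")
        try:
            published = parsedate_to_datetime(raw)  # RFC 822 (RSS pubDate)
        except (TypeError, ValueError):
            try:
                # ISO 8601 (Atom); normalize trailing Z to an explicit offset
                published = datetime.fromisoformat(raw.replace("Z", "+00:00"))
            except ValueError:
                continue  # unparseable date: exclude rather than guess
        if published.tzinfo is None:
            published = published.replace(tzinfo=timezone.utc)
        if published >= cutoff:
            kept.append(item)
    return kept
```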

Step 5: Deduplicate

Same story may appear in multiple feeds. Dedupe by:

  1. Exact title match
  2. Similar title (>80% overlap)
  3. Same link URL

Keep the version from the highest-priority source.
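One way to implement the three dedupe rules, using `difflib`'s similarity ratio as the ">80% overlap" measure (an assumption; the skill does not pin down a metric) and relying on the input being sorted highest-priority source first:

```python
from difflib import SequenceMatcher

def dedupe(items: list[dict]) -> list[dict]:
    """Drop repeats by URL, exact title, or >80% title similarity.

    Assumes `items` is sorted highest-priority source first, so the
    first occurrence of a story is the one that survives.
    """
    kept: list[dict] = []
    seen_links: set[str] = set()
    for item in items:
        title = item.get("title", "").strip().lower()
        link = item.get("link", "")
        if link and link in seen_links:
            continue  # rule 3: same link URL
        if any(SequenceMatcher(None, title, k["title"].strip().lower()).ratio() > 0.8
               for k in kept):
            continue  # rules 1 and 2: exact match scores 1.0, similar >0.8
        kept.append(item)
        seen_links.add(link)
    return kept
```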

Step 6: Categorize

Assign each item to a category:

  • Model Releases — "release", "launch", "announce", model names (GPT, Claude, Gemini, Grok, GLM, Llama)
  • Research — "paper", "research", "study", "benchmark"
  • Industry — "funding", "acquisition", "hire", "layoff", "IPO"
  • Product — "feature", "update", "API", "pricing"
  • Opinion — "think", "believe", "future", "prediction"
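The keyword mapping above can be expressed as an ordered lookup where the first match wins; the "Industry" fallback for unmatched items is an assumption, not part of the spec:

```python
# Ordered (category, keywords) pairs; earlier categories take precedence.
CATEGORIES = [
    ("Model Releases", ["release", "launch", "announce",
                        "gpt", "claude", "gemini", "grok", "glm", "llama"]),
    ("Research", ["paper", "research", "study", "benchmark"]),
    ("Industry", ["funding", "acquisition", "hire", "layoff", "ipo"]),
    ("Product", ["feature", "update", "api", "pricing"]),
    ("Opinion", ["think", "believe", "future", "prediction"]),
]

def categorize(item: dict) -> str:
    """Return the first category whose keyword list matches title or summary."""
    text = f"{item.get('title', '')} {item.get('summary', '')}".lower()
    for name, keywords in CATEGORIES:
        if any(kw in text for kw in keywords):
            return name
    return "Industry"  # fallback bucket is an assumption, not in the spec
```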

Step 7: Format Output

# AI News Briefing
**Date:** [today's date]
**Sources:** [count] feeds checked
**Period:** Last 24 hours

---

## Model Releases
1. **[Title]** — [1 sentence summary]. [Source]
2. ...

## Research
1. **[Title]** — [1 sentence summary]. [Source]
...

## Industry News
1. **[Title]** — [1 sentence summary]. [Source]
...

---

**Coverage Gap:** Chinese AI labs (Zhipu, DeepSeek, Baidu, Alibaba) don't publish RSS. Check manually for major releases.

Step 7.5: Save to File (Optional)

Only activate this step if the user explicitly requested it — via --save, save to file, or similar phrasing. Do NOT save by default.

If saving is requested:

  1. Determine the save directory:
    • If the user specified a path in their request, use that path exactly
    • Otherwise default to ~/ai-news/ (the filename carries today's date in YYYY-MM-DD format)
  2. Write the full formatted briefing to <dir>/ai-news-YYYY-MM-DD.md using the Write tool
  3. Write a links reference file to <dir>/ai-news-YYYY-MM-DD-links.md using the Write tool:
    # AI News Links — YYYY-MM-DD
    ## Model Releases
    - [Title](url)
    ## Research
    - [Title](url)
    ## Industry News
    - [Title](url)
    
    Include every item from the briefing with its source URL.
  4. Confirm to the user:
    Saved to <dir>/ai-news-YYYY-MM-DD.md
    Links  → <dir>/ai-news-YYYY-MM-DD-links.md
    

Step 8: Handle Errors

If a feed fails to fetch:

  • Log the error
  • Continue with remaining feeds
  • Note failed sources in output
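This rule is the usual collect-and-continue loop; `run_briefing` and the injectable `fetch` callable (a stand-in for web_fetch) are illustrative names:

```python
def run_briefing(feeds: list[dict], fetch) -> tuple[dict[str, str], list[str]]:
    """Fetch every feed, collecting successes and a list of failed sources.

    `fetch` is any callable url -> body that raises on failure.
    """
    bodies: dict[str, str] = {}
    failed: list[str] = []
    for feed in feeds:
        try:
            bodies[feed["url"]] = fetch(feed["url"])
        except Exception as exc:
            # Log and keep going; one dead feed must not sink the briefing.
            print(f"warn: {feed['url']} failed: {exc}")
            failed.append(feed.get("name", feed["url"]))
    return bodies, failed
```

The `failed` list is what gets surfaced as "failed sources" in the briefing output.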

Customization

Edit feeds.json to add/remove feeds, change priorities, or enable optional feeds. See README.md for cron integration and full documentation.
