Install
```shell
openclaw skills install morning-ai
```

Daily scheduled AI news tracker. Collects updates from 80+ AI entities across 6 sources every 24 hours (default 08:00 UTC+8). Generates scored, deduplicated Markdown reports. Supports unattended cron/scheduled execution with date-stamped idempotent output.

Permissions overview: Collects public data from Reddit, Hacker News, GitHub, HuggingFace, arXiv, and X/Twitter. Requires optional API keys configured in `.env` or `~/.config/morning-ai/.env`. Writes report files to the current working directory. See Configuration for details.
Track 80+ AI entities across 6 sources. Collect updates from the past 24 hours, score and deduplicate them, and generate a structured Markdown daily report. Covers 4 types: Product (feature launches, version releases), Model (new models, open-source weights), Benchmark (leaderboard changes, papers), Funding (rounds, acquisitions, milestones).
Run this command FIRST before doing anything else:
```shell
if [ -f "$HOME/.config/morning-ai/.env" ] || [ -f ".claude/morning-ai.env" ] || [ -f ".env" ]; then echo "CONFIG_STATUS=READY"; else echo "CONFIG_STATUS=MISSING"; fi
```
Branch on the output:
- `CONFIG_STATUS=READY` — read the config file, report which sources are active, then proceed to Step 1.
- `CONFIG_STATUS=MISSING` — STOP. You MUST complete the First-Time Onboarding below before proceeding to Step 1.

CRITICAL: STOP HERE. You MUST complete all onboarding steps below interactively with the user. Do NOT run Step 1 (data collection) until a config file exists and the gate check returns `READY`. Running data collection without configuration will produce incomplete results.
Walk the user through setup interactively, waiting for their response at each step:
Ask about optional API keys (e.g. `GITHUB_TOKEN` for higher rate limits):

| Key | Source | Get it at |
|---|---|---|
| `GITHUB_TOKEN` | GitHub releases & repos (higher rate limit) | https://github.com/settings/tokens |
Ask about image generation (optional):

| Key | Description |
|---|---|
| `IMAGE_GEN_PROVIDER` | Provider: gemini / minimax / none (default: none) |
| `IMAGE_STYLE` | Style: classic / dark / glassmorphism / newspaper / tech |
| `GEMINI_API_KEY` | Google Gemini/Imagen (https://aistudio.google.com/apikey) |
| `MINIMAX_API_KEY` | MiniMax global (https://www.minimax.io) |
| `MINIMAX_API_KEY` | MiniMax cn (https://platform.minimaxi.com) |
Ask about social content distribution (optional):
Set `SOCIAL_ENABLED=true` and define channels in `~/.config/morning-ai/social_channels.json` (see skills/gen-social/SKILL.md for the schema). For quick single-channel setup, just set the `SOCIAL_PLATFORM`, `SOCIAL_STYLE`, and `SOCIAL_LANG` env vars.

Ask about message digest (optional):

Set `MESSAGE_ENABLED=true`; tune with `MESSAGE_MIN_SCORE` (default 5), `MESSAGE_MAX_ITEMS` (default 10), and `MESSAGE_LINKS` (bottom or inline).

Create the config file — collect the keys the user provides and write them to `~/.config/morning-ai/.env` in KEY=value format (one per line). Create the directory if needed: `mkdir -p ~/.config/morning-ai`
Confirm — show how many sources are now active (N/9)
Verify — re-run the gate check to confirm CONFIG_STATUS=READY:
```shell
if [ -f "$HOME/.config/morning-ai/.env" ] || [ -f ".claude/morning-ai.env" ] || [ -f ".env" ]; then echo "CONFIG_STATUS=READY"; else echo "CONFIG_STATUS=MISSING"; fi
```
Only proceed to Step 1 if the output is READY.
If the user wants to skip API key setup and use only free sources, create a minimal config file first, then proceed to Step 1:
```shell
mkdir -p ~/.config/morning-ai && echo "# morning-ai config — free sources only" > ~/.config/morning-ai/.env
```
| Parameter | Default | Example |
|---|---|---|
| `--lang` | en (English) | `--lang zh` (Chinese), `--lang ja` (Japanese) |
Rules:
- Unless `--lang` is explicitly specified, the report MUST be written entirely in English. All report text — titles, summaries, section headers, table labels, bullet points, "Why It Matters" analysis, and all other human-readable content — must be in English.
- When `--lang` is specified, use that language for all human-readable content instead.
- The `--lang` setting also applies to infographic prompt content (see Step 4).

Prerequisite: Step 0 must have returned `CONFIG_STATUS=READY`. If you have not completed Step 0, go back and run it now.
Run the Python collector to gather data from automated sources:
```shell
cd {SKILL_DIR} && python3 skills/tracking-list/scripts/collect.py --date {YYYY-MM-DD} --depth default -o {CWD}/data_{YYYY-MM-DD}.json
```
Parameters:
- `--date`: Target date, default today (YYYY-MM-DD)
- `--depth`: Collection depth — quick (fast, fewer results), default, or deep (comprehensive)
- `--sources`: Specific sources only, e.g. `--sources reddit hackernews github`
- `-o`: Output JSON file path

What it does:

- Collects updates within the time window [Yesterday 08:00, Today 08:00) UTC+8.
- Timeout: Allow up to 3 minutes for default depth, 5 minutes for deep.
If the user provides --exclude types (e.g. --exclude Funding), note which types to filter out in Step 3.
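The window and exclusion rules above can be sketched as follows. This is a minimal illustration, not the collector's actual implementation (that lives in `collect.py`); the fixed UTC+8 offset and the `type` field name are taken from the spec, while the function names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Fixed UTC+8 offset per the spec's default window (no DST handling).
TZ_UTC8 = timezone(timedelta(hours=8))

def collection_window(now=None):
    """Return the [yesterday 08:00, today 08:00) window in UTC+8."""
    now = now or datetime.now(TZ_UTC8)
    end = now.replace(hour=8, minute=0, second=0, microsecond=0)
    start = end - timedelta(days=1)
    return start, end

def apply_excludes(items, excluded_types):
    """Drop collected items whose type the user excluded via --exclude."""
    excluded = set(excluded_types)
    return [it for it in items if it.get("type") not in excluded]
```

An item dated outside the returned window is dropped before scoring; an excluded type never reaches the report.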
After the automated collection completes, use web search to discover recent X/Twitter updates from tracked entities. The tracked X handles are listed in {SKILL_DIR}/lib/entities.py under X_HANDLES.
Search X/Twitter in three layers, in priority order:
Layer 1 — Official Accounts (highest priority):
Search for recent posts from official company/product accounts. Handles are listed in entity files under {SKILL_DIR}/entities/.
Layer 2 — CEO / Core Personnel Accounts: Check key people's accounts for announcements, previews, and context that official accounts may not cover. Listed as "Key People" in each entity file.
Layer 3 — KOLs & Benchmark Institutions:
Check AI opinion leaders and evaluation accounts for independent analysis, benchmark results, and trending discoveries. See {SKILL_DIR}/entities/kol.md and {SKILL_DIR}/entities/benchmarks-academic.md.
For each search depth:
| Depth | Layer 1 (Official) | Layer 2 (Personnel) | Layer 3 (KOLs) |
|---|---|---|---|
| quick | Top 5 entities by priority | Skip | Skip |
| default | All major entities (~20) | Top CEO accounts (~10) | Top KOLs (~5) |
| deep | All entities with X handles | All personnel accounts | All KOLs + benchmark accounts |
Use web search queries like:
- `site:x.com @{handle} since:{yesterday}` — for specific account posts
- `site:x.com "{entity name}" AI announcement` — for broader discovery
- `site:x.com AI model release OR benchmark OR open-source {date}` — for trending AI news

When a discovered post is a retweet (RT) or quote tweet:

- Verify that the original post falls within the [Yesterday 08:00, Today 08:00) UTC+8 window.
- Record the original post's URL as `source_url`, not the RT/quote URL.
- Note the relay in `source_label` (e.g., "@AnthropicAI on X (via @karpathy RT)").

All included posts must fall within the [Yesterday 08:00, Today 08:00) UTC+8 window.

| Priority | Source Type | Credibility |
|---|---|---|
| 1 | Official blog / changelog | Highest |
| 2 | Official X/Twitter account | High |
| 3 | API changelog / docs | High |
| 4 | Official GitHub release | High |
| 5 | CEO / core personnel X account | Medium-High |
| 6 | Benchmark institution X account | Medium |
| 7 | KOL X account | Reference only — requires cross-verification |
Items sourced only from KOL accounts (Priority 7) should be scored conservatively and flagged for cross-verification with an official source.
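The cross-verification bookkeeping can be sketched like this. It is an illustrative sketch only: matching on `entity` + `title` is a simplifying assumption, and the authoritative record format and dedup rules are defined in skills/tracking-list/SKILL.md.

```python
def attach_cross_ref(items, update):
    """Attach a corroborating update to an existing item as a cross_ref
    instead of creating a duplicate; append it as a new item otherwise.
    Matching on entity + title is a simplifying assumption."""
    for item in items:
        if (item.get("entity"), item.get("title")) == (update.get("entity"), update.get("title")):
            item.setdefault("cross_refs", []).append({
                "source": update.get("source"),
                "source_url": update.get("source_url"),
                "source_label": update.get("source_label"),
            })
            return item
    items.append(update)
    return update
```

A KOL-only item gains a cross_ref (and a stronger verification score) only when an official source later reports the same event.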
For each verified X/Twitter update:
- Record `source: "x"`, `source_url` pointing to the original tweet, and `source_label` as "@{handle} on X".
- If the update duplicates an item already collected, attach it as a `cross_ref` rather than creating a duplicate — this strengthens the verification score.

After data collection completes, read the tracking specification to understand scoring criteria, record format, and timeliness rules:
Read {SKILL_DIR}/skills/tracking-list/SKILL.md
This specification defines:
Internalize the specification before writing the report. Pay special attention to the scoring reference tables and type classification guide.
Read `{SKILL_DIR}/templates/report.md` and write the report to `report_{YYYY-MM-DD}.md` in the working directory.

Report generation rules:

- Write in English unless `--lang` is explicitly specified. If source data is in a different language, translate it. Entity names (proper nouns) stay as-is.
- Every item must carry a `[Source Name](URL)` link pointing to the original content. This applies to all sections: TLDR, detailed entries, and compact table rows.
- See skills/tracking-list/SKILL.md → "Factual Detail Verification" for the full protocol. Never write a number from memory or inference — omit unverifiable details.
- Filter out excluded types (if `--exclude` was specified).
- Compact table rows carry `[[Source](URL)]` at the end.

This step is optional. Skip if no image generation capability is available or configured.
Read the infographic specification:
Read {SKILL_DIR}/skills/gen-infographic/SKILL.md
Generate cover + per-type sections + stitch (see Image Strategy in skills/gen-infographic/SKILL.md):
Cover image: Sort by score and select the top 4-5 updates (across all types). Build prompt using the Cover Prompt Template (9:16 portrait).
Per-type section images: For each type (Model/Product/Benchmark/Funding) with 7+ score items, build a prompt using the Per-Type Prompt Template (9:16 portrait).
- Default (`IMAGE_GEN_TYPES=auto`): only types with 7+ score items
- Set `IMAGE_GEN_TYPES=all` for all types, `none` for cover only

Generate images and stitch:
Option A — Native tool (Claude Code or other tools with built-in image generation): Use your tool's built-in image generation capability, one call per image. Then stitch sections together.
Option B — Python script batch mode (any environment, requires IMAGE_GEN_PROVIDER configured):
Build a manifest JSON with all prompts and outputs, then run:
```shell
cd {SKILL_DIR} && python3 skills/gen-infographic/scripts/gen_infographic.py --batch {CWD}/manifest.json --stitch
```
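For Option B, the manifest passed to `--batch` could be built like this. The `"prompt"`/`"output"` keys shown here are illustrative assumptions, not confirmed field names; the authoritative schema is defined in skills/gen-infographic/SKILL.md.

```python
import json

# Hypothetical manifest shape: one entry per image to generate.
manifest = {
    "images": [
        {"prompt": "Cover, 9:16 portrait: top-scored AI updates of the day",
         "output": "news_cover.png"},
        {"prompt": "Model section, 9:16 portrait: items of type Model with score >= 7",
         "output": "news_model.png"},
    ]
}

with open("manifest.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, ensure_ascii=False, indent=2)
```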
Supported providers: gemini, minimax. See Configuration for API keys. Requires pip install Pillow.
The final output is news_infographic_YYYY-MM-DD_combined.png — a single long image containing cover + all section images.
Insert images into the report:
Skip this step if SOCIAL_ENABLED is not true or no social channels are configured.
Generate platform-optimized copy and images for social media distribution (X, Xiaohongshu, etc.).
Read the social content specification:
Read {SKILL_DIR}/skills/gen-social/SKILL.md
Load channel configuration:
- If `SOCIAL_CHANNELS_FILE` exists → read the JSON channel list
- Else if the `SOCIAL_PLATFORM` env var is set → build a single channel from `SOCIAL_PLATFORM` + `SOCIAL_STYLE` + `SOCIAL_LANG`

For each channel:
a. Read the channel's template: {SKILL_DIR}/skills/gen-social/templates/{platform}/{style}.md
b. Select top items from the report data (filter by min_score, limit by items, translate if lang differs from source)
c. Generate copy following the template's format rules, tone, and character limits
d. Validate character counts — each tweet ≤ 280 chars, Xiaohongshu title ≤ 20 chars, body ≤ 1000 chars
e. Write copy to {CWD}/social/social_{YYYY-MM-DD}_{channel_id}.md
f. If channel has image: true — generate platform-adapted images using the same providers as Step 4
   (save as `{CWD}/social/social_{YYYY-MM-DD}_{channel_id}_{N}.png`)
g. Write a manifest to `{CWD}/social/social_{YYYY-MM-DD}_manifest.json` listing all generated files
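The character-count validation in step d can be sketched as below. The limits come from the spec above; plain `len()` is a simplification, since X counts URLs and some characters differently, so treat this as a pre-check, not an exact platform counter.

```python
def validate_copy(platform, tweets=(), title="", body=""):
    """Return a list of limit violations for a channel's generated copy."""
    errors = []
    if platform == "x":
        # Each tweet in a thread must fit the 280-char limit.
        for i, tweet in enumerate(tweets, start=1):
            if len(tweet) > 280:
                errors.append(f"tweet {i} is {len(tweet)} chars (limit 280)")
    elif platform == "xiaohongshu":
        # Xiaohongshu: title <= 20 chars, body <= 1000 chars.
        if len(title) > 20:
            errors.append(f"title is {len(title)} chars (limit 20)")
        if len(body) > 1000:
            errors.append(f"body is {len(body)} chars (limit 1000)")
    return errors
```

Run it before writing the copy file; a non-empty return means the copy must be shortened and regenerated.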
Channel config examples: See skills/gen-social/SKILL.md for the full JSON schema and quick-setup env vars.
Skip this step if MESSAGE_ENABLED is not true.
Generate a concise, share-friendly message digest suitable for messaging platforms (WeChat, Telegram, Slack, etc.). The digest provides bold titles with one-line summaries and reference links — optimized for copy-paste sharing.
Read the message specification:
Read {SKILL_DIR}/skills/gen-message/SKILL.md
Read the digest template:
Read {SKILL_DIR}/skills/gen-message/templates/digest.md
Select items from the report data (data_{YYYY-MM-DD}.json):
- Filter by `MESSAGE_MIN_SCORE` (default: 5)
- Limit to `MESSAGE_MAX_ITEMS` (default: 10)
- Use `MESSAGE_LANG` for language (default: from `--lang`)
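The selection above can be sketched as a small filter over the report data. The `score` field name is assumed from the report's scored items; defaults match the spec.

```python
import os

def select_digest_items(items):
    """Pick digest items using the MESSAGE_* env knobs (defaults per spec)."""
    min_score = int(os.environ.get("MESSAGE_MIN_SCORE", "5"))
    max_items = int(os.environ.get("MESSAGE_MAX_ITEMS", "10"))
    # Keep items at or above the score threshold, highest first.
    kept = [it for it in items if it.get("score", 0) >= min_score]
    kept.sort(key=lambda it: it.get("score", 0), reverse=True)
    return kept[:max_items]
```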
Generate a text digest following the template format:

- Write to `{CWD}/message_{YYYY-MM-DD}.md`
- Include a reference link (🔗 URL) after each item by default (or grouped at the bottom if `MESSAGE_LINKS=bottom`)

If image generation is available (`IMAGE_GEN_PROVIDER` is configured):
- Generate `{CWD}/message_{YYYY-MM-DD}.png` using the same image generation method as Step 4 (native tool or Python script)

Output files:
- `message_{YYYY-MM-DD}.md` — copy-paste text for messaging
- `message_{YYYY-MM-DD}.png` — accompanying image (only if image generation is configured)

The `entities/` directory contains detailed entity registries organized by tracking group:
| File | Scope | Entities |
|---|---|---|
| `entities/ai-labs.md` | Frontier AI Labs + China AI | OpenAI, Anthropic, Google, Meta AI, xAI, Microsoft, Qwen, DeepSeek, + 11 more |
| `entities/model-infra.md` | Model Infrastructure | NVIDIA, Mistral, Cohere, Perplexity, AWS, Together, Groq, Apple |
| `entities/coding-agent.md` | Coding Agent | Cursor, Cline, OpenCode, Droid, OpenClaw, Windsurf, + 5 more |
| `entities/ai-apps.md` | AI Applications | v0, bolt.new, Lovable, Replit, Lovart, Manus, + 2 more |
| `entities/vision-media.md` | Vision & Media | Midjourney, Runway, Pika, FLUX, ElevenLabs, + 7 more |
| `entities/benchmarks-academic.md` | Benchmarks & Academic | LMSYS, HuggingFace, arXiv channels, industry media |
| `entities/kol.md` | Key Opinion Leaders | Andrej Karpathy, AK, Andrew Ng, Swyx, Simon Willison, + 3 more |
| `entities/trending-discovery.md` | Trending Discovery | GitHub Trending, Product Hunt, Hacker News, Reddit |
Each file lists X/Twitter accounts, key people, official blogs, changelogs, GitHub repos, and other source URLs for every tracked entity. Read these files when you need to verify or supplement the automated collection.
Users can add their own tracked entities by placing markdown files in entities/custom/ (or ~/.config/morning-ai/entities/, or a path set via CUSTOM_ENTITIES_DIR). Custom entity files use a simplified format — see entities/custom-example.md for the template. Custom entities are automatically merged into the built-in registries at runtime and collected alongside the default 80+ entities.
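The lookup for custom entity directories can be sketched as below. This only resolves the documented locations; the merge into the built-in registries happens inside the collector at runtime, and the function name here is hypothetical.

```python
import os
from pathlib import Path

def custom_entity_dirs():
    """Return the custom-entity directories, in documented order,
    that actually exist on disk."""
    candidates = [
        Path("entities/custom"),
        Path.home() / ".config" / "morning-ai" / "entities",
    ]
    env_dir = os.environ.get("CUSTOM_ENTITIES_DIR")
    if env_dir:
        candidates.append(Path(env_dir))
    return [d for d in candidates if d.is_dir()]
```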
Morning-AI is designed for daily automated execution. Each run produces date-stamped files (report_YYYY-MM-DD.md, data_YYYY-MM-DD.json), making it safe to run on a recurring schedule.
Use --schedule to set a custom cron expression (default: 0 8 * * *):
| Parameter | Format | Default | Example |
|---|---|---|---|
| `--schedule` | Cron expression (5-field) | `0 8 * * *` (daily 8am) | `0 9 * * 1-5` (weekdays 9am) |
The schedule is passed to the agent's native scheduler (CronCreate, /loop, system cron, etc.). Morning-AI itself does not run a scheduler — it relies on the host agent or system to trigger runs.
Claude Code (CronCreate / loop):
```shell
/loop 24h /morning-ai
```
With custom schedule:
```shell
/morning-ai --schedule "0 9 * * 1-5"
```
System cron (manual setup):
```shell
0 8 * * * cd /path/to/workspace && claude -p "/morning-ai"
```
OpenClaw / always-on bot:
```yaml
schedule: "0 8 * * *"
skill: morning-ai
```
Config file locations:

- `.env` in the skill directory
- `~/.config/morning-ai/.env`

Example:

```
# ~/.config/morning-ai/.env
GITHUB_TOKEN=ghp_xxx
```
| Source | API | Rate Limit |
|---|---|---|
| Reddit | Public JSON | Generous |
| Hacker News | Algolia API | Generous |
| GitHub | Public API (optional token for higher limits) | 60 req/hr (unauthenticated) |
| HuggingFace | Public API | Generous |
| arXiv | Public API | Generous |
| X/Twitter | Web search | Generous |
See skills/gen-message/SKILL.md for message digest configuration variables (MESSAGE_ENABLED, MESSAGE_MIN_SCORE, MESSAGE_MAX_ITEMS, etc.).
- API keys are stored locally in `.env` files and never transmitted except to their respective APIs.
- Writes report files (`report_*.md`, `data_*.json`), message digest files (`message_*.md`, `message_*.png`), and cache files to the skill/working directory.