Sports Science Daily
v1.0.0
Automated sports science intelligence engine — fetches 55+ sources (PubMed, expert blogs, wearable tech), filters noise, translates to Chinese, and syncs to...
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The stated purpose (aggregate 55+ sources, translate, sync to Feishu/Notion) is consistent with requesting Feishu credentials. However, the skill also requires Google Translate and, optionally, Notion credentials (referenced in the prose), yet no NOTION env vars or Google Translate API key are declared in requires.env. The metadata lists a GitHub homepage, but no project files are bundled — the instructions assume a local Python project that is not present in the skill package.
Instruction Scope
SKILL.md tells the agent/user to "navigate to the project directory" and run python3 main.py, read/write processed_history.json, and sync to remote APIs. Because no code files are included in the skill bundle, these runtime instructions cannot be executed as-is, and they grant the agent broad discretion (network calls, local file reads/writes). The instructions also reference external APIs (Google Translate, PubMed, various RSS feeds) and an optional Notion sync that are not consistently declared.
Install Mechanism
This is an instruction-only skill with no install spec and no files that will be written or executed by the platform, which lowers install risk. The SKILL.md references pip and Python dependencies, but those belong to an external project that is not bundled.
Credentials
The declared required env vars are FEISHU_APP_ID, FEISHU_APP_SECRET, and FEISHU_RECEIVE_ID, which match the Feishu sync purpose. However, the instructions also require Google Translate access (no API key env declared) and optionally NOTION_TOKEN / NOTION_PAGE_ID (Notion is not listed as required). The skill reads/writes a local processed_history.json file that is not declared under config paths. These missing declarations and undisclosed credential uses create ambiguity about which secrets will actually be used or needed.
Persistence & Privilege
The skill does not set always:true and uses default invocation behavior. It does not request persistent or privileged platform settings. It mentions writing a local history file (project-local), which is normal for this functionality but should be confirmed in the upstream code.
What to consider before installing
Do not hand over real Feishu/Notion/Google credentials or run anything yet. The skill bundle contains only documentation (SKILL.md / SKILL_ZH.md) and no code — but its instructions assume a local Python project (main.py, src/, requirements.txt). Before installing or running:
- Ask the skill owner for the repository or packaged source and verify the GitHub homepage URL actually hosts the project; review the code yourself or have someone you trust audit it.
- Confirm exactly which API keys are required (Google Translate key, Notion token) and why; ensure those env vars are declared and scoped (least privilege). Never paste long-lived credentials into an unverified skill.
- Inspect the exporter code (feishu.py, notion.py) to see what data they send and to which endpoints; ensure no unexpected external endpoints exist.
- Be cautious about the skill reading/writing local files (processed_history.json): check for any code that might read other filesystem locations or exfiltrate unexpected data.
If you cannot obtain the upstream code for review, or the author cannot justify the missing/undocumented credential usage, treat this skill as untrusted and do not run it with real credentials. If you want to proceed safely, run it in an isolated environment (throwaway VM/container) with test credentials and network monitoring first.
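One concrete pre-install check along these lines: compare the env-var names that the SKILL.md prose mentions against the ones actually declared in requires.env. A rough sketch (the function name and the env-var regex are illustrative, not part of the skill):

```python
import re

def undeclared_env_refs(skill_md: str, declared: set) -> set:
    """Env-var-style tokens (UPPER_SNAKE_CASE) referenced in a SKILL.md
    but missing from the skill's declared requires.env list."""
    referenced = set(re.findall(r"\b[A-Z][A-Z0-9]*(?:_[A-Z0-9]+)+\b", skill_md))
    return referenced - declared

# Example mirroring this scan's findings:
doc = "Set FEISHU_APP_ID and FEISHU_APP_SECRET; Notion sync uses NOTION_TOKEN."
declared = {"FEISHU_APP_ID", "FEISHU_APP_SECRET", "FEISHU_RECEIVE_ID"}
print(sorted(undeclared_env_refs(doc, declared)))  # ['NOTION_TOKEN']
```

Any name this surfaces is a credential the skill may ask for without having disclosed it up front.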
Sports Science Daily — AI Agent Skill
An automated intelligence engine that aggregates 55+ global sports science sources into a single daily report, with smart filtering, auto-translation, and multi-platform sync.
What It Does
- Fetches peer-reviewed papers from 23 PubMed journals (BJSM, Sports Medicine, JSCR, MSSE, etc.)
- Crawls RSS feeds from 14 expert blogs/podcasts (Huberman, Attia, Nuckols, Dr. Mike, NSCA, etc.)
- Monitors 18 industry sources (The Quantified Scientist, DC Rainmaker, Oura, Garmin, ScienceDaily, ACSM, etc.)
- Filters noise using a 4-layer keyword system (positive/research/strong/negative keywords + trusted source whitelist)
- Translates all content to Chinese (or any target language) via Google Translate API
- Sorts each section by date (newest first)
- Deduplicates against local history to prevent repeat content
- Syncs the final report as a Feishu Cloud Document with notification card, and optionally to Notion
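The 4-layer keyword filter is only described at a high level; the actual keyword lists and their precedence live in the upstream src/config.py and src/formatter.py, which are not bundled. A minimal sketch of one plausible interpretation (function name, argument layout, and layer precedence are all assumptions):

```python
def keep_item(title, source, positive, research, strong, negative, trusted_sources):
    """Hypothetical 4-layer filter: trusted sources and 'strong' keywords
    pass unconditionally, 'negative' keywords veto, and otherwise the item
    needs at least one positive or research keyword."""
    t = title.lower()
    if source in trusted_sources:        # whitelist: trusted sources always pass
        return True
    if any(k in t for k in strong):      # strong keywords always pass
        return True
    if any(k in t for k in negative):    # negative keywords veto
        return False
    return any(k in t for k in positive + research)

print(keep_item("New RCT on creatine and sprint performance", "Some Blog",
                positive=["creatine"], research=["rct"], strong=[],
                negative=["giveaway"], trusted_sources={"BJSM"}))  # True
```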
Prerequisites
- Python 3.8+ with `feedparser` and `requests` installed (`pip3 install -r requirements.txt`)
- Feishu App Credentials (for cloud document sync):
  - `FEISHU_APP_ID`: Feishu app ID
  - `FEISHU_APP_SECRET`: Feishu app secret
  - `FEISHU_RECEIVE_ID`: Target user/chat ID for message card
- (Optional) Notion Integration for Notion page sync: `NOTION_TOKEN` and `NOTION_PAGE_ID`
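Because some credentials are optional and others are undeclared, it is worth failing fast on missing env vars before any network call is made. A minimal pre-flight check (variable lists taken from the prerequisites above; the helper itself is not part of the skill):

```python
import os

REQUIRED = ["FEISHU_APP_ID", "FEISHU_APP_SECRET", "FEISHU_RECEIVE_ID"]
OPTIONAL = ["NOTION_TOKEN", "NOTION_PAGE_ID"]  # only needed if Notion sync is on

def missing_required(env=os.environ):
    """Names from REQUIRED that are unset or empty in the given environment."""
    return [v for v in REQUIRED if not env.get(v)]

# Example with a partial environment:
print(missing_required({"FEISHU_APP_ID": "cli_xxx"}))
# ['FEISHU_APP_SECRET', 'FEISHU_RECEIVE_ID']
```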
Instructions
1. Navigate to the project directory: ensure you are in the `sports-science-daily` project root.
2. Run the update: `python3 main.py --days 2`
3. Available options:

   | Flag | Default | Description |
   | --- | --- | --- |
   | `--days N` | 7 | Lookback period in days |
   | `--no-history` | off | Force re-fetch all items (ignore dedup) |
   | `--no-bloggers` | off | Skip blogger feeds, only industry + PubMed |
   | `--lang LANG` | zh-CN | Output language (en, es, ja, etc.) |

4. Output:
   - Local Markdown file: `YYYY-MM-DD_运动科学日报.md` (运动科学日报 = "Sports Science Daily")
   - Feishu Cloud Document (auto-created with shareable link)
   - Feishu message card sent to configured recipient
   - Updated `processed_history.json` for deduplication
5. "No New Content" scenario: if output shows "🎉 没有发现新内容" ("no new content found"), increase `--days` or use `--no-history`.
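The real main.py is not included in the bundle, so its CLI can only be reconstructed from the flags table above. A hypothetical argparse sketch of that interface:

```python
import argparse

def build_parser():
    """CLI surface matching the documented flags; this is a reconstruction
    of the interface only, not the actual upstream main.py."""
    p = argparse.ArgumentParser(prog="main.py")
    p.add_argument("--days", type=int, default=7, metavar="N",
                   help="Lookback period in days")
    p.add_argument("--no-history", action="store_true",
                   help="Force re-fetch all items (ignore dedup)")
    p.add_argument("--no-bloggers", action="store_true",
                   help="Skip blogger feeds, only industry + PubMed")
    p.add_argument("--lang", default="zh-CN", metavar="LANG",
                   help="Output language (en, es, ja, etc.)")
    return p

args = build_parser().parse_args(["--days", "2"])
print(args.days, args.lang)  # 2 zh-CN
```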
Project Architecture
```
main.py              # CLI entry point
src/
├── config.py        # All sources, journals, blocklists
├── crawler.py       # RSS + PubMed API fetching
├── formatter.py     # Markdown generation + keyword filtering
├── translator.py    # Google Translate API
├── history.py       # Deduplication management
└── exporters/
    ├── feishu.py    # Feishu cloud doc sync + message card
    └── notion.py    # Notion page sync
```
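history.py is likewise not bundled. Based only on the documented behavior (read/write `processed_history.json`, skip repeated items), a dedup store would look roughly like this sketch (class and method names are assumptions):

```python
import json
import os

class History:
    """Hypothetical dedup store: remember item IDs (e.g. article URLs)
    in processed_history.json and report repeats as not-new."""

    def __init__(self, path="processed_history.json"):
        self.path = path
        self.seen = set()
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                self.seen = set(json.load(f))

    def is_new(self, item_id):
        """True the first time an ID is seen; False on repeats."""
        if item_id in self.seen:
            return False
        self.seen.add(item_id)
        return True

    def save(self):
        """Persist the seen-ID set back to disk."""
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(sorted(self.seen), f, ensure_ascii=False)
```

This is also the file a reviewer should watch: anything the upstream code writes beyond this one JSON file and the dated `.md` report deserves scrutiny.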
Security & Privacy
- External APIs: PubMed (eutils.ncbi.nlm.nih.gov), Google Translate, Feishu OpenAPI, Notion API, various RSS feeds
- Local files: Reads/writes `processed_history.json` and `.md` reports
- No PII exposure: Only fetches public research data and news feeds