Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Eir Daily Content Curator

v1.2.0

Daily AI news curation — learns interests from your profile, searches the web, delivers structured summaries and daily briefs. Use when: 'set up daily news',...

Security Scan
Capability signals
Crypto · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code, scripts, and SKILL.md are coherent with a personalized content curation pipeline: searching, crawling, packing tasks, generating summaries, and optionally delivering to Eir. Required binaries (python3) and local config usage align with the described functionality. Minor mismatch: SKILL.md declares no required environment variables, but references and code accept an optional EIR_API_URL env var and rely on a stored Eir API key for Eir mode (expected for an external delivery mode).
Instruction Scope
Several prompt/reference files explicitly tell the agent to read USER.md and the 'main agent's USER.md' and to 'analyze recent interactions' to extract interests and to use reader_context when generating content. Those instructions potentially cause the agent to read workspace files and conversation/activity context outside the skill's own config/data directory. The writer prompts then require embedding that reader_context into generated items that may be POSTed to api.heyeir.com in Eir mode. This is scope-expanding behavior (reading other workspace data and potentially sending personalized content externally) and contradicts the SECURITY.md statement that 'Local files or conversation history' are not sent externally.
Install Mechanism
No install spec; the package is instruction + scripts that the user runs locally (python scripts and an optional node connect script). That is the lowest install risk — nothing is automatically downloaded/executed by an installer. Network code is present (HTTP fetches to search/crawl/Eir), but those are expected for a crawler/curation tool.
Credentials
The skill does not declare required environment credentials in metadata, yet the Eir integration expects an API key (stored in config/eir.json via connect.mjs) and the docs allow overriding the base URL with EIR_API_URL. More importantly, prompts instruct extracting interests and reader context from local files and (in Eir mode) submitting interest signals and generated content to api.heyeir.com. That means personal profile data (USER.md content) may be used in payloads sent externally when Eir mode is enabled. The required secrets and files are mostly local and opt-in, but the documentation/SECURITY.md contains inconsistent claims about what is or is not sent externally.
Persistence & Privilege
The skill is not always-enabled and does not request elevated system privileges. It writes configuration and credentials to its own config/ directory (connect.mjs saves config/eir.json), which is expected for an integration. There is no evidence it alters other skills' configs or system-wide settings.
What to consider before installing
Before installing or running this skill:

  • Decide whether you will use Eir mode (external delivery) or only standalone local mode. Eir mode requires pairing, stores an API key in config/eir.json, and POSTs generated content and interest signals to api.heyeir.com. Only enable it if you trust the external service and understand what user/profile data will be included.
  • Inspect and sanitize USER.md and any workspace files the agent might read. The interest-extraction and writer prompts explicitly tell the agent to read USER.md and the 'main agent' USER.md and to analyze 'recent interactions', which can include private context and conversation history. If that data is sensitive, either remove it or run the skill in an isolated environment.
  • Test in standalone mode first (no connect step) to validate the search/crawl/generate flows locally; the code is runnable (python3 scripts/setup.py, python3 -m pipeline.*).
  • Clarify inconsistencies: SECURITY.md states 'Local files or conversation history' are not sent externally, but the Eir mode and writer prompts imply reader_context and interest signals may be sent once Eir is enabled. Ask the publisher to confirm exactly which USER.md fields are transmitted and whether reader_context is included in POSTs.
  • Review the connect.mjs behavior before running it: it contacts the Eir API and writes the returned apiKey into config/eir.json. Keep that file gitignored and secure.
  • If you have limited trust, run the pipeline in an isolated VM or container with networking disabled, or skip the connect step and any cron jobs that would trigger automatic POSTs to Eir.
  • For stronger guarantees, ask the maintainer to: (1) explicitly list required environment variables/credentials in the skill metadata; (2) provide a toggle that prevents any external POSTs even if Eir-mode files exist; and (3) document exactly which workspace files the agent will read so you can audit or sanitize them beforehand.
scripts/connect.mjs:18
File read combined with network send (possible exfiltration).
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📰 Clawdis
Bins: python3
Tags: content pipeline · curate content · daily digest · daily news · interest tracking · latest · personalized news briefing
47 downloads
0 stars
5 versions
Updated 2h ago
v1.2.0
MIT-0

Daily Content Curator

Curates personalized content based on your interests. Supports two modes:

  • Standalone — works locally, no external account needed
  • Eir — full AI-powered curation with heyeir.com delivery

Standalone Mode

Flow

1. Configure          → Set up search API + interests (one-time)
2. Search             → Search API queries for each interest topic
3. Select + Crawl     → Agent picks best candidates, fetches full content
4. Generate           → Agent writes structured summaries from task files
5. Daily Brief        → Agent compiles brief from generated items

Steps 1-3 are Python scripts you run directly. Steps 4-5 are agent-driven — you tell your OpenClaw agent to read the task files and generate content. The agent uses whatever LLM model is configured in your OpenClaw session (e.g. Claude, GPT-4, Gemini).

Quick Start

1. Initialize workspace — creates config/ directory and default settings:

python3 scripts/setup.py --init --settings '{
  "mode": "standalone",
  "language": "en",
  "search": {
    "search_base_url": "https://api.search.brave.com/res/v1",
    "search_api_key": "YOUR_BRAVE_API_KEY"
  }
}'

Search provider examples:

Provider       search_base_url                       Get API key
Brave Search   https://api.search.brave.com/res/v1   brave.com/search/api
Tavily         https://api.tavily.com                tavily.com

Want richer results? Install SearXNG and/or Crawl4AI locally. Add searxng_url and crawl4ai_url to your search config — they work as fallback or primary search/crawl providers.
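Putting that together, a settings payload with both local fallbacks configured might look like the sketch below. The searxng_url and crawl4ai_url key names come from the description above; their placement inside the search block, and the example port numbers, are assumptions to verify against your own setup.

```json
{
  "mode": "standalone",
  "language": "en",
  "search": {
    "search_base_url": "https://api.search.brave.com/res/v1",
    "search_api_key": "YOUR_BRAVE_API_KEY",
    "searxng_url": "http://localhost:8080",
    "crawl4ai_url": "http://localhost:11235"
  }
}
```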

2. Set up interests — edit the generated config/interests.json:

{
  "topics": [
    {"label": "AI Agents", "keywords": ["autonomous agents", "tool use"], "freshness": "7d"},
    {"label": "Prompt Engineering", "keywords": ["prompting", "chain-of-thought"]}
  ],
  "language": "en",
  "max_items_per_day": 8
}

Interests can also be auto-extracted — see references/interest-extraction-prompt.md.
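Before running the pipeline, you can sanity-check your edited interests file against the shape shown above. This is a minimal sketch based only on the documented fields; it is not the skill's own validator (that lives in scripts/pipeline/validate_content.py).

```python
import json

# Minimal sanity check for config/interests.json, mirroring the example
# shape above (topics with label/keywords). Hypothetical helper, not
# part of the skill itself.
def check_interests(raw: str) -> list[str]:
    problems = []
    data = json.loads(raw)
    topics = data.get("topics", [])
    if not topics:
        problems.append("no topics defined")
    for t in topics:
        if not t.get("label"):
            problems.append("topic missing label")
        if not t.get("keywords"):
            problems.append(f"topic {t.get('label', '?')!r} has no keywords")
    return problems

sample = '{"topics": [{"label": "AI Agents", "keywords": ["tool use"]}], "language": "en"}'
print(check_interests(sample))  # → []
```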

3. Run the search + crawl pipeline (from the scripts/ directory):

cd scripts
python3 -m pipeline.search              # Search for each topic
python3 -m pipeline.candidate_selector  # Group results for agent selection
python3 -m pipeline.crawl               # Fetch full content
python3 -m pipeline.pack_tasks          # Bundle into task files

Note: all python3 -m pipeline.* commands must be run from the scripts/ directory.

4. Generate content (agent-driven):

After pack_tasks, task files are in data/v9/tasks/. Tell your OpenClaw agent:

Read the task files in data/v9/tasks/ and generate content for each one.
Use the writer prompt in references/writer-prompt-standalone.md.
Save output to data/output/{YYYY-MM-DD}/.

Or schedule the full pipeline as a cron job:

openclaw cron add --name "daily-curate" \
  --cron "0 8 * * *" --tz "Asia/Shanghai" \
  --session isolated \
  --message "Read SKILL.md for eir-daily-content-curator, then run the full standalone pipeline: search → select → crawl → pack → generate content from task files → compile daily brief."

Output

Content saved to data/output/{YYYY-MM-DD}/. Daily brief compiles the top items:

# Daily Brief — 2026-04-20

🔥 **Meta cuts 8,000 jobs for AI pivot** — ...
📡 **China bans AI companions for minors** — ...
🌱 **New prompt engineering benchmark** — ...

Dependencies

Required: Python 3.10+ (standard library only — no pip install needed).

Optional: Node.js 18+ (only for Eir connect script). SearXNG (fallback search). Crawl4AI (fallback crawl).


Eir Mode

Full curation with delivery to the Eir app via a 3-job pipeline:

Job A: material-prep     → Search → Select → Crawl → Pack tasks
Job B: content-gen       → Spawn subagents → Generate → POST to Eir
Job C: daily-brief       → Check status → Fill gaps → Compile brief → POST + Deliver
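The ordering above can be sketched as a simple orchestration: Jobs A and B run in sequence, while Job C always runs, since per the table it checks status and fills gaps left by earlier jobs. This is an illustrative sketch, not the skill's actual scheduler; in practice each job is a cron entry (see references/eir-setup.md).

```python
# Illustrative sketch of the three Eir-mode jobs. Job names come from
# the table above; run_job stands in for whatever executes each job.
def run_pipeline(run_job):
    results = {}
    for job in ["material-prep", "content-gen"]:
        results[job] = run_job(job)
    # daily-brief runs regardless, so it can fill gaps from earlier jobs
    results["daily-brief"] = run_job("daily-brief")
    return results

# Example with a stub runner where content-gen fails:
print(run_pipeline(lambda job: job != "content-gen"))
# → {'material-prep': True, 'content-gen': False, 'daily-brief': True}
```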

Setup

  1. Get a pairing code from heyeir.com → Settings → Connect OpenClaw
  2. Run: node scripts/connect.mjs <PAIRING_CODE>
  3. Set "mode": "eir" in config/settings.json

For full Eir setup, cron configuration, content rules, and API details, see references/eir-setup.md.


Pipeline Modules

All in scripts/pipeline/:

Module                 Purpose
search.py              Search via configurable API, SearXNG fallback
crawl.py               Fetch content via Browse API, Crawl4AI fallback
grounding.py           Configurable search API client
candidate_selector.py  Group results, prepare for agent selection
pack_tasks.py          Bundle candidates into task files
validate_content.py    Validate generated content against spec
config.py              Shared configuration and path resolution
eir_config.py          Workspace and credential resolution

Search Fallback Chain

Search API (primary) → SearXNG (optional) → Crawl4AI/web_fetch (content)
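The chain above amounts to trying providers in order until one returns results. A sketch under that assumption; the provider callables here are stand-ins, not the real clients in scripts/pipeline/.

```python
# Sketch of the search fallback chain: try the primary search API first,
# then SearXNG if configured. A provider signals failure by raising or
# returning an empty result.
def search_with_fallback(query, providers):
    """providers: ordered list of (name, callable) pairs."""
    for name, fn in providers:
        try:
            results = fn(query)
        except Exception:
            continue
        if results:
            return name, results
    return None, []

# Example with stubs: the primary is down, SearXNG answers.
def primary(q): raise ConnectionError("API down")
def searxng(q): return [{"title": f"hit for {q}"}]

print(search_with_fallback("ai agents", [("primary", primary), ("searxng", searxng)]))
# → ('searxng', [{'title': 'hit for ai agents'}])
```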

References

File                                      Contents                               Used by
references/writer-prompt-eir.md           Content generation rules (Eir mode)    Agent
references/writer-prompt-standalone.md    Content generation rules (standalone)  Agent
references/content-spec.md                Field types, limits, validation rules  Agent
references/eir-setup.md                   Eir mode setup, cron, API endpoints    Agent / User
references/eir-api.md                     Full Eir API reference                 Agent
references/eir-interest-rules.md          Curation tier guidelines               Agent
references/interest-extraction-prompt.md  Interest extraction prompt             Agent

The writer-prompt-*.md files are instructions for the agent — the agent reads them to know how to generate content from task files. You don't need to read them unless customizing output format.


Security & Data Flow

This skill makes outbound network requests to:

  • Your configured search API (e.g. Brave, Tavily) — sends search queries based on your interest topics
  • heyeir.com API (Eir mode only, opt-in) — sends generated content summaries and interest signals

What is NOT sent externally:

  • Local files or conversation history
  • Environment variables or system credentials
  • Any data in standalone mode (unless you configure a search API)

Credentials are stored locally in config/eir.json (gitignored). See SECURITY.md for full details.
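Since config/eir.json holds your API key, you may want to confirm it is not readable by other users on the machine. A hypothetical check, not part of the skill; the {"apiKey": ...} layout is assumed from the connect.mjs behavior described above, and the permission bits apply on Unix-like systems:

```python
import json, os, stat, tempfile

# Warn if a key file is readable by group or other users.
def key_file_ok(path: str) -> bool:
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IRGRP | stat.S_IROTH))

# Demo against a throwaway file standing in for config/eir.json:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"apiKey": "example"}, f)
    path = f.name
os.chmod(path, 0o600)
print(key_file_ok(path))  # → True
```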


Quick Reference

Task                  Command
Initialize workspace  python3 scripts/setup.py --init --settings '{...}'
Check setup           python3 scripts/setup.py --check
Search                cd scripts && python3 -m pipeline.search
Select candidates     cd scripts && python3 -m pipeline.candidate_selector
Crawl                 cd scripts && python3 -m pipeline.crawl
Pack tasks            cd scripts && python3 -m pipeline.pack_tasks
Validate              cd scripts && python3 -m pipeline.validate_content
Connect Eir           node scripts/connect.mjs <PAIRING_CODE>
