Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ai-newsletter

v1.0.0

A daily AI‑focused newsletter pipeline that uses Brave Search to discover AI news, then Firecrawl to scrape and summarize selected articles, producing ≈20 cu...

by Jeff Yang (@j3ffyang)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for j3ffyang/ai-newsletter.

Prompt preview (Install & Setup):
Install the skill "ai-newsletter" (j3ffyang/ai-newsletter) from ClawHub.
Skill page: https://clawhub.ai/j3ffyang/ai-newsletter
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-newsletter

ClawHub CLI


npx clawhub@latest install ai-newsletter
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name and description (daily AI newsletter using Brave Search + Firecrawl) match the runtime instructions: searching, filtering, scraping, and summarizing. Requesting Brave and Firecrawl access is appropriate for the stated purpose. However, the registry metadata lists no required env vars or primary credential while the SKILL.md explicitly instructs users to set BRAVE_API_KEY and FIRECRAWL_API_KEY — a mismatch between claimed capabilities and declared requirements.
Instruction Scope
SKILL.md stays within the newsletter/scrape/summarize scope: it calls web_search, filter_ai_news, extract_urls, web_scrape, and language_model. It does not instruct reading arbitrary system files. It does, however, require two helper tools (filter_ai_news and extract_urls) which are not provided; the skill expects those to exist in the user's OpenClaw setup or be implemented by the user. Summarization implies sending scraped article text to whatever language_model backend is configured — this is expected but has privacy implications.
Install Mechanism
No install spec; instruction-only skill. That minimizes direct disk writes and remote-code install risk. The SKILL.md suggests optional pip installation and composio login, but those are optional and external to the skill bundle.
Credentials
The SKILL.md requires BRAVE_API_KEY and FIRECRAWL_API_KEY (and optionally composio credentials/config) but the registry metadata declares no required env vars or primary credential. This omission is problematic because it hides the fact that API keys (sensitive credentials) are necessary. Requiring API keys for the described integrations is proportionate, but the metadata should declare them explicitly. Also note that scraped article contents will be sent to the configured language model provider — that can expose content to third parties.
Persistence & Privilege
The skill is not always:true and does not request persistent/privileged system access. It does not modify other skills or system-wide settings according to the provided instructions.
What to consider before installing
  • Expect to provide BRAVE_API_KEY and FIRECRAWL_API_KEY (the SKILL.md requires them); the registry metadata currently does not declare these — ask the author to update the manifest or add the keys to your config.yaml/environment.
  • The skill expects two helper tools (filter_ai_news and extract_urls) that are not bundled. Inspect or implement those helpers yourself before running — they control filtering, deduplication, and URL extraction and could significantly change behavior.
  • Scraped article text will be summarized by your configured language_model backend. Verify your model provider's data-retention/privacy policy if scraped content is sensitive.
  • Running automated scraping can hit rate limits or violate site terms; review Firecrawl and target-site policies and set polite rate limits/whitelists as needed.
  • Because the package is instruction-only, there is no code to audit in the bundle; audit any helper-tool implementations and the config that wires the web_search/web_scrape tools before enabling the skill.
  • If you need higher assurance, request the author add the required env vars to the registry metadata and provide or reference canonical implementations of the helper tools so you can review them.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bs57bmwb1skc7eyq04qvdc985gkfk
60 downloads · 0 stars · 1 version · updated 2d ago
v1.0.0
MIT-0

🤖 Daily AI Newsletter (Brave + Firecrawl)

An OpenClaw skill that:

  • Uses Brave Search (web_search) to find recent AI‑related news.
  • Filters and ranks results, keeping ≈20 of the most relevant items.
  • Uses Firecrawl (web_scrape) to scrape each chosen article.
  • Summarizes each article with the model (language_model).
  • Produces a structured newsletter_items list, plus markdown_newsletter and json_newsletter outputs for your newsletter engine.

🧩 Prerequisites

Before this skill runs, you must:

  1. Install OpenClaw (Docker, pip, or GitHub clone).
  2. Expose MCP‑compatible tools or configure them in config.yaml.
  3. Register Brave Search API:
    • Account at https://search.brave.com/api.
    • Set BRAVE_API_KEY in your environment or config (a key-check sketch for both APIs follows this list).
    • In config.yaml:
      tools:
        web_search:
          backend: brave
          brave:
            api_key: YOUR_BRAVE_API_KEY
            api_url: https://api.search.brave.com
            mode: fresh
      
  4. Register Firecrawl:
    • Account at https://firecrawl.dev.
    • Set FIRECRAWL_API_KEY.
    • Configure a web_scrape tool:
      tools:
        web_scrape:
          backend: firecrawl
          firecrawl:
            composio_service: firecrawl
            # or
            # mcp_url: https://mcp.firecrawl.dev/...
            # api_key: YOUR_FIRECRAWL_API_KEY
      
  5. (Optional) Install and authorize Composio:
    pip install composio_pydantic
    composio login
    composio add firecrawl
    
  6. Define two helper tools (or ensure they exist in your OpenClaw):
    • filter_ai_news – filters and ranks search results.
    • extract_urls – extracts URLs from a list of articles.
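
Before running the skill, it can help to sanity-check both keys. The sketch below is only illustrative and is not part of the skill bundle: the endpoint paths and headers are assumptions based on the public Brave Search and Firecrawl docs, so confirm them against the current documentation before relying on it.

# key_check.py: illustrative sanity check for the two API keys (not part of the skill).
# Endpoint paths and headers are assumptions drawn from the public docs; verify them.
import os
import requests

def brave_ok(key: str) -> bool:
    resp = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        params={"q": "latest AI news", "count": 1},
        headers={"Accept": "application/json", "X-Subscription-Token": key},
        timeout=15,
    )
    return resp.ok

def firecrawl_ok(key: str) -> bool:
    resp = requests.post(
        "https://api.firecrawl.dev/v1/scrape",
        json={"url": "https://example.com"},
        headers={"Authorization": f"Bearer {key}"},
        timeout=60,
    )
    return resp.ok

if __name__ == "__main__":
    print("Brave key OK:", brave_ok(os.environ["BRAVE_API_KEY"]))
    print("Firecrawl key OK:", firecrawl_ok(os.environ["FIRECRAWL_API_KEY"]))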

🧭 Skill Overview

  • Goal: Produce a daily AI‑focused newsletter with ≈20 articles.
  • Method:
    1. Search latest AI news with Brave (web_search).
    2. Filter & rank results (filter_ai_news).
    3. Extract URLs of selected articles (extract_urls).
    4. Scrape & enrich each URL with Firecrawl (web_scrape) + summarization (language_model).
    5. Compile into markdown_newsletter and json_newsletter for your engine.

🪄 Skill Definition

skill: ai_newsletter_daily
description: Daily AI‑focused newsletter via Brave + Firecrawl.
version: 1.0.0

inputs:
  - name: target_news_count
    type: integer
    default: 20
    description: Number of AI articles to include (must be ≥ 1).

  - name: search_query
    type: string
    default: "latest AI news today"
    description: Query to send to Brave Search.

  - name: include_domains
    type: list[string]
    default: []
    description: Optional whitelist of domains (e.g., "arxiv.org").

  - name: exclude_domains
    type: list[string]
    default: ["youtube.com", "reddit.com", "facebook.com", "twitter.com"]
    description: Domains to skip.

outputs:
  - name: newsletter_items
    type: list[object]
    description: |
      Up to `target_news_count` AI‑related articles, each with
      title, URL, domain, summary, published_at, and relevance_score.

  - name: markdown_newsletter
    type: string
    description: Markdown‑formatted newsletter body for your engine.

  - name: json_newsletter
    type: object
    description: |
      JSON‑structured newsletter with date and article list
      for your email / RSS / CMS pipeline.

parameters:
  - name: max_brave_results
    type: integer
    default: 50
    description: Max number of results from Brave before filtering.

  - name: filter_by_ai_keywords
    type: boolean
    default: true
    description: Whether to score relevance by AI‑related keywords.

🧩 Step 1: Search for AI news (Brave)

Use web_search to get raw AI‑related links from Brave.

task:
  name: search_ai_news
  description: Fetch raw AI news results from Brave Search.

steps:
  - call:
      tool: web_search
      args:
        query: "{{ search_query }}"
        count: "{{ max_brave_results }}"

  - set:
      var: raw_results
      value: result.tool_output.results
      # or `tool_output.items` depending on your Brave adapter shape

🧩 Step 2: Filter and rank results

Run a helper tool that filters and ranks the raw results.

task:
  name: filter_ai_articles
  description: Filter and rank AI‑related articles from raw results.

depends_on:
  - search_ai_news

steps:
  - call:
      tool: filter_ai_news
      args:
        results: raw_results
        targets: target_news_count
        include_domains: include_domains
        exclude_domains: exclude_domains
        filter_by_ai_keywords: filter_by_ai_keywords

  - set:
      var: kept_articles
      value: result.tool_output.filtered_results

🔧 filter_ai_news implementation (in your code, not in YAML) should do the following; an illustrative Python sketch follows the list:

  • Remove duplicates by hostname+path.
  • Filter by include_domains/exclude_domains.
  • Score relevance to AI (using LLM or keyword‑based scoring).
  • Sort by relevance + freshness.
  • Keep only the top targets items.
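
As a starting point, here is a minimal Python sketch of such a helper using naive keyword scoring. The result field names (title, url, description, published_at) are assumptions about your Brave adapter's output shape, not part of the skill bundle; adjust them to match what web_search actually returns.

# filter_ai_news.py: illustrative helper, not bundled with the skill.
from urllib.parse import urlparse

# Naive substring keywords; refine or swap in LLM scoring as needed.
AI_KEYWORDS = ("artificial intelligence", "machine learning", "deep learning",
               "llm", "neural", "openai", "anthropic", "ai")

def filter_ai_news(results, targets=20, include_domains=None,
                   exclude_domains=None, filter_by_ai_keywords=True):
    """Deduplicate, domain-filter, score, and rank raw search results."""
    include_domains = include_domains or []
    exclude_domains = exclude_domains or []
    seen, kept = set(), []
    for r in results:
        parsed = urlparse(r["url"])
        key = (parsed.hostname, parsed.path)  # dedupe by hostname + path
        if key in seen:
            continue
        seen.add(key)
        domain = parsed.hostname or ""
        if include_domains and not any(d in domain for d in include_domains):
            continue
        if any(d in domain for d in exclude_domains):
            continue
        text = f"{r.get('title', '')} {r.get('description', '')}".lower()
        score = sum(kw in text for kw in AI_KEYWORDS) if filter_by_ai_keywords else 1
        if score == 0:
            continue
        kept.append({**r, "relevance_score": score})
    # Rank by relevance, then freshness (ISO published_at strings sort correctly).
    kept.sort(key=lambda a: (a["relevance_score"], a.get("published_at", "")),
              reverse=True)
    return {"filtered_results": kept[:targets]}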

🧩 Step 3: Extract URLs

Turn filtered kept_articles into a plain URL list.

task:
  name: extract_urls
  description: Extract article URLs from filtered results.

depends_on:
  - filter_ai_articles

steps:
  - call:
      tool: extract_urls
      args:
        articles: kept_articles

  - set:
      var: article_urls
      value: result.tool_output.urls

🔧 extract_urls implementation:

  • Should map [{ title, url, ... }] → urls (i.e., urls = [article.url for article in articles]).
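
A minimal version might look like this (assuming each article dict carries a url key):

def extract_urls(articles):
    # Preserve input order; skip any entries without a URL.
    return {"urls": [a["url"] for a in articles if a.get("url")]}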

🧩 Step 4: Deep‑dive scrape with Firecrawl

For each URL, scrape with Firecrawl and summarize with the model.

task:
  name: scrape_and_enrich_articles
  description: Scrape and enrich each selected article.

depends_on:
  - extract_urls

steps:
  - for_each:
      items: article_urls
      item_var: article_url
      steps:
        - call:
            tool: web_scrape
            args:
              url: "{{ article_url }}"

        - set:
            var: raw_content
            value: result.tool_output
            # Assumed to have metadata: { title, published_at, ... }

        - call:
            tool: language_model
            args:
              model: gpt-4
              prompt: |
                You are a summarizer for an AI‑focused newsletter.
                Given this article content from Firecrawl:
                ---
                {{ raw_content }}
                ---
                Summarize it in exactly one short paragraph.
                Do not include code or technical details unless crucial.
                Return only the paragraph, no headings or extra text.

        - set:
            var: summary
            value: result.tool_output.text

        - call:
            tool: enrich_article
            args:
              article:
                title: raw_content.metadata.title
                url: article_url
                domain: raw_content.metadata.url_domain || extract_domain(article_url)
                summary: summary
                published_at: raw_content.metadata.published_at || "unknown"

        - set:
            var: enriched_item
            value: result.tool_output.article

        - append:
            list: enriched_articles
            value: enriched_item

🔧 enrich_article implementation:

  • Should fill in domain from url and compute relevance_score (e.g., via keyword‑pattern or lightweight LLM scoring).
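
A hedged sketch of such a helper; the keyword scoring and field names are illustrative assumptions matching the args passed in Step 4, not a canonical implementation:

# enrich_article.py: illustrative helper, not bundled with the skill.
from urllib.parse import urlparse

AI_KEYWORDS = ("ai", "machine learning", "llm", "neural", "model")  # illustrative only

def enrich_article(article):
    # Fill in the domain from the URL and attach a lightweight relevance score.
    enriched = dict(article)
    if not enriched.get("domain"):
        enriched["domain"] = urlparse(article["url"]).hostname or ""
    text = f"{article.get('title', '')} {article.get('summary', '')}".lower()
    enriched["relevance_score"] = sum(kw in text for kw in AI_KEYWORDS)
    return {"article": enriched}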

🧩 Step 5: Compile the newsletter (20 items)

Sort and truncate to target_news_count, then render Markdown and JSON.

task:
  name: compile_newsletter
  description: Turn enriched articles into structured newsletter output.

depends_on:
  - scrape_and_enrich_articles

steps:
  - sort:
      list: enriched_articles
      by: "relevance_score DESC, published_at DESC"

  - set:
      var: final_list
      value: enriched_articles[0:target_news_count]
      # or via `tool: limit_list` if your runtime prefers tool calls

  - set:
      var: markdown_output
      value: |
        # 🤖 Daily AI Newsletter — {{ today }}

        {% for item in final_list %}
        ## {{ item.title }} ({{ item.domain }})

        {{ item.summary }}

        [Read more]({{ item.url }})  
        {% endfor %}

  - set:
      var: json_output
      value: |
        {
          "date": "{{ today }}",
          "articles": final_list
        }

  - output:
      newsletter_items: final_list
      markdown_newsletter: markdown_output
      json_newsletter: json_output

🚀 How to deploy on ClawHub

  1. Save this file as:

    skills/ai_newsletter_daily.md
    

    in your OpenClaw project or ClawHub skill folder.

  2. Make sure your OpenClaw stack exposes these tools:

    • web_search → Brave Search.
    • web_scrape → Firecrawl.
    • language_model → your LLM provider.
    • filter_ai_news, extract_urls, enrich_article → helper tools implemented in your codebase.
  3. In ClawHub, trigger the skill:

    • As a daily cron job (e.g., 0 9 * * *).
    • Or via a webhook that overrides search_query and target_news_count.
  4. Consume outputs:

    • markdown_newsletter → send to your Markdown‑based newsletter engine.
    • json_newsletter → pipe into your email engine, RSS generator, or CMS.
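
For example, a minimal consumer that writes both outputs to disk (the directory, filenames, and payload shape are assumptions; adapt this to your email, RSS, or CMS pipeline):

# consume_newsletter.py: illustrative consumer for the skill's outputs.
import json
from datetime import date
from pathlib import Path

def save_outputs(markdown_newsletter: str, json_newsletter: dict,
                 out_dir: str = "newsletters") -> None:
    # Write the Markdown body and the JSON payload under a dated filename.
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = json_newsletter.get("date") or date.today().isoformat()
    (out / f"{stamp}.md").write_text(markdown_newsletter, encoding="utf-8")
    (out / f"{stamp}.json").write_text(
        json.dumps(json_newsletter, indent=2, ensure_ascii=False),
        encoding="utf-8",
    )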

✅ Expected behavior

  • When ai_newsletter_daily runs:
    • Brave returns ≈50 AI‑related links.
    • filter_ai_news keeps ≈20 high‑relevance items.
    • web_scrape + language_model enrich each article.
    • compile_newsletter yields:
      • newsletter_items – structured list of 20 AI‑focused items.
      • markdown_newsletter – ready‑to‑render Markdown.
      • json_newsletter – JSON‑endpoint‑ready payload.

You can now land this as a production‑ready ClawHub‑style skill and extend filter_ai_news, enrich_article, etc., in your OpenClaw code.
