Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Feedship

v1.5.0

Manage RSS/Atom feeds, subscribe to websites, search and read articles. Use when working with feeds, RSS, Atom, subscribing to content sources, managing an i...

0 stars · 133 downloads · 1 current · 1 all-time
by Yan Zer0 (@yanpeipan)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yanpeipan/feedship.

Prompt Preview: Install & Setup
Install the skill "Feedship" (yanpeipan/feedship) from ClawHub.
Skill page: https://clawhub.ai/yanpeipan/feedship
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: uv
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install feedship

ClawHub CLI


npx clawhub@latest install feedship
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (RSS/Atom, discovery, semantic search) matches the instructions: installing feedship with 'ml' and 'cloudflare' extras, fetching feeds, and building embeddings is coherent. The declared required binary 'uv' is consistent with the provided install commands (uv tool install).
Instruction Scope
SKILL.md stays within feed-management scope: it instructs installing the tool, fetching feeds, discovery, and running searches. It does suggest persistent changes (adding exports to ~/.bashrc) and running feedship fetch --all to populate local vector DB; these are reasonable for the described functionality but are actions that modify user environment and create local storage.
Install Mechanism
There is no built-in install spec in the skill; instructions tell the user to run 'uv tool install' pulling from PyPI or a GitHub repo. Using PyPI/GitHub is expected, but the install will pull heavy ML dependencies (sentence-transformers, chromadb) and a scraping extra; those are proportionate but increase attack surface. The GitHub URL shown is a direct repo install (typical but worth verifying).
Credentials
No required credentials are declared. The doc suggests optionally setting HF_ENDPOINT and PIP_INDEX_URL for mirrors; these are plausible for environments with restricted network access but could redirect package and model downloads to arbitrary endpoints if misconfigured. The skill does not request unrelated secrets.
Persistence & Privilege
always:false (no forced inclusion). Metadata includes a cron default to fetch every 30 minutes, which is coherent for a feed fetcher. The instructions also propose adding exports to ~/.bashrc (user-level persistence). These behaviors create ongoing network activity and local data (database/vector store) but do not request elevated system privileges.
Assessment
This skill appears to be what it says: a feed manager with optional semantic search. Before installing:

  1. Verify the 'uv' CLI referenced is the tool you expect and is trusted on your system.
  2. Inspect the feedship package source (PyPI page or the GitHub repo) before running the install, especially if using the git+https install.
  3. Be aware the 'ml' extras will pull large ML libraries (sentence-transformers, chromadb); they will consume disk space and may download models.
  4. Exercise caution when following the mirror instructions: only set HF_ENDPOINT or PIP_INDEX_URL to mirrors you trust, since a malicious mirror can serve poisoned packages or model files.
  5. Note that running the tool and the cron behavior will create local databases (article storage and embeddings) and schedule periodic network fetches; if you want no persistence, avoid adding exports to ~/.bashrc and review what feedship stores (config/db paths reported by feedship info).

If you need higher assurance, ask the publisher for an official homepage or signed releases, or audit the repository before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: uv
latest: vk97dq9kptndztg8rp7n0p9qmx584b8wm
133 downloads
0 stars
4 versions
Updated 3w ago
v1.5.0
MIT-0

Feedship Skill

Version: 1.5
For: Claude Code and OpenClaw compatible agents
Description: Manage information feeds, subscribe to RSS/GitHub sources, and search articles

Setup

Before using this skill, install feedship with ML and cloud extras:

uv tool install 'feedship[ml,cloudflare]' --python 3.12 --force

Note: the cloudflare extra provides scrapling (HTML fetching); the ml extra provides sentence-transformers + chromadb (semantic search). Both are required for full functionality.

China / Restricted Networks

For environments where PyPI or HuggingFace is not accessible, use mirrors:

# Add to ~/.bashrc for persistence
echo 'export HF_ENDPOINT=https://hf-mirror.com' >> ~/.bashrc
echo 'export PIP_INDEX_URL=https://mirrors.aliyun.com/pypi/simple/' >> ~/.bashrc
source ~/.bashrc

# Install
uv tool install 'feedship[cloudflare,ml]' --force
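If you would rather not persist the exports in ~/.bashrc, the mirror variables can be scoped to a single command with an env prefix. A minimal sketch; the `install_cmd` helper name is ours, not part of feedship:

```shell
# install_cmd: run one command with the mirror variables set, without
# touching ~/.bashrc. Illustrative helper, not a feedship feature.
install_cmd() {
  HF_ENDPOINT=https://hf-mirror.com \
  PIP_INDEX_URL=https://mirrors.aliyun.com/pypi/simple/ \
  "$@"
}

# usage:
# install_cmd uv tool install 'feedship[cloudflare,ml]' --force
```

Nothing persists in the shell after the command exits, so there is no cleanup step.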

Upgrade

# From PyPI (if accessible)
uv tool upgrade feedship

# From GitHub (latest commits)
uv tool install 'feedship @ git+https://github.com/yanpeipan/feedship.git' \
  --pip-args='-i https://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com' \
  --include-deps --force

After installation, verify with: feedship --version

First-time setup for semantic search: After installing, run feedship fetch --all to populate the vector database with article embeddings. Semantic search requires embeddings to be generated first (chromadb storage).
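The first-run sequence above can be wrapped in a small guard so it fails loudly when feedship is missing. A sketch only; the `first_fetch` name and the smoke-test query are ours:

```shell
# first_fetch: run the initial fetch plus a semantic-search smoke test.
# Assumes feedship (with the ml extra) is on PATH; fails early otherwise.
first_fetch() {
  if ! command -v feedship >/dev/null 2>&1; then
    echo "feedship not found; see the install steps above" >&2
    return 1
  fi
  feedship fetch --all &&                       # populate DB and embeddings
  feedship search "smoke test" --semantic --limit 1 >/dev/null &&
  echo "semantic search ready"
}
```

Run `first_fetch` once after installing; subsequent fetches can go straight to `feedship fetch --all`.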


Commands

feed

Manage RSS/Atom feeds and GitHub release trackers.

feed add

feedship feed add <url> [options]

Add a new feed by URL with automatic provider detection.

Options:

  • --auto-discover/--no-auto-discover — Enable feed auto-discovery (default: enabled)
  • --automatic on|off — Automatically add all discovered feeds (default: off)
  • --discover-depth N — Discovery crawl depth 1-10 (default: 1)
  • --weight FLOAT — Feed weight for semantic search (default: 0.3)

Examples:

feedship feed add https://example.com
feedship feed add https://github.com/python/cpython --automatic on
feedship feed add https://example.com --discover-depth 3
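For subscribing to many sources at once, a loop over a URL file works. A hedged sketch; the `bulk_add` helper and the feeds.txt format are ours, not a feedship feature:

```shell
# bulk_add FILE: subscribe to every URL in FILE (one per line; blank
# lines and '#' comments are skipped). Assumes the CLI is installed.
bulk_add() {
  while IFS= read -r url; do
    case "$url" in ''|'#'*) continue ;; esac
    feedship feed add "$url" || echo "failed: $url" >&2
  done < "$1"
}

# usage: bulk_add feeds.txt
```

Failed additions are reported but do not stop the loop, so one dead URL will not abort a long list.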

feed list

feedship feed list [-v]

List all subscribed feeds with status.

Options:

  • -v, --verbose — Show detailed output
  • --json — Output in JSON format for programmatic consumption

feed remove

feedship feed remove <feed-id>

Remove a subscribed feed by ID.


fetch

feedship fetch [--all|<feed-ids>] [--concurrency N]

Fetch new articles from subscribed feeds.

Options:

  • --all — Fetch all subscribed feeds
  • --concurrency N — Max concurrent fetches 1-100 (default: 10)

Examples:

feedship fetch --all
feedship fetch abc12345
feedship fetch abc12345 def67890 --concurrency 20

article

Manage and view fetched articles.

article list

feedship article list [options]

Options:

  • --limit N — Maximum articles (default: 20)
  • --feed-id <id> — Filter by feed ID
  • --since <date> — Start date (YYYY-MM-DD)
  • --until <date> — End date (YYYY-MM-DD)
  • --on <date> — Specific date (can repeat for multiple)
  • --json — Output in JSON format for programmatic consumption
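The --json output is convenient for post-filtering. The sketch below pipes a sample payload through python3; the record shape and the `id`/`title` field names are assumptions, so check the real schema with a quick `feedship article list --json` first:

```shell
# Filter article records by a title keyword. The sample payload and its
# field names (id, title) are assumptions; inspect real output first.
sample='[{"id":"abc12345","title":"Python 3.13 released"},
         {"id":"def67890","title":"Rust 1.80 notes"}]'
printf '%s' "$sample" | python3 -c '
import json, sys
for a in json.load(sys.stdin):
    if "python" in a["title"].lower():
        print(a["id"], a["title"])
'
```

The matching IDs can then be fed to `feedship article view`.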

article view

feedship article view <article-id>

View full article content and metadata.

article open

feedship article open <article-id>

Open article in system browser.

article related

feedship article related <article-id> [--limit N]

Find semantically related articles.


search

feedship search <query> [options]

Search articles using full-text or semantic search.

Options:

  • --limit N — Maximum results (default: 20)
  • --feed-id <id> — Filter by feed ID
  • --semantic — Use semantic (vector) search instead of keyword
  • --rerank — Apply Cross-Encoder reranking
  • --since <date> — Start date filter
  • --until <date> — End date filter
  • --on <date> — Specific date filter
  • --json — Output in JSON format for programmatic consumption

Examples:

feedship search "machine learning"
feedship search "python news" --semantic
feedship search "updates" --semantic --rerank

discover

feedship discover <url> [--discover-depth N]

Discover RSS/Atom/RDF feeds on a website without subscribing.

Options:

  • --discover-depth N — Crawl depth 1-10 (default: 1)
  • --json — Output in JSON format for programmatic consumption

Examples:

feedship discover example.com
feedship discover example.com --discover-depth 3

info

feedship info [options]

Display system information, configuration, and storage status.

Options:

  • --json — Output in JSON format for programmatic consumption

Output includes:

  • Version information
  • Configuration file location
  • Database/storage path
  • Feed count and article count
  • Installed extras (ml, cloudflare)

Examples:

feedship info
feedship info --json
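Scripts can read the storage location from the JSON output. A sketch against a sample payload; the `database_path` key is an assumption, so verify it against real `feedship info --json` output:

```shell
# Extract the database path from info --json output. The sample and the
# "database_path" key are assumptions; verify with `feedship info --json`.
sample='{"version":"1.5.0","database_path":"~/.feedship/db"}'
printf '%s' "$sample" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["database_path"])'
```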

Output Formats

Tables

The feed list, article list, search, and discover commands output Rich tables:

  • Magenta headers
  • Alternating row styles
  • Truncated columns with overflow indicators

Panels

article view uses Rich Panel:

  • Title: Article title
  • Subtitle: Feed name | Date
  • Content: Full article text

Progress Bars

fetch uses Rich progress bars showing:

  • Current feed being fetched
  • New articles count
  • Elapsed time

Common Patterns

Initial Setup

# Add a website feed
feedship feed add https://example.com --automatic on

# Fetch all feeds
feedship fetch --all

# View recent articles
feedship article list --limit 50

Daily Workflow

# Fetch new articles
feedship fetch --all

# Search for topics
feedship search "machine learning" --semantic

# Read an article
feedship article view abc12345

# Open in browser for full view
feedship article open abc12345

Feed Management

# List feeds
feedship feed list -v

# Remove stale feed
feedship feed remove old123

# Discover new feeds on site
feedship discover news-site.com --discover-depth 2

Scheduled Fetching (OpenClaw Best Practice)

For automated periodic fetching, use platform-specific schedulers:

macOS (LaunchAgent):

<!-- ~/Library/LaunchAgents/com.feedship.fetch.plist -->
<key>ProgramArguments</key>
<array>
  <string>/usr/local/bin/feedship</string>
  <string>fetch</string>
  <string>--all</string>
</array>
<key>StartInterval</key>
<integer>3600</integer>  <!-- every hour -->
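For reference, a complete LaunchAgent built around that fragment might look like the sketch below; the label, binary path, and log paths are assumptions (check the real binary location with `command -v feedship`):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.feedship.fetch</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/feedship</string>
    <string>fetch</string>
    <string>--all</string>
  </array>
  <key>StartInterval</key><integer>3600</integer>
  <key>StandardOutPath</key><string>/tmp/feedship-fetch.log</string>
  <key>StandardErrorPath</key><string>/tmp/feedship-fetch.err</string>
</dict>
</plist>
```

Load it with `launchctl load ~/Library/LaunchAgents/com.feedship.fetch.plist`.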

Linux (systemd timer):

# ~/.config/systemd/user/feedship.timer
[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
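A systemd timer needs a matching service unit to have any effect. A hedged sketch of the pair; the unit names and the ExecStart path are assumptions:

```ini
# ~/.config/systemd/user/feedship.service
[Unit]
Description=Fetch feedship feeds

[Service]
Type=oneshot
ExecStart=%h/.local/bin/feedship fetch --all

# ~/.config/systemd/user/feedship.timer
[Unit]
Description=Periodic feedship fetch

[Timer]
OnBootSec=5min
OnUnitActiveSec=1h

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now feedship.timer`.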

Cron:

0 * * * * feedship fetch --all >> ~/.feedship/fetch.log 2>&1
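If a fetch can outlast the interval, flock (from util-linux) prevents overlapping runs. A hedged variant of the crontab line above; the lock-file path is ours:

```
# Skip this run if the previous fetch still holds the lock.
0 * * * * flock -n ~/.feedship/fetch.lock feedship fetch --all >> ~/.feedship/fetch.log 2>&1
```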

OpenClaw Cron (every 30 minutes):

openclaw cron add \
  --name "feedship-fetch" \
  --agent agent \
  --cron "*/30 * * * *" \
  --tz Asia/Shanghai \
  --session isolated \
  --message "uv run --with feedship[ml,cloudflare] feedship fetch --all" \
  --timeout-seconds 1800

Optional Dependencies

ML Extra (pip install feedship[ml])

Required for semantic search and related articles:

  • sentence-transformers
  • chromadb
  • torch

Cloudflare Extra (pip install feedship[cloudflare])

For enhanced web scraping with:

  • browserforge
  • playwright
  • curl-cffi
