Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

agentic-paper-digest-skill

v1.0.2

Fetches and summarizes recent papers from arXiv and Hugging Face, providing JSON digests and optional local API access for customizable research updates.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for modestyrichards/modesty-agentic-paper-digest-skill.

Prompt preview: Install & Setup
Install the skill "agentic-paper-digest-skill" (modestyrichards/modesty-agentic-paper-digest-skill) from ClawHub.
Skill page: https://clawhub.ai/modestyrichards/modesty-agentic-paper-digest-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install modesty-agentic-paper-digest-skill

ClawHub CLI


npx clawhub@latest install modesty-agentic-paper-digest-skill
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (arXiv/Hugging Face paper digests) align with the declared runtime requirements: Python, network access, and an LLM API key (SKILLBOSS_API_KEY). Requesting an LLM API key is proportional for summarization. However, registry metadata/README references a different repo owner than the bootstrap scripts (README suggests ModestyRichards, scripts clone matanle51), which is an inconsistency worth investigating.
Instruction Scope
SKILL.md instructs the agent to fetch the repo, read config files (topics/settings/affiliations), ask the user about preferences, and run either a CLI or local API. Those steps are within the stated purpose. The agent is instructed to read and update local config files under PROJECT_DIR and to load a .env containing SKILLBOSS_API_KEY — this entails local file read/write but not unexpected for this skill.
Install Mechanism
There is no registry install spec, but included scripts (bootstrap.sh) will clone or download a ZIP from GitHub (main branch) and then pip install -r requirements.txt into a freshly created .venv. Downloading the repository and pip-installing remote dependencies writes code to disk and executes package installers — a higher-risk install pattern. The GitHub URL is a direct main-branch zip (not an immutable release), so contents could change between review and install.
Credentials
The only required credential is SKILLBOSS_API_KEY (mapped to LITELLM API variables), which is consistent with needing LLM access for summarization. The skill uses a local .env file to store this key. No unrelated credentials or broad host/system config paths are requested.
Persistence & Privilege
The always flag is false, and the skill does not request persistent global privileges. It creates files under PROJECT_DIR (virtualenv, sqlite DB at data/papers.sqlite3, .env), but does not modify other skills or global agent configuration. The skill can run autonomously per platform defaults (not in itself a flag).
What to consider before installing
This skill appears to do what it claims (download a paper-digest pipeline and run it locally), but it does so by cloning/downloading code from a GitHub repo and running pip install on its requirements. Before installing or running bootstrap.sh:

  • Inspect the GitHub repository contents (requirements.txt, the paper_finder package, and any scripts) to ensure you trust the source.
  • Prefer cloning a pinned release/tag rather than the main branch if you can.
  • Review requirements.txt for unusual packages or post-install scripts.
  • Run the bootstrap and server inside an isolated environment (container or VM), and avoid putting a long-lived high-privilege API key in an uncontrolled .env; use a scoped/limited key if possible.
  • Resolve the metadata mismatch (README vs bootstrap repo owner): ask the publisher which upstream repo is authoritative.

If you cannot validate the remote repo and its dependencies, treat this skill as potentially risky and avoid running its install scripts on sensitive hosts.

Like a lobster shell, security has layers — review code before you run it.

76 downloads
0 stars
3 versions
Updated 1w ago
v1.0.2
MIT-0

name: agentic-paper-digest-skill
description: Fetches and summarizes recent arXiv and Hugging Face papers with Agentic Paper Digest. Use when the user wants a paper digest, a JSON feed of recent papers, or to run the arXiv/HF pipeline.
homepage: https://github.com/matanle51/agentic_paper_digest
compatibility: Requires Python 3, network access, and either git or curl/wget for bootstrap. LLM access via SKILLBOSS_API_KEY (SkillBoss API Hub).
metadata: {"clawdbot":{"requires":{"anyBins":["python3","python"]}}}

Agentic Paper Digest

When to use

  • Fetch a recent paper digest from arXiv and Hugging Face.
  • Produce JSON output for downstream agents.
  • Run a local API server when a polling workflow is needed.

Prereqs

  • Python 3 and network access.
  • LLM access via SKILLBOSS_API_KEY (SkillBoss API Hub — automatically routes to the best available model).
  • git is optional for bootstrap; otherwise curl/wget (or Python) is used to download the repo.

Get the code and install

  • Preferred: run the bootstrap helper script. It uses git when available or falls back to a zip download.
bash "{baseDir}/scripts/bootstrap.sh"
  • Override the clone location by setting PROJECT_DIR.
PROJECT_DIR="$HOME/agentic_paper_digest" bash "{baseDir}/scripts/bootstrap.sh"

Run (CLI preferred)

bash "{baseDir}/scripts/run_cli.sh"
  • Pass through CLI flags as needed.
bash "{baseDir}/scripts/run_cli.sh" --window-hours 24 --sources arxiv,hf

Run (API optional)

bash "{baseDir}/scripts/run_api.sh"
  • Trigger runs and read results.
curl -X POST http://127.0.0.1:8000/api/run
curl http://127.0.0.1:8000/api/status
curl http://127.0.0.1:8000/api/papers
  • Stop the API server if needed.
bash "{baseDir}/scripts/stop_api.sh"

Outputs

  • CLI --json prints run_id, seen, kept, window_start, and window_end.
  • Data store: data/papers.sqlite3 (under PROJECT_DIR).
  • API: POST /api/run, GET /api/status, GET /api/papers, GET/POST /api/topics, GET/POST /api/settings.
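
As a sketch, a downstream agent could consume the --json summary fields listed above like this (the field names come from the Outputs list; the sample values are invented for illustration):

```python
import json

def summarize_run(cli_json: str) -> str:
    """Condense the digest CLI's --json output into a one-line report."""
    run = json.loads(cli_json)
    return (f"run {run['run_id']}: kept {run['kept']} of {run['seen']} papers "
            f"({run['window_start']} to {run['window_end']})")

# Sample output shape: field names match the Outputs list, values are made up.
sample = ('{"run_id": 7, "seen": 120, "kept": 9, '
          '"window_start": "2025-05-01T00:00:00Z", "window_end": "2025-05-02T00:00:00Z"}')
print(summarize_run(sample))
```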

Configuration

Config files live in PROJECT_DIR/config. Environment variables can be set in the shell or via a .env file. The wrappers here auto-load .env from PROJECT_DIR (override with ENV_FILE=/path/to/.env).

Environment (.env or exported vars)

  • SKILLBOSS_API_KEY: required — authenticates all LLM calls via SkillBoss API Hub (https://api.skillboss.co/v1/pilot).
  • LITELLM_MODEL_RELEVANCE, LITELLM_MODEL_SUMMARY: models for relevance and summarization (summary defaults to relevance model if unset). Leave unset to let SkillBoss API Hub auto-route.
  • LITELLM_TEMPERATURE_RELEVANCE, LITELLM_TEMPERATURE_SUMMARY: lower for more deterministic output.
  • LITELLM_MAX_RETRIES: retry count for LLM calls.
  • LITELLM_DROP_PARAMS=1: drop unsupported params to avoid provider errors.
  • WINDOW_HOURS, APP_TZ: recency window and timezone.
  • ARXIV_CATEGORIES: comma-separated categories (default includes cs.CL,cs.AI,cs.LG,stat.ML,cs.CR).
  • ARXIV_API_BASE, HF_API_BASE: override source endpoints if needed.
  • ARXIV_MAX_RESULTS, ARXIV_PAGE_SIZE: arXiv paging limits.
  • MAX_CANDIDATES_PER_SOURCE: cap candidates per source before LLM filtering.
  • FETCH_TIMEOUT_S, REQUEST_TIMEOUT_S: source fetch and per-request timeouts.
  • ENABLE_PDF_TEXT=1: include first-page PDF text in summaries; requires PyMuPDF (pip install pymupdf).
  • DATA_DIR: location for papers.sqlite3.
  • CORS_ORIGINS: comma-separated origins allowed by the API server (UI use).
  • Path overrides: TOPICS_PATH, SETTINGS_PATH, AFFILIATION_BOOSTS_PATH.
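
A minimal .env for a typical setup might look like the fragment below. All variable names come from the list above; the key value is a placeholder and the numeric values are illustrative, not recommended defaults:

```shell
# .env: auto-loaded from PROJECT_DIR by the wrapper scripts
SKILLBOSS_API_KEY=sk-placeholder   # use a scoped/limited key
WINDOW_HOURS=24
ARXIV_CATEGORIES=cs.CL,cs.AI,cs.LG
ARXIV_MAX_RESULTS=200
ENABLE_PDF_TEXT=1                  # requires: pip install pymupdf
```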

Config files

  • config/topics.json: list of topics with id, label, description, max_per_topic, and keywords. The relevance classifier must output topic IDs exactly as defined here. max_per_topic also caps results in GET /api/papers when apply_topic_caps=1.
  • config/settings.json: overrides fetch limits (arxiv_max_results, arxiv_page_size, fetch_timeout_s, max_candidates_per_source). Updated via POST /api/settings.
  • config/affiliations.json: list of {pattern, weight} boosts applied by substring match over affiliations. Matching weights are summed and the total is capped at 1.0. Invalid JSON disables boosts, so keep the file strict JSON (no trailing commas).
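
The additive, capped boost behavior described for config/affiliations.json can be sketched as follows (the function name and sample boost entries are illustrative, not the skill's actual code):

```python
def affiliation_boost(affiliations: str, boosts: list[dict]) -> float:
    """Sum the weight of every boost whose pattern appears as a substring
    of the affiliation string, then cap the total at 1.0."""
    total = sum((b["weight"] for b in boosts if b["pattern"] in affiliations), 0.0)
    return min(total, 1.0)

# Hypothetical boost entries in the {pattern, weight} shape described above.
boosts = [{"pattern": "MIT", "weight": 0.6}, {"pattern": "DeepMind", "weight": 0.7}]
print(affiliation_boost("MIT CSAIL; Google DeepMind", boosts))  # 1.0 (0.6 + 0.7, capped)
print(affiliation_boost("Stanford University", boosts))         # 0.0 (no pattern matches)
```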

Mandatory workflow (follow step-by-step)

  1. First, you MUST open and read the configuration files from the downloaded copy of the GitHub repo (https://github.com/matanle51/agentic_paper_digest):
    • Load config/topics.json, config/settings.json, and config/affiliations.json (if present).
    • Note current topic IDs, caps, and fetch limits before asking the user to change them.
  2. ASK THE USER TO PROVIDE THEIR PREFERENCES ABOUT THE FOLLOWING (HELP THE USER):
    • Topics of interest → update config/topics.json (topics[].id/label/description/keywords, max_per_topic). Show current defaults and ask whether to keep or change them.
    • Time window (hours) → set WINDOW_HOURS (or pass --window-hours to the CLI) only if the user cares; otherwise keep the default of 24h.
    • ASK THE USER TO FILL IN THE FOLLOWING PARAMETERS (explain to the user what each one controls): ARXIV_CATEGORIES, ARXIV_MAX_RESULTS, ARXIV_PAGE_SIZE, MAX_CANDIDATES_PER_SOURCE. Show the current values and ask whether to keep the defaults.
    • Model/provider → set SKILLBOSS_API_KEY (SkillBoss API Hub, https://api.skillboss.co/v1/pilot). The hub auto-routes to the best model. Optionally set LITELLM_MODEL_RELEVANCE/LITELLM_MODEL_SUMMARY to pin specific models.
    • Do NOT ask by default: timezone, quality vs cost, timeouts, PDF text, affiliation biasing, sources list. Use defaults unless the user requests changes.
  3. Confirm workspace path: Ask where to clone/run. Default to PROJECT_DIR="$HOME/agentic_paper_digest" if the user doesn't care. Never hardcode /Users/... paths.
  4. Bootstrap the repo: Run the bootstrap script (unless the repo already exists and the user says to skip).
  5. Create or verify .env:
    • If .env is missing, create it from .env.example (in the repo), then ask the user to fill keys and any requested preferences.
    • Ensure SKILLBOSS_API_KEY is set before running. The run scripts automatically forward it to LiteLLM via LITELLM_API_BASE and LITELLM_API_KEY.
  6. Apply config changes:
    • Edit JSON files directly (or use POST /api/topics and POST /api/settings if running the API).
  7. Run the pipeline:
    • Prefer scripts/run_cli.sh for one-off JSON output.
    • Use scripts/run_api.sh only if the user explicitly asks for UI/API access or polling.
  8. Report results:
    • If results are sparse, suggest increasing WINDOW_HOURS, ARXIV_MAX_RESULTS, or broadening topics.

Getting good results

  • Help the user define focused, mutually exclusive topics so the classifier can choose the right IDs.
  • SkillBoss API Hub auto-routes to the best available model; leave LITELLM_MODEL_RELEVANCE unset for balanced cost/quality, or set it to pin a specific model.
  • Increase WINDOW_HOURS or ARXIV_MAX_RESULTS when results are sparse, or lower them if results are too noisy.
  • Tune ARXIV_CATEGORIES to your research domains.
  • Enable PDF text (ENABLE_PDF_TEXT=1) when abstracts are too thin.
  • Use modest affiliation weights to bias ranking without swamping relevance.
  • BE PROACTIVE AND HELP THE USER TUNE THE SKILL FOR GOOD RESULTS!

Troubleshooting

  • Port 8000 busy: run bash "{baseDir}/scripts/stop_api.sh" or pass --port to the API command.
  • Empty results: increase WINDOW_HOURS or verify SKILLBOSS_API_KEY in .env.
  • Missing API key errors: export SKILLBOSS_API_KEY in the shell before running.
