Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Paper Impact Analyzer

v1.1.0

Analyze academic paper impact using multiple data sources (arXiv, GitHub, OpenAlex, Semantic Scholar). Input an arXiv ID and get a multi-dimensional impact a...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for haataa/paper-impact-analyzer.

Prompt Preview: Install & Setup
Install the skill "Paper Impact Analyzer" (haataa/paper-impact-analyzer) from ClawHub.
Skill page: https://clawhub.ai/haataa/paper-impact-analyzer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install paper-impact-analyzer

ClawHub CLI


npx clawhub@latest install paper-impact-analyzer
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the implementation: the code fetches arXiv metadata, searches/queries GitHub, queries OpenAlex and Semantic Scholar, and synthesizes a rating. Required runtime (python) and the lack of API keys align with the declared design (keyless APIs). Duplicate files (root and skills/ copies) look like packaging redundancy but are consistent with the skill purpose.
Instruction Scope
SKILL.md instructs only to run the included Python script with arXiv IDs (no other file or env access). However, the script creates an SSL context that disables certificate verification (SSL_CTX.verify_mode = ssl.CERT_NONE and check_hostname = False) and uses an http:// arXiv endpoint. That weakens transport security for all outbound HTTPS calls made by the script, making it susceptible to man-in-the-middle attacks on untrusted networks. This behavior is not called out in SKILL.md.
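The risky construction the report describes, and its fix, look roughly like this in Python (a sketch of the pattern, not the script's exact code):

```python
import ssl

# Pattern described in the scan report (INSECURE): certificate checks
# are disabled, so any HTTPS peer is accepted without verification.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False          # must be cleared first
insecure_ctx.verify_mode = ssl.CERT_NONE     # disables chain validation

# Fix: use the default context unchanged. It verifies the server
# certificate against the system trust store and checks the hostname.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname is True
```

Passing the default context (or no custom context at all) to `urllib.request.urlopen` restores normal TLS verification.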
Install Mechanism
No install spec provided (instruction-only install). The skill includes Python source but does not try to install external packages or download code at runtime. This is low-risk from an installer perspective.
Credentials
The skill requests no environment variables or credentials and uses public, keyless APIs. The set of external endpoints it contacts (arXiv, api.github.com, api.openalex.org, Semantic Scholar) is proportional to its stated purpose.
Persistence & Privilege
The skill is not always-enabled, does not request elevated persistence, and there is no evidence it modifies other skills or system-wide configurations. Running the script makes network calls but does not persist credentials or reconfigure the agent.
Assessment
This skill appears to be internally consistent with its description: it runs a Python script that queries arXiv, GitHub, OpenAlex, and Semantic Scholar and prints a Markdown impact report. Before running it, review and consider the following:

  1. The script intentionally disables SSL certificate verification for outbound HTTPS requests and uses plain HTTP for the arXiv query, which exposes you to man-in-the-middle risk on untrusted networks. If you will run it on a laptop or cloud VM, either (a) modify the script to remove the SSL bypass (use the default SSL context) and switch the arXiv endpoint to HTTPS, or (b) run it only on a network you trust.
  2. The script makes multiple external network requests; expect rate limiting from GitHub and Semantic Scholar when unauthenticated, and batch jobs may hit those limits.
  3. The package contains duplicate files (root and skills/ copies), which is likely harmless but unusual; you may prefer to keep only one copy.
  4. If you are concerned about privacy or data leakage, inspect the full script locally before execution. It does not read local environment variables or files in the visible portions, but verify the truncated parts if you plan to run it.
  5. Run the script in an isolated environment (container or VM) if you want to limit risk.

If you want, I can point out the exact lines to change to re-enable certificate verification and use HTTPS for arXiv.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📊 Clawdis
Bins: python
Latest: vk9732wexvank3zt2tfwhb60yqd83pex1
116 downloads · 0 stars · 2 versions
Updated 1mo ago
v1.1.0 · MIT-0

Paper Impact Analyzer

Multi-source, fault-tolerant academic paper impact analysis.

When to Use

  • Evaluating a paper's academic influence or community adoption
  • Comparing impact across multiple papers
  • Deciding whether a paper is worth reading based on external signals
  • Checking GitHub stars, citation counts, venue acceptance for a paper
  • Assessing author credibility (h-index) for a paper
  • Batch-analyzing papers in a survey or literature review

How to Use

Single paper

Run the analysis script with an arXiv ID:

python scripts/analyze.py 2603.04948

Multiple papers

Pass multiple arXiv IDs separated by spaces:

python scripts/analyze.py 2603.04948 2602.15922 2603.05488 2602.22661

Output

The script prints a structured Markdown impact report for each paper, including:

Dimension          Example
Publication date   2026-03-05 (20 days ago)
Venue acceptance   ICLR 2026
GitHub repo        2,263 stars / 214 forks
Citation count     12 (OpenAlex) / 15 (S2)
Author h-index     First author h=23
Affiliations       UC Berkeley, UT Austin

Plus a synthesized overall rating (S/A/B/C/D) with confidence level and data completeness.
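As an illustration only (the thresholds, weights, and labels here are invented, not taken from scripts/analyze.py), a rating synthesis of this shape could look like:

```python
# Hypothetical sketch: map a 0-100 impact score plus data completeness
# to a letter grade (S/A/B/C/D) with a confidence label.
def synthesize_rating(score: float, sources_available: int,
                      sources_total: int = 4):
    """Return (grade, confidence, completeness) for one paper."""
    for grade, threshold in (("S", 85), ("A", 70), ("B", 50), ("C", 30)):
        if score >= threshold:
            break
    else:
        grade = "D"  # below every threshold
    completeness = sources_available / sources_total
    confidence = ("high" if completeness >= 0.75
                  else "medium" if completeness >= 0.5
                  else "low")
    return grade, confidence, completeness

print(synthesize_rating(78, 4))  # ('A', 'high', 1.0)
print(synthesize_rating(42, 2))  # ('C', 'medium', 0.5)
```

The completeness-driven confidence mirrors the report's behavior of still rating a paper when some sources fail, just with lower certainty.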

Data Sources (Priority Order)

  1. arXiv API — paper metadata, authors, abstract (always available)
  2. GitHub API — repo stars, forks, issues (most reliable external signal)
  3. OpenAlex API — citation count (free, no API key needed)
  4. Semantic Scholar API — citations, influential citations, author h-index (rate-limited)

Each source can fail independently without aborting the others; the script always produces a report from whatever data is available.
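A minimal sketch of that fault-isolation pattern, assuming a stdlib-only fetch as the README implies (this is not the script's actual code, and the URL below is illustrative):

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def fetch_json(url: str, timeout: float = 10.0):
    """Fetch and parse JSON; return None on any network or parse error
    so one failing source never aborts the whole report."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (URLError, TimeoutError, ValueError, OSError):
        return None

# Each source is fetched independently; failures become None entries
# and the report is built from whichever sources succeeded.
sources = {"openalex": "https://api.openalex.org/works"}  # illustrative
results = {name: fetch_json(url) for name, url in sources.items()}
```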

Design Philosophy

  • Graceful degradation: Every API call is wrapped in try/except with timeouts. If Semantic Scholar returns 429, the report still includes arXiv + GitHub + OpenAlex data.
  • Age-aware scoring: Papers < 3 months old are scored primarily on GitHub + venue + team. Papers > 1 year old are scored primarily on citations.
  • No API keys required: All data sources used are free and keyless.
  • Single file: The entire implementation is in scripts/analyze.py with zero external dependencies (stdlib only).
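The age-aware weighting idea above could be sketched as follows; the age cutoffs match the stated design, but the weight values are purely illustrative, not the script's actual numbers:

```python
from datetime import date

# Illustrative sketch: young papers lean on GitHub/venue/team signals,
# old papers lean on citations; weights are invented for this example.
def signal_weights(published: date, today: date) -> dict:
    age_days = (today - published).days
    if age_days < 90:    # under ~3 months: citations are not meaningful yet
        return {"github": 0.5, "venue": 0.3, "team": 0.2, "citations": 0.0}
    if age_days > 365:   # over a year: citations dominate
        return {"github": 0.1, "venue": 0.1, "team": 0.1, "citations": 0.7}
    return {"github": 0.3, "venue": 0.2, "team": 0.2, "citations": 0.3}
```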
