Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

paper-cluster-survey-v2-2

v2.2.0

Extract structured paper records from one or more local PDFs, arXiv links, DOI links, or general paper URLs, then classify the papers and write an academic survey.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for huang888596/paper-cluster-survey-v2-2.

Prompt preview: Install & Setup
Install the skill "paper-cluster-survey-v2-2" (huang888596/paper-cluster-survey-v2-2) from ClawHub.
Skill page: https://clawhub.ai/huang888596/paper-cluster-survey-v2-2
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install paper-cluster-survey-v2-2

ClawHub CLI

npx clawhub@latest install paper-cluster-survey-v2-2

Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)

Purpose & Capability

Name and description match the included scripts and SKILL.md. The scripts implement normalization, fetching/extraction, and review rendering, which are the stated capabilities. Optional tooling (pdftotext, mutool, python3+pypdf) is referenced for higher-quality PDF extraction but is not required to run the scripts.

Instruction Scope

The SKILL.md confines actions to source normalization, paper extraction, classification, and review drafting. Runtime scripts will (a) read local PDF paths you provide, (b) fetch HTTP/HTTPS URLs you provide (following redirects), and (c) run local PDF extraction tools if available. There is no instruction to read unrelated system files or to transmit data to third-party endpoints other than the original paper URLs. Note: because the extractor fetches arbitrary user-supplied URLs and follows redirects, untrusted URLs can trigger network access (including to internal endpoints) and pull their contents into the pipeline.

Install Mechanism

No install spec is provided (instruction-only skill). The repository contains Node.js scripts (ESM) but nothing that downloads remote install artifacts or executes remote installers. This is a low-risk install surface; running the scripts requires Node.js available in the environment.

Credentials

The skill requests no environment variables, credentials, or config paths. The scripts use local filesystem access for user-supplied PDF paths and temporary directories for downloads, which is expected and proportional to the purpose.

Persistence & Privilege

The skill does not request always:true and does not modify other skills or system-wide agent settings. It runs as transient scripts and writes temporary files when downloading PDFs; this is normal for the task.

Assessment

This skill appears coherent and implements what it claims: normalizing sources, extracting text and metadata from PDFs and paper URLs, classifying, and rendering a review. Before using it, consider:

  • Provide only trusted URLs and local files. The extractor will fetch arbitrary HTTP(S) URLs and follow redirects, which can reach internal network endpoints (an SSRF-like risk) and return their contents into the review pipeline.
  • High-quality PDF extraction can depend on optional local tools (pdftotext, mutool, or python3+pypdf); if those are not installed, the script falls back to less-accurate methods.
  • The scripts invoke child processes and write temporary files under the OS temp directory; run them in a sandbox or an environment you control if you are concerned about sensitive data.
  • No credentials are requested by the skill.

If you plan to install, ensure Node.js 18+ is available and review any inputs (URLs/paths) you hand to the skill.

Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

  • Shell command execution detected (child_process) at scripts/extract-paper-records.mjs:162.
  • File read combined with network send (possible exfiltration) at scripts/extract-paper-records.mjs:121.

Latest version: vk971vabk0hmk3crvf8c868dwxx837wfe (v2.2.0)
Downloads: 192
Stars: 0
Versions published: 1
Updated: 4h ago
License: MIT-0

Paper Cluster Survey V2.2

Overview

Turn raw paper URLs and PDFs into usable review inputs. Extract structured metadata and text evidence first, then classify the papers, produce a classification table, and write a review that follows common academic survey conventions instead of a rigid fill-in-the-blanks template.

Workflow

1. Normalize the source set

  • Accept multiple local PDF paths, arXiv URLs, DOI URLs, and general paper URLs.
  • Use scripts/normalize-sources.mjs when the source set is mixed or should be stored as a reusable manifest (a sketch of a possible manifest follows this list).
  • Preserve the original source string for traceability.
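
As a rough sketch, a manifest written by scripts/normalize-sources.mjs might contain entries like the ones below. The field names and layout are assumptions for illustration; only the idea of a JSON manifest that preserves the original source string comes from this document.

  [
    {
      "source": "https://arxiv.org/abs/XXXX.XXXXX",
      "type": "arxiv"
    },
    {
      "source": "./papers/example-paper.pdf",
      "type": "pdf"
    }
  ]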

2. Extract paper records before reasoning

  • Use scripts/extract-paper-records.mjs to turn PDFs and URLs into structured records before classification.
  • The extraction pass should gather as much of the following as possible (a sketch of one extracted record follows this list):
    • title
    • authors
    • year
    • venue
    • abstract
    • task
    • method
    • datasets
    • metrics
    • main_contribution
    • limitations
    • source
    • extraction_notes
  • Treat extracted records as the primary context for classification and survey drafting.
  • If important fields are missing, only fall back to direct source reading for the specific missing details.
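
A single extracted record might look roughly like the sketch below. The field names come from the list above; the value types and contents are illustrative assumptions, not output from a real run of the script.

  {
    "title": "Placeholder Paper Title",
    "authors": ["A. Author", "B. Author"],
    "year": 2024,
    "venue": "Placeholder Venue",
    "abstract": "One- or two-sentence summary taken from the paper.",
    "task": "named task the paper addresses",
    "method": "short description of the approach",
    "datasets": ["Dataset-1"],
    "metrics": ["metric-1"],
    "main_contribution": "one-sentence statement of the contribution",
    "limitations": "limitations stated or implied by the paper",
    "source": "https://arxiv.org/abs/XXXX.XXXXX",
    "extraction_notes": "note on how the fields were obtained"
  }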

Read extraction-pipeline.md when deciding how much to trust the extracted fields and when to re-open the raw source.

3. Verify evidence quality

  • Do not classify from titles alone when abstract or body text is available.
  • Prefer the abstract, introduction, and method sections.
  • Mark uncertain inferences explicitly.
  • If the extractor had to fall back to weak methods, keep claims conservative.

4. Design the classification scheme

  • Produce a classification scheme before writing the review (a minimal example follows this list).
  • Prefer task-based categories first.
  • If tasks are too similar, classify by method family.
  • Use application-domain categories only when they best explain the corpus.
  • Keep the taxonomy shallow unless the corpus is large.
  • Assign one primary category to each paper unless the user explicitly wants multi-label grouping.
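
For a small corpus, the scheme can stay as simple as the sketch below. The shape and category names are hypothetical and nothing in the bundled scripts requires this exact format; it only illustrates a shallow, task-first taxonomy with one primary category per paper.

  {
    "basis": "task",
    "categories": [
      { "name": "Task family A", "definition": "papers whose primary task is A" },
      { "name": "Task family B", "definition": "papers whose primary task is B" },
      { "name": "Other and emerging tasks", "definition": "papers that fit neither family cleanly" }
    ]
  }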

Read taxonomy-guidelines.md when the category design is ambiguous.

5. Output the classification table

  • Always provide one classification table before the review (an illustrative layout follows this list).
  • Include:
    • paper
    • year
    • category
    • rationale
    • evidence used
  • Keep rationales brief and evidence-based.
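
A purely hypothetical layout, using placeholder papers only; real tables must use the actual corpus and the evidence gathered in step 2.

  Paper          Year  Category       Rationale                            Evidence used
  Placeholder A  2023  Task family A  Abstract states the target task      abstract, introduction
  Placeholder B  2024  Task family B  Method section defines the approach  abstract, method section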

6. Decide the review shape

Default rule:

  • Write one integrated literature review for the entire corpus after the classification table.

Exception:

  • If the user explicitly asks for "each category write a survey", "分别写综述" (i.e. "write separate reviews for each category"), "per-category review", or equivalent, write separate review sections for each category.

7. Write the review in academic survey style

The review must read like a normal survey paper, not a bullet summary dump.

  • Use a concise academic title.
  • Include an abstract when the output is formal enough to justify it.
  • Include keywords when they help position the review.
  • Use an introduction that explains background, significance, scope, source selection, and the organizing logic of the review.
  • Organize the main body by the most defensible basis for the corpus:
    • chronology
    • research themes
    • method families
    • viewpoints or schools
  • End with a conclusion or concluding discussion.
  • Add future directions, outlook, or open problems when the corpus supports them.
  • List references in GB/T 7714 style when bibliographic data is available.
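
For orientation, a GB/T 7714 journal-article entry typically follows a pattern like the placeholder below (illustrative format only, not a real reference):

  [1] AUTHOR A, AUTHOR B. Title of the paper[J]. Journal Name, 2023, 12(3): 45-67.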

Typical sections in a strong review are:

  • title
  • abstract
  • keywords
  • introduction
  • themed main sections
  • discussion, conclusion, or both
  • future directions or open problems when useful
  • references

Not every output needs every section. Match the structure to the user's request, the corpus size, and the field while staying recognizably review-like.

Read review-paper-style.md when drafting the prose review or choosing section structure.

8. Keep the prose review-like

  • Prefer connected academic prose over bullet dumps.
  • Use tables only to support comparison, not replace the review.
  • Do not invent datasets, metrics, or reference details.
  • If extracted metadata is incomplete, keep partial references and state what is missing.

Output Contract

Return results in this order unless the user asks otherwise:

  1. Corpus summary
  2. Classification scheme
  3. Classification table
  4. Formal review article
  5. References

If the user wants structured output, read output-schema.md.

Bundled Scripts

scripts/normalize-sources.mjs

  • Normalize mixed PDF and URL inputs into a JSON manifest.
  • Use when the source set is large, mixed, or should be reused.
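
A plausible invocation, assuming the script takes sources as command-line arguments and prints the manifest to stdout (both are assumptions; check the script's own usage output):

  node scripts/normalize-sources.mjs ./papers/example-paper.pdf https://arxiv.org/abs/XXXX.XXXXX > sources-manifest.json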

scripts/extract-paper-records.mjs

  • Fetch URLs, resolve likely paper metadata, and extract paper text evidence from URLs or PDFs.
  • Prefer running this script before asking the model to reason over a large source set.
  • Use its output as the main context object for classification and review drafting.
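
A plausible way to chain it after normalization; the input path and output redirection here are assumptions, not documented behavior:

  node scripts/extract-paper-records.mjs sources-manifest.json > paper-records.json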

scripts/render-formal-review-template.mjs

  • Render a flexible academic-review scaffold from structured paper records.
  • Default to one integrated review.
  • Use --per-category only when the user explicitly asks for separate category reviews.
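
--per-category is the only flag documented here; the positional records file in the sketch below is an assumption:

  node scripts/render-formal-review-template.mjs paper-records.json
  node scripts/render-formal-review-template.mjs paper-records.json --per-category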

Quality Bar

  • Run extraction before classification unless the user already gave structured paper records.
  • Keep classification and review consistent with extracted evidence.
  • Use raw source re-reading only to fill important gaps.
  • If the extractor had to rely on weak fallbacks, say so.
