Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Journal Matchmaker

v1.0.0

Recommend suitable high-impact factor or domain-specific journals for manuscript submission based on abstract content. Trigger when user provides paper abstr...

by AIpoch (@aipoch-ai)
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign (high confidence)
Purpose & Capability
Name and description match the included files: SKILL.md documents running scripts/main.py and the repository contains a local journal database and field definitions used for matching. There are no unexpected credentials, binaries, or third-party services required.
Instruction Scope
SKILL.md instructs the agent/user to run the bundled Python script with an abstract and optional filters. The instructions and the code (the visible imports and local JSON references) operate on local files (references/*.json) and perform keyword/TF-IDF matching; I saw no instructions to read unrelated system files or environment variables, or to send data to external endpoints.
Install Mechanism
No install spec is provided (instruction-only with a bundled script). Dependencies are minimal (requirements.txt contains only 'dataclasses'). Nothing is downloaded or extracted at install time, so there is no high-risk install mechanism.
Credentials
The skill declares no required environment variables, credentials, or config paths. The code imports only standard libraries and reads local JSON reference files; there are no requests for unrelated secrets or access to external accounts.
Persistence & Privilege
The 'always' flag is false (the skill is not force-included). The skill does not request persistent system privileges or modify other skills' configuration. Its filesystem access is limited to reading/writing workspace files (per SKILL.md) and local reference data.
Assessment
This skill appears coherent and limited to local processing of abstracts using the provided journal database. Before installing or running it:

  1. Review the bundled references/journals.json if you rely on accurate impact factors; they can be stale.
  2. Avoid passing sensitive or unpublished full manuscripts to any third-party runtime; run the script in an isolated/sandboxed workspace if you want extra safety.
  3. If you allow passing filenames as --abstract, ensure the script treats them safely (SKILL.md mentions input validation — confirm the implementation prevents ../ path traversal when using file inputs).
  4. Treat its recommendations as advisory, not authoritative, and double-check journal scope and impact factor via official sources before submission.

Like a lobster shell, security has layers — review code before you run it.

Tags: Journal · Publication · latest
391 downloads · 0 stars · 1 version
Updated 7h ago
v1.0.0
MIT-0

Journal Matchmaker

Analyzes academic paper abstracts to recommend optimal journals for submission, considering impact factors, scope alignment, and domain expertise.

Use Cases

  • Find the best-fit journal for a new manuscript
  • Identify high-impact factor journals in specific research areas
  • Compare journal scopes against paper content
  • Discover domain-specific publication venues

Usage

python scripts/main.py --abstract "Your paper abstract text here" [--field "field_name"] [--min-if 5.0] [--count 5]

Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| --abstract | str | Yes | - | Paper abstract text to analyze |
| --field | str | No | Auto-detect | Research field (e.g., "computer_science", "biology") |
| --min-if | float | No | 0.0 | Minimum impact factor threshold |
| --max-if | float | No | None | Maximum impact factor (optional) |
| --count | int | No | 5 | Number of recommendations to return |
| --format | str | No | table | Output format: table, json, markdown |
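The flag set in the table above could be wired up with argparse along the following lines. The flag names and defaults come from this README; the implementation itself is a sketch, not the actual contents of scripts/main.py:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented flags; the real scripts/main.py may differ.
    p = argparse.ArgumentParser(description="Recommend journals for an abstract")
    p.add_argument("--abstract", required=True,
                   help="Paper abstract text (or a file path, if supported)")
    p.add_argument("--field", default=None,
                   help='Research field, e.g. "computer_science" (auto-detected if omitted)')
    p.add_argument("--min-if", type=float, default=0.0,
                   help="Minimum impact factor threshold")
    p.add_argument("--max-if", type=float, default=None,
                   help="Maximum impact factor (optional)")
    p.add_argument("--count", type=int, default=5,
                   help="Number of recommendations to return")
    p.add_argument("--format", choices=["table", "json", "markdown"],
                   default="table", help="Output format")
    return p

# Example invocation matching the documented CLI
args = build_parser().parse_args(
    ["--abstract", "Deep learning for protein folding", "--min-if", "5.0"]
)
print(args.min_if, args.count, args.format)  # → 5.0 5 table
```

argparse converts `--min-if`/`--max-if` to the attribute names `min_if`/`max_if` automatically, so no extra mapping is needed.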

Examples

# Basic usage
python scripts/main.py --abstract "This paper presents a novel deep learning approach..."

# Read the abstract from a file, filtering by field and minimum impact factor
python scripts/main.py --abstract "abstract.txt" --field "ai" --min-if 10.0 --count 10

# Output as JSON for integration
python scripts/main.py --abstract "..." --format json

How It Works

  1. Abstract Analysis: Extracts key terms, methodology, and research focus
  2. Field Classification: Identifies the primary research domain
  3. Journal Matching: Compares content against journal scopes and aims
  4. Impact Factor Filtering: Applies IF constraints if specified
  5. Ranking: Scores and ranks journals by relevance and impact
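Step 2 (field classification) could be as simple as keyword-overlap scoring against the field definitions in references/fields.json. The schema and keyword sets below are illustrative assumptions, not the shipped data:

```python
# Hypothetical fields.json content: each field maps to a keyword set.
FIELD_KEYWORDS = {
    "computer_science": {"algorithm", "learning", "neural", "network", "computation"},
    "biology": {"cell", "protein", "gene", "organism", "enzyme"},
}

def classify_field(abstract: str) -> str:
    # Score each field by how many of its keywords appear in the abstract,
    # then return the best-scoring field.
    tokens = set(abstract.lower().split())
    scores = {field: len(tokens & keywords)
              for field, keywords in FIELD_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify_field("A neural network learning algorithm for image tasks"))
# → computer_science
```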

Technical Details

  • Difficulty: Medium
  • Approach: Keyword extraction + journal database matching
  • Data Source: Journal metadata from references/journals.json
  • Algorithm: TF-IDF + cosine similarity for scope matching
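Since requirements.txt pulls in no scientific stack, the TF-IDF + cosine similarity step is presumably implemented with the standard library alone. A minimal sketch of that approach (not the actual code in scripts/main.py):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    # Build sparse TF-IDF vectors (dicts) over a small corpus, stdlib only.
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps shared terms nonzero
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * idf[t] for t in tf})
    return vectors

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

abstract = "deep learning for medical image segmentation"
scopes = ["machine learning and medical imaging", "plant biology and ecology"]
vecs = tf_idf_vectors([abstract] + scopes)
scores = [cosine(vecs[0], v) for v in vecs[1:]]  # similarity of abstract to each scope
```

Here the first scope shares terms with the abstract and scores above zero, while the unrelated scope scores zero; ranking journals by this score gives step 5.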

References

  • references/journals.json - Journal database with impact factors and scopes
  • references/fields.json - Research field classifications
  • references/scoring_weights.json - Algorithm tuning parameters
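Given that 'dataclasses' is the only declared dependency, loading these references plausibly looks like the sketch below. The journals.json schema (name, impact_factor, scope) and the sample values are assumptions for illustration, not the shipped data:

```python
import json
from dataclasses import dataclass

# Assumed journals.json structure -- the real file may use different keys.
SAMPLE = '[{"name": "Example Journal A", "impact_factor": 5.2, "scope": "machine learning"}]'

@dataclass
class Journal:
    name: str
    impact_factor: float
    scope: str

def load_journals(raw: str) -> list[Journal]:
    # Parse the JSON array into typed records for filtering and ranking.
    return [Journal(**entry) for entry in json.loads(raw)]

journals = load_journals(SAMPLE)
```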

Notes

  • Journal database should be updated periodically (quarterly recommended)
  • Impact factor data sourced from Journal Citation Reports (JCR)
  • Scope descriptions parsed from official journal websites
  • For emerging fields, manual curation may be needed

Risk Assessment

| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python script executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Reads input files, writes output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |

Security Checklist

  • No hardcoded credentials or API keys
  • No unauthorized file system access (../)
  • Output does not expose sensitive information
  • Prompt injection protections in place
  • Input file paths validated (no ../ traversal)
  • Output directory restricted to workspace
  • Script execution in sandboxed environment
  • Error messages sanitized (no stack traces exposed)
  • Dependencies audited
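The path-traversal item above can be checked against a pattern like the following, which resolves the user-supplied path and rejects anything that escapes the workspace. This is a generic sketch, not the skill's actual validation code:

```python
from pathlib import Path

def safe_read(user_path: str, workspace: Path) -> str:
    # Resolve the target against the workspace root, then confirm it is
    # still inside the workspace -- this rejects ../ traversal and
    # symlink escapes before any file is opened.
    root = workspace.resolve()
    target = (root / user_path).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise ValueError(f"Path escapes workspace: {user_path}")
    return target.read_text(encoding="utf-8")
```

Validating after `resolve()` matters: a naive check for a literal "../" substring misses traversal hidden behind symlinks or redundant path segments.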

Prerequisites

# Python dependencies
pip install -r requirements.txt

Evaluation Criteria

Success Metrics

  • Successfully executes main functionality
  • Output meets quality standards
  • Handles edge cases gracefully
  • Performance is acceptable

Test Cases

  1. Basic Functionality: Standard input → Expected output
  2. Edge Case: Invalid input → Graceful error handling
  3. Performance: Large dataset → Acceptable processing time

Lifecycle Status

  • Current Stage: Draft
  • Next Review Date: 2026-03-06
  • Known Issues: None
  • Planned Improvements:
    • Performance optimization
    • Additional feature support
