S.H.I.T底刊摘要 (S.H.I.T Bottom-Tier Journal Abstracts)

v0.1.2

Automates extraction and AI-based analysis of research papers from shitjournal.org, capturing titles, abstracts, DOIs, and publication dates in JSON format.
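The fields named above might serialize to a record shape like the following; this layout is an illustration for review purposes, not documented output of the skill:

```json
[
  {
    "title": "Example article title",
    "abstract": "First paragraph of the abstract...",
    "doi": "10.1234/example.2024.001",
    "date": "2024-05-01"
  }
]
```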

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description claim to scrape and analyze papers from shitjournal.org; the code and SKILL.md use Playwright + JSDOM to render and parse the site, which is appropriate for that purpose. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md explicitly instructs installing `playwright` and `jsdom` and running index.js. At runtime, index.js only browses the target site, parses the DOM for a specific selector, and logs to the console (with a placeholder for LLM integration). The instructions do not read local secrets or other system files.
Install Mechanism
There is no registry install spec embedded in the skill; SKILL.md instructs running npm install and `npx playwright install chromium`, which downloads Chromium binaries via Playwright's installer. This is expected for Playwright, but it does fetch a large browser binary from Playwright's distribution servers (a known host, not an unknown or personal one).
Credentials
The skill declares no required environment variables or credentials and the code does not access environment variables. If you later add LLM integration, API keys would be needed; as-is there is no credential access.
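If LLM integration is added later, a defensive pattern is to read the key from the environment and fail fast when it is absent. The variable name LLM_API_KEY below is hypothetical; the current skill reads no environment variables at all:

```javascript
// Hypothetical credential lookup for a future LLM step.
// LLM_API_KEY is an assumed name, not one the skill defines.
function getApiKey(env = process.env) {
  const key = env.LLM_API_KEY;
  if (!key) {
    throw new Error('LLM_API_KEY is not set; skipping LLM analysis');
  }
  return key;
}

module.exports = { getApiKey };
```

Failing fast keeps a missing credential from silently degrading output, and taking `env` as a parameter keeps the lookup testable.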
Persistence & Privilege
The `always` flag is false, and the skill does not request permanent elevated presence or modify other skills. Autonomous invocation is allowed (the platform default) but is not combined with other red flags.
Assessment
This skill appears to do what it says: it uses Playwright to render the target site and JSDOM to extract article data. Before installing, consider:
1. Run it in a sandboxed environment, because Playwright will download and execute a Chromium binary.
2. Confirm that scraping the target site complies with its terms/robots policy and your legal/privacy constraints.
3. Review the GitHub repo and authorship (SKILL.md lists a repo) and consider pinning dependency versions.
4. If you enable the not-yet-implemented LLM/hosting features, expect to add API keys; only provide those to trusted code.
5. Basic hygiene: run `npm audit`, limit network access if needed, and inspect or run the script locally before granting broader permissions.
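Pinning dependency versions, as the assessment suggests, means exact versions (no `^` or `~` range prefixes) in package.json. The version numbers below are illustrative only, not a recommendation of specific releases:

```json
{
  "dependencies": {
    "playwright": "1.44.0",
    "jsdom": "24.0.0"
  }
}
```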

Like a lobster shell, security has layers: review code before you run it.

latest: vk976ja7es7986xt6ppv44ryq9x82vxfb

