Jason Academic Writing

v1.0.1

Complete academic paper writing pipeline with integrity checks and multi-agent review system. Optimized prompts for Methods/Results/Discussion sections. Feat...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (end-to-end academic paper pipeline) align with the included scripts and declared requirements. OpenAI env vars are required because multiple stages call an LLM; research and integrity steps call Semantic Scholar and CrossRef as documented. No unrelated cloud credentials or surprising binaries are requested.
Instruction Scope
SKILL.md instructs running the included Python scripts and references only pipeline inputs/outputs. Runtime behavior includes sending the full manuscript text to an external LLM and calling public APIs (Semantic Scholar, CrossRef), which is expected for this purpose. One notable implicit behavior: several scripts call dotenv.load_dotenv(), which loads a .env file from the working directory and can therefore read local secrets not declared in SKILL.md. In addition, model output is extracted by regex and then passed to json.loads, a fragile approach that may misinterpret or leak unstructured text, and integrity_check's claim verification is partly a placeholder. These are scope and robustness issues rather than evidence of misdirection.
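To make the implicit dotenv behavior concrete, here is a minimal stdlib sketch of what a bare load_dotenv() call effectively does (the real python-dotenv library additionally searches parent directories and handles quoting and variable interpolation):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> dict:
    """Sketch of dotenv.load_dotenv()'s effective behavior: parse
    KEY=VALUE lines from a .env file (by default, in the current
    working directory) and merge them into os.environ."""
    loaded = {}
    p = Path(path)
    if not p.is_file():
        return loaded  # silently does nothing when no .env exists
    for line in p.read_text().splitlines():
        line = line.strip()
        # skip blanks, comments, and malformed lines
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip().strip("'\"")
    # this is the security-relevant step: whatever the file contains
    # becomes process-wide environment state
    os.environ.update(loaded)
    return loaded
```

The point for reviewers: any secret sitting in a .env in the launch directory is silently absorbed into the process environment, whether or not SKILL.md documents it.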
Install Mechanism
Instruction-only skill with bundled Python scripts and a requirements.txt. No remote install/downloads, no archive extraction, and no external install URLs. Risk level for installation is low; standard Python dependencies (requests, python-dotenv, openai) are listed.
Credentials
Declared required env vars (OPENAI_API_KEY, OPENAI_BASE_URL) are appropriate for LLM-driven review/revise stages. Extra behavior: scripts use dotenv.load_dotenv() which may read a .env file and set additional environment variables (e.g., LLM_MODEL). That could result in reading local secrets not documented in the skill metadata. No unrelated credentials (AWS, cloud provider keys, DB passwords) are requested.
Persistence & Privilege
The skill's always flag is false, and it does not request persistent platform privileges. It writes outputs and reports to a working directory (expected for a pipeline) and does not modify other skill configurations or system-wide settings.
Scan Findings in Context
[requests-crossref-semanticscholar] expected: scripts/integrity_check.py and scripts/research.py call CrossRef and Semantic Scholar APIs to verify citations and gather literature — required for citation/integrity checks.
[openai-client-usage] expected: scripts/review.py and scripts/revise.py instantiate an OpenAI client and send manuscript text to an LLM for reviewer/synthesis and revision tasks — consistent with the skill's described multi-agent review functionality.
[dotenv-load] unexpected: Multiple scripts call dotenv.load_dotenv(). This is common for local dev but is not documented in SKILL.md; it means the skill may read a .env file in the working directory (potentially exposing additional secrets).
[regex-json-extraction] expected: Review and synthesizer code extract JSON from LLM responses using regex before json.loads. This is brittle and may fail or mis-parse outputs; it's an implementation concern rather than a mismatch with the skill's purpose.
[subprocess-run] expected: scripts/main.py uses subprocess.run to orchestrate pipeline stages (calls the included Python scripts). That's expected for an orchestrator script.
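As a concrete illustration of the regex-json-extraction concern, a more defensive parser (a hypothetical replacement, not the skill's actual code) would try json.loads on the whole response first and fall back to scanning for a balanced object, returning None instead of raising on unstructured text:

```python
import json

def extract_json(text: str):
    """Defensively pull a JSON object out of free-form LLM output.
    Returns the parsed object, or None when nothing parseable is found.
    Limitation (it is a sketch): a brace inside a JSON string value
    would confuse the balance counter."""
    try:
        return json.loads(text)  # best case: the whole reply is JSON
    except json.JSONDecodeError:
        pass
    depth = 0
    start = None
    for i, ch in enumerate(text):
        if ch == "{":
            if depth == 0:
                start = i
            depth += 1
        elif ch == "}" and depth > 0:
            depth -= 1
            if depth == 0:
                # candidate balanced object; keep scanning if it fails
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    start = None
    return None
```

Compared with a single regex, this degrades gracefully when the model wraps its JSON in commentary or emits no JSON at all.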
Assessment
High-level: the skill appears to do what it says: an end-to-end academic writing pipeline that queries Semantic Scholar/CrossRef and sends manuscript content to an LLM for multi-role review and revision. Before installing or running:

- Review your OPENAI_BASE_URL and OPENAI_API_KEY policies. The pipeline sends full manuscript text to the external LLM; use an API key with appropriate scope, and rotate or limit it if needed. Avoid organization-wide keys if you have privacy concerns.
- Check for a .env file in the working directory (or other directories dotenv might search). The scripts call load_dotenv(), so local secrets in a .env could be read and used; remove or audit any .env before running.
- Inspect and test in an isolated environment (container or throwaway VM), because the skill writes files and makes network calls.
- Be aware that the code parses LLM output with brittle regex-based JSON extraction and contains at least one partially truncated or buggy function (review.py shows non-robust math and a truncated fallback). Expect runtime errors or unexpected reviewer outputs, and plan to validate results manually.
- If you will process unpublished or sensitive manuscripts, treat outputs and logs carefully: they are stored locally and may be transmitted to third-party APIs.

To harden the scripts, consider removing the load_dotenv calls, adding JSON-safe parsing wrappers, and sanitizing or excerpting the text sent to the LLM.
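The pre-run checks above can be collected into a small audit helper; this is an illustrative sketch (names like preflight and REQUIRED_VARS are ours, not part of the skill):

```python
import os
from pathlib import Path

# Env vars the skill declares as required in its metadata.
REQUIRED_VARS = ("OPENAI_API_KEY", "OPENAI_BASE_URL")

def preflight(workdir: str = ".") -> list:
    """Return warnings to review before running the pipeline:
    an undeclared .env that load_dotenv() would silently read,
    and any missing required environment variables."""
    warnings = []
    if (Path(workdir) / ".env").is_file():
        warnings.append(
            ".env present in working directory; the scripts call "
            "load_dotenv() and will read it. Audit or remove it first."
        )
    for var in REQUIRED_VARS:
        if not os.environ.get(var):
            warnings.append("required environment variable %s is not set" % var)
    return warnings
```

Running this before main.py makes the dotenv exposure and missing-credential failures visible up front instead of mid-pipeline.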

Like a lobster shell, security has layers — review code before you run it.

latest: vk979f4z5trak9sx81kmz4hn01x84e67p


Runtime requirements

Env: OPENAI_API_KEY, OPENAI_BASE_URL
