Academic Composer Upload
Analysis
This skill appears coherent and fully disclosed: its local Python analysis and Semantic Scholar search are aligned with its stated purpose. Users should still weigh the remote-LLM data flow, the dependency setup, and the academic-integrity implications.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
| Capability | Evidence |
| --- | --- |
| `shell` | Runs `scholar.py` ... `pipeline.py` ... `measure.py` |
| `network` | `scholar.py` queries `api.semanticscholar.org` |
The skill intentionally gives the agent local command execution and network access, but the artifacts disclose these uses and tie them to source search and local style analysis.
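The contents of `scholar.py` are not shown in the artifacts, so the following is only a sketch of the kind of disclosed network call the finding describes, using the public Semantic Scholar Graph API paper-search endpoint. The function names here are illustrative, not taken from the skill.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query: str, limit: int = 5) -> str:
    """Construct a paper-search URL against the public Graph API."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,externalIds",
    })
    return f"{API_BASE}?{params}"

def search_papers(query: str, limit: int = 5) -> list:
    """Fetch matching papers; all network access is confined to this call."""
    with urllib.request.urlopen(build_search_url(query, limit), timeout=10) as resp:
        return json.load(resp).get("data", [])
```

Because the endpoint and query parameters are visible in a sketch like this, a reviewer can confirm that the network capability is scoped to source search rather than arbitrary egress.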
spacy>=3.7.0
The setup uses a broadly versioned Python dependency. That is normal for the local writing-analysis feature, but it means the exact code that runs is determined by the user's package index and environment rather than by the skill itself.
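To see why the open-ended specifier matters: `spacy>=3.7.0` admits any future release, including new major versions, so whatever is newest on the user's package index at install time is what runs. A minimal numeric comparison (ignoring PEP 440 edge cases such as pre-releases) makes the point; the helper names are illustrative.

```python
def parse_version(v: str) -> tuple:
    # Minimal numeric-only parser; real installers follow full PEP 440 rules.
    return tuple(int(part) for part in v.split("."))

def satisfies_minimum(installed: str, minimum: str) -> bool:
    """True when installed >= minimum, i.e. what a '>=3.7.0' spec accepts."""
    return parse_version(installed) >= parse_version(minimum)
```

Note that `satisfies_minimum("4.0.0", "3.7.0")` is true: the specifier has no upper bound, so users who want reproducibility would need to pin a version themselves.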
expand into fully cited essays ... If style score > 15: rewrite flagged passages to improve naturalness ... NOT intended for submitting AI-generated content as one's own
The skill can produce polished academic drafts, but the artifacts also include an explicit academic-integrity warning, making this a disclosed misuse risk rather than deceptive behavior.
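The artifacts quote a threshold rule ("If style score > 15: rewrite flagged passages") without showing how scores are produced. The sketch below assumes a hypothetical scored-passage structure; only the `> 15` threshold comes from the skill's own instructions.

```python
STYLE_THRESHOLD = 15  # threshold quoted from the skill's instructions

def flag_passages(scored, threshold=STYLE_THRESHOLD):
    """Return passages whose style score exceeds the threshold.

    `scored` is a hypothetical list of (passage_text, score) pairs;
    flagged passages are the ones the skill would send back for rewriting.
    """
    return [text for text, score in scored if score > threshold]
```

A loop like this is what makes the "improve naturalness" behavior mechanical rather than discretionary, which is why the disclosed integrity warning matters.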
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
If the agent uses a remote model provider ... essay content will be transmitted to that provider as part of the LLM conversation.
The skill clearly discloses that essay generation and rewriting happen through the host LLM, which may be remote and may receive draft content.
