phd-research-companion
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Findings (0)
Advisory notes
Some documented commands may fail, and a user or agent might be tempted to fetch or create missing helper files outside the reviewed package.
The documentation references a run wrapper and create_experiment_design.py, but those files are not listed in the supplied file manifest; this is a package completeness/provenance gap rather than evidence of malicious behavior.
├── run                          # Quick CLI wrapper for all commands
...
├── create_experiment_design.py  # Comparison/ablation/robustness YAML configs
Verify the installed package contains the documented wrapper and scripts before use; do not add unreviewed replacement files from unknown sources.
A configured cron/background task could keep running searches and writing logs/results after the original session.
The skill documents optional scheduled recurring execution for literature updates. This is disclosed and purpose-aligned, but it creates persistence if the user enables it.
Cron Job Integration (scheduled automation tasks) ... set up a cron job: ... 0 8 * * * cd /home/user/workspace/skills/phd-research-companion/scripts && python multi_source_search.py
Only enable background or cron workflows when needed, keep outputs inside a project directory, and remove the cron entry or stop background jobs when finished.
If a user runs this test helper, top-level code fragments from other package scripts could execute unexpectedly.
The test helper dynamically executes the beginning of local script files to check syntax, rather than only parsing or compiling them.
exec(open(full_path).read().split('if __name__')[0][:1000])  # Basic syntax check
Prefer a safer syntax check such as py_compile or ast.parse, and only run the test helper from a reviewed copy of the skill.
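A safer check, along the lines of the recommendation, compiles the source without executing it: ast.parse raises SyntaxError on invalid code and runs none of the file's module-level statements. A minimal sketch:

```python
import ast

def check_syntax(path: str) -> bool:
    """Parse a Python file for syntax errors without executing any of its code."""
    try:
        with open(path, encoding="utf-8") as f:
            ast.parse(f.read(), filename=path)
        return True
    except SyntaxError as err:
        print(f"{path}:{err.lineno}: {err.msg}")
        return False
```

Unlike the exec-based check, this never runs top-level code from the scanned scripts, so a reviewed test helper cannot trigger side effects in unreviewed files.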
Users may overestimate the reliability or completeness of generated literature results and compliance outputs.
The code marks key literature-search functionality as mock/TODO, while the documentation presents multi-source literature aggregation as an available automated feature.
Search arXiv API (mock implementation - replace with actual requests). ... # TODO: Implement actual arXiv API request
Treat generated outputs as drafts, manually verify citations/search coverage, and confirm real API support before relying on the workflow for research decisions.
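The arXiv API the TODO refers to does exist (the public Atom endpoint at export.arxiv.org). As a minimal sketch of what "actual API support" would start from, the snippet below builds a real query URL without performing any request; the all-fields search prefix and parameter values are illustrative assumptions:

```python
from urllib.parse import urlencode

def arxiv_query_url(terms: str, max_results: int = 10) -> str:
    """Build a query URL for the public arXiv Atom API (no request is made here)."""
    params = {
        "search_query": f"all:{terms}",  # all-fields search; other prefixes exist
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

print(arxiv_query_url("continual learning", max_results=5))
```

Whether the installed skill actually issues such requests, and how it parses the Atom response, still needs to be verified against the code before trusting its literature coverage.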
