paper-deep-dive
v1.0.0
Provides a structured, evidence-driven, reader-friendly deep reading of a single paper. Use it when the user asks for an in-depth paper read, detailed analysis, blog-level explanation, research-lineage mapping, method/architecture breakdown, or key-concept explanation, or wants to judge whether the experiments actually support the paper's claims; also for systematic paper interpretation based on the paper's PDF, arXiv page, appendices, official code, and project page.
by @tom-zju
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
Name and description (deep paper analysis) align with the skill contents: SKILL.md and reference docs provide templates, evidence-label rules, visualization guidance and output templates. The skill does not require unrelated binaries, credentials, or config paths.
Instruction Scope
Runtime instructions focus on reading the paper (PDF/arXiv), appendices, official code and project pages, then producing structured analysis with evidence labels and diagrams. There are no instructions to read arbitrary system files, environment variables, or to transmit data to unexpected endpoints. The references are used as internal guidance only.
Install Mechanism
No install spec and no code files — instruction-only. Nothing will be downloaded or written to disk by the skill itself.
Credentials
The skill requests no environment variables, credentials, or config paths. The documented inputs (PDF, arXiv, code repo links) are proportional to the stated purpose.
Persistence & Privilege
`always: false`, and no special privileges are requested. `disable-model-invocation` is false (the normal platform default), but the skill does not ask for persistent presence or for the ability to modify other skills or configs.
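As a rough illustration of the flags discussed above, a skill manifest with these defaults might look like the sketch below. Field names other than `always` and `disable-model-invocation` are assumptions for illustration, not taken from this listing.

```yaml
# Hypothetical SKILL.md frontmatter sketch — fields beyond the two flags
# discussed above are illustrative assumptions, not from this listing.
name: paper-deep-dive
version: 1.0.0
license: MIT-0
always: false                    # not loaded into every session
disable-model-invocation: false  # the model may invoke the skill (platform default)
# No credentials, environment variables, or install steps are declared.
```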
Assessment
This skill is an instruction-only template for producing careful, evidence-tagged paper analyses, and it is internally coherent. Before installing or using it:

1. Be mindful of any PDFs or private code you hand the agent — supplying confidential documents could expose them to networked model calls or logs.
2. The skill expects the agent to fetch and consume external resources when you provide links (arXiv, project pages, GitHub), so confirm your agent and network policies for outbound fetches.
3. The skill's framework reduces but does not eliminate LLM hallucination, so always verify critical claim-to-evidence mappings against the original paper or code.
4. No credentials or system-level access are required, so platform risk from the skill itself is low.

If you need higher assurance, review sample outputs produced by the skill on public papers, and confirm the agent will not automatically fetch resources you don't want shared.
latest: vk97dxc07v4encfxherxwdxmsqs84c4ws
