Karpathy Compile

v1.0.0

Compile raw wiki entries from Phase 1 into structured, distilled knowledge points using LLM, grouping by topic and saving refined outputs.

bysune@sora-mury
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (compile wiki → knowledge points) align with the code: it parses wiki markdown, groups entries by tag, calls an LLM distiller, and writes markdown outputs. One minor incoherence: SKILL.md lists M-Flow as a dependency, but the included code never calls or integrates with any M-Flow API—it only writes local files. The docs reference Ollama/qwen while the code uses an OpenAI-compatible client pointed at a default local endpoint, which is consistent.
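The parse-and-group step described above can be sketched as follows. This is a minimal illustration, not the skill's actual code; the function names, the `tags:` line format, and the sample entries are all assumptions.

```python
# Sketch of the compile pipeline's grouping step (hypothetical names):
# parse a "tags:" line from each wiki entry and bucket entries by topic
# before handing each group to the distiller.
import re
from collections import defaultdict

def parse_tags(markdown_text):
    """Extract a 'tags: a, b' line from a wiki entry; return a list of tags."""
    match = re.search(r"^tags:\s*(.+)$", markdown_text, flags=re.MULTILINE)
    if not match:
        return []
    return [t.strip() for t in match.group(1).split(",") if t.strip()]

def group_by_tag(entries):
    """entries: dict of filename -> markdown text. Returns tag -> [filenames]."""
    groups = defaultdict(list)
    for name, text in entries.items():
        for tag in parse_tags(text) or ["untagged"]:
            groups[tag].append(name)
    return dict(groups)

# Hypothetical Phase 1 wiki entries:
entries = {
    "attention.md": "# Attention\ntags: transformers, nlp\n...",
    "sgd.md": "# SGD\ntags: optimization\n...",
}
```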
Instruction Scope
SKILL.md and the code keep the scope to reading Phase 1 wiki files and producing knowledge-points files. However, the tests in scripts/test_compile.py and scripts/test_e2e.py dynamically load other skills (phase 1, lint, retrieval) from sibling directories; running those tests executes other skills' code, which may have its own side effects. The compile pipeline itself reads only local knowledge/wiki files and writes to knowledge/knowledge-points.
Install Mechanism
This is instruction-only at install level (no install spec). Files are included in the skill bundle; nothing is downloaded or installed during install. The code does import openai, but there is no install step declared—this is an operational/runtime dependency rather than an install-time risk.
Credentials
The skill declares no required environment variables or credentials, which is consistent with local file I/O. Implementation detail: LLMDistiller hardcodes endpoint='http://localhost:11434/v1' and api_key='ollama' and uses the openai client; this is odd but not directly dangerous. It means the code expects a local Ollama-compatible server by default. If someone changes the endpoint to a remote URL, the pipeline would send wiki content to that endpoint — so users should ensure the LLM endpoint is trusted before running. No requests for unrelated credentials or config paths are present.
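The hardcoded defaults described above can be summarized in sketch form (the class and attribute names here are assumptions, not the skill's identifiers), along with a simple check for whether an endpoint stays on the local machine:

```python
# Sketch of the distiller's defaults as described in the review
# (hypothetical names): an OpenAI-compatible client pointed at a local
# Ollama server with a placeholder API key.
from dataclasses import dataclass

@dataclass
class DistillerConfig:
    # The 'ollama' key is a placeholder that Ollama-compatible servers
    # ignore, not a real secret.
    endpoint: str = "http://localhost:11434/v1"
    api_key: str = "ollama"

    def is_local(self) -> bool:
        """True only for loopback endpoints; anything else would send
        wiki content off-machine and deserves scrutiny."""
        return self.endpoint.startswith(("http://localhost", "http://127.0.0.1"))
```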
Persistence & Privilege
The skill is not always-enabled and is user-invocable; it does not request persistent platform privileges or modify other skills' configs. It writes outputs only to a local knowledge/knowledge-points directory inside the repository tree.
Assessment
This skill appears to do what it says: parse local wiki markdown, distill topics with an LLM, and save knowledge-point markdown files. Before running or installing:

1. Ensure the distillation runs only against a trusted LLM endpoint. By default it targets http://localhost:11434 with a hardcoded api_key value, so either run a local Ollama-compatible server or update the code to use a secure, authenticated endpoint you control.
2. Be cautious running the provided test scripts (test_compile.py, test_e2e.py): they dynamically import and execute sibling skills (phase1, lint, retrieval), which may perform network access or require credentials. Review those other skills first.
3. Note that SKILL.md mentions M-Flow, but this package does not integrate with it. If you expect M-Flow integration, verify or extend the code.
4. Install the openai Python package and, if you prefer, modify the code to read API keys from environment variables rather than using a hardcoded string.

If you need me to, I can review the other phase skill files (karpathy-query-feedback, karpathy-lint, dual-retrieval) to surface any additional concerns.
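The environment-variable recommendation could look something like the sketch below. The variable names (`LLM_ENDPOINT`, `LLM_API_KEY`) are hypothetical, not something the skill currently reads; the fallback values match the hardcoded defaults described in this review.

```python
# One way to implement the env-var recommendation (a sketch, not the
# skill's code): read the endpoint and API key from the environment,
# falling back to the local Ollama defaults only when unset.
import os

def resolve_llm_settings():
    # LLM_ENDPOINT / LLM_API_KEY are hypothetical variable names.
    endpoint = os.environ.get("LLM_ENDPOINT", "http://localhost:11434/v1")
    api_key = os.environ.get("LLM_API_KEY", "ollama")
    return endpoint, api_key
```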


latest: vk97fv87gec18dx7865bp8yd0cn848jq4

