Karpathy Coding Guidelines

v1.0.0

Behavioral guidelines to reduce common LLM coding pitfalls, derived from Andrej Karpathy's observations. Apply these four principles when writing, editing, or reviewing code.

by Garming (@wujiaming88)
Security Scan
VirusTotal: Benign (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the SKILL.md content: four coding-guideline principles. The skill declares no binaries, env vars, or installs that would be unrelated to a behavioral guideline.
Instruction Scope
Runtime instructions are guidance for agent behavior on coding tasks (ask assumptions, prefer simplicity, make surgical changes, define verifiable goals). They do not direct the agent to read specific system files, access external endpoints, or exfiltrate data.
Install Mechanism
No install spec and no code files — instruction-only. This is the lowest-risk install model and aligns with the stated purpose.
Credentials
No environment variables, credentials, or config paths requested. Nothing disproportionate to a behavioral guideline.
Persistence & Privilege
The skill's "always" flag is false, and the skill does not request persistent system privileges or modify other skills. Normal autonomous invocation is allowed (the platform default), which is not problematic here.
Assessment
This skill is an instruction-only behavioral guideline and appears coherent and low-risk: it cannot access your system or secrets by itself. Consider: (1) provenance — the source is unknown and has no homepage or author metadata, so trust comes from reviewing the guidelines themselves (which you did here); (2) it only influences agent behavior — verify the agent actually follows these rules by reviewing diffs and tests it produces; (3) if you have team styling/exception rules, combine or override these guidelines explicitly so the agent asks clarifying questions rather than guessing. No additional technical safeguards appear necessary for installation, but monitor outputs as you would any automated coding assistance.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97882bb392hzksh9x0t6x663h8572g9
7 downloads
0 stars
1 version
Updated 4h ago
v1.0.0
MIT-0

Karpathy Coding Guidelines

Four principles to reduce common LLM coding mistakes. Bias toward caution over speed; for trivial tasks, use judgment.

1. Think Before Coding

Don't assume. Don't hide confusion. Surface tradeoffs.

Before implementing:

  • State assumptions explicitly. If uncertain, ask.
  • If multiple interpretations exist, present them — don't pick silently.
  • If a simpler approach exists, say so. Push back when warranted.
  • If something is unclear, stop. Name what's confusing. Ask.
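One way stating assumptions can look in practice, sketched with an invented parse_date task (the function name and both assumptions are illustrative, not from the skill):

```python
# Hypothetical sketch: before implementing "parse the date field",
# write the assumptions down so the user can correct them.
#
# Assumptions (ask if either is wrong):
#   1. Dates arrive as ISO 8601 strings, e.g. "2024-05-01".
#   2. A missing date is an error, not a silent default to today.

from datetime import date

def parse_date(raw: str) -> date:
    if not raw:
        raise ValueError("date field is required (see assumption 2)")
    return date.fromisoformat(raw)

print(parse_date("2024-05-01"))  # 2024-05-01
```

If either assumption turns out wrong, only the two commented lines and a small piece of code change, rather than a silently mis-designed implementation.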

2. Simplicity First

Minimum code that solves the problem. Nothing speculative.

  • No features beyond what was asked.
  • No abstractions for single-use code.
  • No "flexibility" or "configurability" that wasn't requested.
  • No error handling for impossible scenarios.
  • If 200 lines could be 50, rewrite it.

Test: Would a senior engineer say this is overcomplicated? If yes, simplify.
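A minimal sketch of passing that test, using a made-up request ("reject empty usernames"); the names are illustrative:

```python
# Hypothetical sketch: the request was only "reject empty usernames".
# The function below is the whole solution. A ValidatorFactory with
# pluggable rule objects would fail the "overcomplicated" test above,
# because nothing in the request asked for configurability.

def validate_username(name: str) -> bool:
    """Return True only for usernames that are non-empty after trimming."""
    return bool(name.strip())

print(validate_username("karpathy"))  # True
print(validate_username("   "))       # False
```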

3. Surgical Changes

Touch only what you must. Clean up only your own mess.

When editing existing code:

  • Don't "improve" adjacent code, comments, or formatting.
  • Don't refactor things that aren't broken.
  • Match existing style, even if you'd do it differently.
  • If you notice unrelated dead code, mention it — don't delete it.

When your changes create orphans:

  • Remove imports/variables/functions that YOUR changes made unused.
  • Don't remove pre-existing dead code unless asked.

Test: Every changed line should trace directly to the user's request.
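A small sketch of the orphan rule, with invented function names: removing a feature also removes the import that only it used, while pre-existing dead code stays.

```python
# Hypothetical "after" state for the request "drop the CSV export".
# `import csv` and export_csv() were removed because THIS change
# orphaned them; legacy_parse() was already dead before the change,
# so it stays untouched (mention it to the user, don't delete it).

import json  # still used by export_json below

def legacy_parse(raw: str):
    # pre-existing dead code: flagged, not removed
    return raw.splitlines()

def export_json(records) -> str:
    return json.dumps(records)

print(export_json([1, 2]))  # [1, 2]

# removed by this change: import csv
# removed by this change: def export_csv(records): ...
```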

4. Goal-Driven Execution

Define success criteria. Loop until verified.

Transform tasks into verifiable goals:

  • "Add validation" → "Write tests for invalid inputs, then make them pass"
  • "Fix the bug" → "Write a test that reproduces it, then make it pass"
  • "Refactor X" → "Ensure tests pass before and after"
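The second transformation can be sketched with a made-up bug (names and numbers are hypothetical):

```python
# Hypothetical sketch of "Fix the bug" -> "write a test that
# reproduces it, then make it pass".
# Reported bug: order totals ignored quantity. The test below failed
# against the buggy version (which summed unit prices only) and now
# serves as the verifiable success criterion for the fix.

def total_price(items):
    """items: list of (unit_price, quantity) pairs."""
    return sum(price * qty for price, qty in items)

def test_quantity_is_counted():
    assert total_price([(2.0, 3)]) == 6.0  # buggy version returned 2.0
    assert total_price([]) == 0

test_quantity_is_counted()  # passes silently once the fix is in
```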

For multi-step tasks, state a brief plan:

1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]

Strong success criteria enable independent looping. Weak criteria ("make it work") require constant clarification.


Working indicators: fewer unnecessary changes in diffs, fewer rewrites due to overcomplication, and clarifying questions asked before implementation rather than after mistakes.
