Coding Guidelines

v1.0.0

by hu tian (@hu-xiao-tian)
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name, description, and SKILL.md all describe behavioral guidelines for writing/reviewing/refactoring code. There are no unrelated env vars, binaries, or install steps requested.
Instruction Scope
SKILL.md contains only high-level procedural guidelines (assumptions, simplicity, surgical edits, verifiable goals). It does not instruct the agent to read arbitrary files, call external endpoints, access credentials, or run shell commands. The guidance is broad by design but stays within the stated purpose.
Install Mechanism
No install spec and no code files — instruction-only skill. Nothing is written to disk or downloaded.
Credentials
No environment variables, credentials, or config paths requested; the requested privileges are minimal and proportional to a style guide.
Persistence & Privilege
Skill is not always-on, is user-invocable, and allows model invocation (the platform default). It does not request persistent presence or modify other skills or system settings.
Assessment
This skill is a benign, instruction-only style guide — safe from a credential or install perspective. Before installing, consider whether the specific recommendations match your team's coding standards (it biases toward minimalism and surgical edits) and whether you want an agent to adopt these constraints (they may reduce creativity or propensity to refactor). Because it’s instruction-only, there’s no code to audit; you can try it on a small task to confirm behavior. If you need different conventions, maintain your own variant of the guidelines or adjust prompts accordingly.

License: MIT-0

Karpathy Guidelines

Behavioral guidelines to reduce common LLM coding mistakes, derived from Andrej Karpathy's observations on LLM coding pitfalls.

Tradeoff: These guidelines bias toward caution over speed. For trivial tasks, use judgment.

1. Think Before Coding

Don't assume. Don't hide confusion. Surface tradeoffs.

Before implementing:

  • State your assumptions explicitly. If uncertain, ask.
  • If multiple interpretations exist, present them - don't pick silently.
  • If a simpler approach exists, say so. Push back when warranted.
  • If something is unclear, stop. Name what's confusing. Ask.

2. Simplicity First

Minimum code that solves the problem. Nothing speculative.

  • No features beyond what was asked.
  • No abstractions for single-use code.
  • No "flexibility" or "configurability" that wasn't requested.
  • No error handling for impossible scenarios.
  • If you write 200 lines and it could be 50, rewrite it.

Ask yourself: "Would a senior engineer say this is overcomplicated?" If yes, simplify.
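
As a hypothetical illustration (the task, function name, and format are invented for this sketch), suppose the request is "parse KEY=VALUE lines into a dict". The minimal version does exactly that and nothing more:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines: no sections, no interpolation,
    no speculative "flexibility" beyond what was asked."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and comment lines; everything else is KEY=VALUE.
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs
```

No config class, no pluggable parsers, no error handling for files that cannot occur. If the requirements grow later, the code can grow then.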

3. Surgical Changes

Touch only what you must. Clean up only your own mess.

When editing existing code:

  • Don't "improve" adjacent code, comments, or formatting.
  • Don't refactor things that aren't broken.
  • Match existing style, even if you'd do it differently.
  • If you notice unrelated dead code, mention it - don't delete it.

When your changes create orphans:

  • Remove imports/variables/functions that YOUR changes made unused.
  • Don't remove pre-existing dead code unless asked.

The test: Every changed line should trace directly to the user's request.
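
A minimal sketch of the orphan rule, with an invented module and an invented request ("make status() return plain text instead of JSON"):

```python
# Hypothetical module after the surgical edit. The only use of json.dumps
# was inside status(), so the edit also removes the now-orphaned
# `import json` that the change itself made unused.

LEGACY_MODES = ["verbose", "quiet"]  # pre-existing dead code: mention it, leave it

def status(ok: bool) -> str:
    # Was: return json.dumps({"ok": ok}). The requested change ends here;
    # nothing adjacent was reformatted or "improved".
    return "ok" if ok else "error"
```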

4. Goal-Driven Execution

Define success criteria. Loop until verified.

Transform tasks into verifiable goals:

  • "Add validation" → "Write tests for invalid inputs, then make them pass"
  • "Fix the bug" → "Write a test that reproduces it, then make it pass"
  • "Refactor X" → "Ensure tests pass before and after"
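
For example, a hypothetical "fix the bug" task on an invented slugify() helper: write the reproducing test first, then make it pass:

```python
import re

def slugify(title: str) -> str:
    # Fix: collapse any run of whitespace into a single hyphen.
    # (The buggy version used title.replace(" ", "-"), which turned
    # repeated spaces into repeated hyphens.)
    return re.sub(r"\s+", "-", title.strip().lower())

def test_collapses_repeated_spaces():
    # Written first, against the buggy version, to reproduce the report.
    assert slugify("Hello   World") == "hello-world"

test_collapses_repeated_spaces()
```

The test defines success before any fix is attempted, so "done" is verifiable rather than a matter of opinion.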

For multi-step tasks, state a brief plan:

1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]

Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
