KernelGen FlagOS

v1.0.0

Unified GPU kernel operator generation skill. Automatically detects the target repository type (FlagGems, vLLM, or general Python/Triton) and dispatches to t...

by Flagos (@wbavon)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill claims to generate GPU kernels and to route to three specialized workflows; its instructions require repository inspection, environment checks, and calls to an external kernelgen MCP service. All requested capabilities (Read/Write/Edit/Glob/Grep/Bash and MCP tools) are reasonable for this purpose.
Instruction Scope
Instructions are detailed and stay within the stated workflow (detect repo type, run environment checks, invoke MCP, adapt returned code, submit feedback). They do read repository files and run diagnostic shell commands (e.g., Python imports, nvidia-smi), and the feedback sub-skill will attempt to auto-collect environment info. The only notable scope extension is explicit editing of the project agent config file (.claude/settings.json) to add MCP credentials — this is necessary to enable the MCP but is a privileged write that persists sensitive data.
Install Mechanism
This is an instruction-only skill with no install spec or downloaded code. It instructs using existing tools (pip, gh) when needed. No remote archive downloads or external install scripts are present in the package itself.
Credentials
The skill does not declare required environment variables, but it asks the user to supply an external MCP URL and JWT token and will write them into .claude/settings.json. It also performs runtime environment probes (torch/triton presence, CUDA, nvidia-smi) and may request package installs (with user consent for torch). Requesting and storing the MCP JWT is functionally necessary but sensitive — the token grants the MCP access to calls the skill will send (which may include code and repo context).
Persistence & Privilege
The skill instructs the agent to write/merge the MCP configuration (including a JWT) into .claude/settings.json in the project. This is a persistent, privileged modification of agent configuration and stores secrets locally; the action is justified by the skill's need to call the MCP but increases persistence and blast radius if the MCP or token is untrusted. always:false and autonomous invocation are normal and not set to escalate privileges.
Assessment
This skill appears coherent: it inspects your repository, runs environment checks, and delegates all code generation to an external kernelgen MCP service (it will not generate kernel code locally). Before installing or using it, consider the following:

- The skill will ask you to provide an MCP service URL and a JWT token and will write/merge them into .claude/settings.json in the project. Only provide a token you trust, and review the exact JSON the skill will write. Avoid pasting long-lived credentials you would not want stored in your repo tree.
- The MCP is an external service (https://kernelgen.flagos.io per the docs). Generated code and some repository context will be sent to that service; do not configure it if you cannot share the repository contents or proprietary code with that external provider.
- The skill will run local diagnostics (Python imports, nvidia-smi) and may propose pip installs for non-critical packages; it explicitly requires you to confirm torch installs (it will not auto-install torch). Review any install commands before consenting.
- The feedback sub-skill defaults to creating GitHub Issues (via gh) and will fall back to email automatically if gh is missing. If you prefer not to create issues, explicitly request email.

If you need stronger assurance, ask the publisher for: (1) the exact schema of what will be written to .claude/settings.json, (2) the MCP service's privacy/retention policy, and (3) whether tokens can be scoped or time-limited. Refuse to provide tokens until you confirm those details.


License: MIT-0 · updated 3 weeks ago

kernelgen-flagos — Unified GPU Operator Generation Skill

This is a unified entry point that bundles four sub-skills into one:

| Sub-skill file | Purpose |
| --- | --- |
| kernelgen-general.md | Generate GPU kernels for any Python/Triton repository |
| kernelgen-for-flaggems.md | Specialized generation for FlagGems repositories |
| kernelgen-for-vllm.md | Specialized generation for vLLM repositories |
| kernelgen-submit-feedback.md | Submit bug reports and feedback via GitHub or email |

All sub-skill files are located in the same directory as this SKILL.md file.


Routing Protocol — Follow This BEFORE Doing Anything Else

Phase 1: Detect Repository Type

Use the Glob tool to check for project identity files in the current working directory:

Glob: pyproject.toml
Glob: setup.py
Glob: setup.cfg

Then use the Read tool to read whichever file exists. Determine the project name from the file contents (e.g., name = "flag_gems" in pyproject.toml, or name='vllm' in setup.py).

Also use the Glob tool to check for characteristic directory structures:

FlagGems indicators (match ANY):

  • src/flag_gems/ directory exists
  • Project name is flag_gems or flag-gems or FlagGems
  • import flag_gems appears in test files

vLLM indicators (match ANY):

  • vllm/ directory exists at the repo root (with vllm/__init__.py)
  • Project name is vllm
  • csrc/ directory exists alongside vllm/

Phase 2: Dispatch to Sub-skill

Based on the detection result, use the Read tool to read the appropriate sub-skill file from this skill's directory, then follow the instructions in that file exactly.

To locate the sub-skill files: They are in the same directory as this SKILL.md. Use the Glob tool to find the path:

Glob: **/skills/kernelgen-flagos/kernelgen-general.md

Then use the Read tool to read the matched path.

Decision Table

| Detection result | Action |
| --- | --- |
| FlagGems repository detected | Read kernelgen-for-flaggems.md and follow it |
| vLLM repository detected | Read kernelgen-for-vllm.md and follow it |
| Neither detected (or unknown) | Read kernelgen-general.md and follow it |
| User reports a bug or requests feedback submission | Read kernelgen-submit-feedback.md and follow it |

Important rules:

  1. Always detect first, dispatch second. Never skip detection.
  2. Read the entire sub-skill file before starting execution — do not partially read it.
  3. Follow the sub-skill instructions exactly as if they were the main SKILL.md. All steps, rules, and protocols in the sub-skill apply fully.
  4. Do not mix sub-skills. Once you dispatch to a sub-skill, follow it to completion.
  5. If the user explicitly requests a specific sub-skill (e.g., "use the FlagGems version"), honor that request regardless of auto-detection results.
  6. CRITICAL — MCP is mandatory: ALL operator code generation MUST go through the mcp__kernelgen-mcp__generate_operator MCP tool. NEVER generate Triton kernels, PyTorch wrappers, or operator implementations yourself. If MCP is not configured, not reachable, or fails after all retries, STOP and report the issue — do NOT fall back to writing code manually.

Phase 3: Feedback Handling

At any point during the workflow, if the user reports a bug, says something is broken, or asks to submit feedback about the skill:

  1. Use the Read tool to read kernelgen-submit-feedback.md from this skill's directory.
  2. Follow the feedback submission workflow described in that file.
  3. After feedback is submitted, ask the user if they want to continue with the operator generation workflow or stop.

Quick Reference for Users

# Generate a kernel operator (auto-detects repo type)
/kernelgen-flagos relu

# Generate with explicit function type
/kernelgen-flagos rms_norm --func-type normalization

# The skill will automatically:
# - Detect if you're in a FlagGems repo → use FlagGems-specific workflow
# - Detect if you're in a vLLM repo → use vLLM-specific workflow
# - Otherwise → use the general-purpose workflow

If you encounter any issues during generation, just say "submit feedback" or "report a bug" and the skill will guide you through the feedback submission process.
