ClawHub Security flagged this skill as suspicious. Review the scan results before using.
CTF Writeup
v1.0.0
Generates a single standardized submission-style CTF writeup for competition handoff and organizer review. Use after solving a CTF challenge to document the...
Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for gandli/ctf-writeup.
Prompt Preview: Install & Setup
Install the skill "Ctf Writeup" (gandli/ctf-writeup) from ClawHub.
Skill page: https://clawhub.ai/gandli/ctf-writeup
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.
CLI Commands
Use the direct CLI path if you want to install manually and keep every step visible.
! Name & Description
The name and description match the instructions: generating a submission-style writeup from challenge files. Requiring a filesystem-capable agent and use of bash/Python is coherent. Minor mismatch: allowed-tools includes WebFetch/WebSearch, which are not referenced in the instructions and are unnecessary for offline writeup generation.
! Instruction Scope
The SKILL.md explicitly instructs the agent to run broad filesystem scans (find . and recursive grep) and to collect 'solution artifacts' and flags from the current session and challenge files. Those commands can read any file under the working directory (or repository root), potentially exposing unrelated sensitive files. The guidance 'Redact the flag only if the user explicitly asks for redaction' means the agent will include real flags by default, raising a risk of leaking sensitive contents.
✓ Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is written to disk by an installer and no external packages are fetched. This is the lowest-risk install profile.
! Credentials
No environment variables or credentials are requested (good). However, the instructions permit reading arbitrary files and session data; that filesystem access can expose secrets (API keys, private notes, other flags) even though no env vars are declared. The skill does not require or justify broad filesystem access beyond the challenge directory.
✓ Persistence & Privilege
always: false, and no requests to modify agent/system configuration. The skill does not request permanent or elevated privileges.
What to consider before installing
This skill is generally coherent for producing CTF writeups, but it instructs the agent to search the entire working directory and to include real flags by default. Before installing or running it:
1. Run it only in an isolated or sanitized challenge directory, or on a copy of the challenge files (not your home directory or repo root).
2. Explicitly tell the skill to redact flags or sensitive data if you don't want real secrets in the output.
3. Consider removing or narrowing the find/grep lines to target a known challenge folder.
4. Avoid giving it web/network tools unless you need them.
5. If you're unsure, test the skill on a non-sensitive sample to observe what files it reads and what it outputs.
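Point (1) above can be sketched as a small helper that copies only the challenge files into a throwaway sandbox before the skill runs, so its recursive scans cannot reach unrelated files. This is an illustrative sketch, not part of the skill; `challenge_dir` and the temp-directory layout are assumptions.

```python
import shutil
import tempfile
from pathlib import Path

def make_sandbox(challenge_dir: str) -> Path:
    """Copy the challenge files into a fresh temp directory so that
    find/grep-style scans stay scoped to the challenge alone."""
    src = Path(challenge_dir).resolve()
    dest = Path(tempfile.mkdtemp(prefix="ctf-sandbox-")) / src.name
    shutil.copytree(src, dest)
    return dest
```

Run the skill with the returned path as its working directory, then delete the sandbox when the writeup is done.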
Like a lobster shell, security has layers — review code before you run it.
latest: vk970t7fr2mz4xqyd6hsvhy8dnh83xxjv
123 downloads
0 stars
1 version
Updated 4w ago
v1.0.0
MIT-0
# CTF Write-up Generator
Generate a standardized submission-style CTF writeup for a solved challenge.
Default behavior:
- During an active competition, optimize for speed, clarity, and reproducibility
- Keep writeups short enough that a teammate or organizer can validate the solve quickly
- Always produce a submission-style writeup
- Prefer one complete solve script, from challenge data to final flag
## Workflow
### Step 1: Gather Information
Collect the following from the current session, challenge files, and user input:
- Challenge metadata — name, CTF event, category, difficulty, points, flag format
```bash
# Scan for exploit scripts and artifacts
find . -name '*.py' -o -name '*.sh' -o -name 'exploit*' -o -name 'solve*' | head -20

# Check for flags in output files
grep -rniE '(flag|ctf|eno|htb|pico)\{' . 2>/dev/null
```
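The review notes above suggest narrowing these scans to a known challenge folder rather than the whole working directory. A hedged Python sketch of such a scoped scan follows; the `challenge` default path is a placeholder, and the flag pattern mirrors the grep above:

```python
import re
from pathlib import Path

# Same flag prefixes as the grep command above.
FLAG_RE = re.compile(r'(flag|ctf|eno|htb|pico)\{[^}]*\}', re.IGNORECASE)

def scan(challenge_dir: str = "challenge"):
    """Scoped equivalent of the find/grep commands: list solver
    scripts and collect flag-like strings, but only under
    challenge_dir instead of the entire working directory."""
    root = Path(challenge_dir)
    scripts = [p for p in root.rglob("*")
               if p.is_file()
               and (p.suffix in (".py", ".sh")
                    or p.name.startswith(("exploit", "solve")))]
    hits = []
    for p in root.rglob("*"):
        if not p.is_file():
            continue
        try:
            text = p.read_text(errors="ignore")
        except OSError:
            continue
        hits += [(p, m.group(0)) for m in FLAG_RE.finditer(text)]
    return scripts, hits
```

Anything outside `challenge_dir` — dotfiles, keys, notes — is never read.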
### Step 2: Generate Write-up
Write the writeup file as writeup.md (or writeup-<challenge-name>.md) using the submission template below.
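A minimal sketch of this step, assuming the metadata has already been gathered into a dict. The field names follow the frontmatter template in the next section, and the filename logic mirrors the writeup-&lt;challenge-name&gt;.md convention; the helper itself is illustrative, not part of the skill:

```python
from pathlib import Path

# Frontmatter fields taken from the submission template.
FRONTMATTER = """---
title: "{title}"
ctf: "{ctf}"
date: {date}
category: {category}
difficulty: {difficulty}
points: {points}
flag_format: "{flag_format}"
author: "{author}"
---
"""

def write_writeup(meta: dict, body: str, out_dir: str = ".") -> Path:
    """Render the frontmatter plus writeup body and save it as
    writeup-<challenge-name>.md in out_dir."""
    slug = meta["title"].lower().replace(" ", "-")
    path = Path(out_dir) / f"writeup-{slug}.md"
    path.write_text(FRONTMATTER.format(**meta) + body)
    return path
```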
## Templates
### Submission Format
---
title: "<Challenge Name>"
ctf: "<CTF Event Name>"
date: YYYY-MM-DD
category: web|pwn|crypto|reverse|forensics|osint|malware|misc
difficulty: easy|medium|hard
points: <number>
flag_format: "flag{...}"
author: "<your name or team>"
---
# <Challenge Name>
## Summary
<1-2 sentences: what the challenge was and the core technique. Keep it direct.>
## Solution
### Step 1: <Action>
<Explain the key observation in 3-8 short lines. Keep it direct.>
\`\`\`python
<one complete solving script from provided challenge data to printing the final flag>
\`\`\`
### Step 2: <Action> (optional)
<Only add this when a second short step genuinely helps readability, such as separating the core observation from final verification.>
### Step 3: <Action> (optional)
<Use only if the challenge really needs it. Keep the total number of steps small.>
## Flag
\`\`\`
flag{example_flag_here}
\`\`\`
Guidance:
- Prefer 1-3 short steps total
- Keep code to the smallest complete solving script
- Do not split "recover secret", "derive key", and "decrypt flag" into separate partial snippets
- The script should start from the challenge data and end by printing the flag
- Avoid long background sections
- Avoid dead ends unless they explain a key pivot
- Avoid multiple alternative solves; pick one clean path
- Redact the flag only if the user explicitly asks for redaction
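As an illustration of the "one complete solving script" guidance, here is a minimal sketch for a hypothetical single-byte-XOR crypto challenge. The ciphertext and flag are invented for this example (not from any real challenge); the point is the shape: one script, starting from the challenge data and ending by printing the flag.

```python
# Hypothetical challenge data (invented for this example):
# the flag XORed with a single unknown key byte.
CIPHERTEXT = bytes.fromhex(
    "4c464b4d515245587543597544455e754f444958535a5e43454457"
)

def solve() -> str:
    # Brute-force the key byte; only the correct key yields a
    # flag{...}-shaped plaintext.
    for key in range(256):
        plain = bytes(b ^ key for b in CIPHERTEXT)
        if plain.startswith(b"flag{") and plain.endswith(b"}"):
            return plain.decode()
    raise ValueError("no key produced a flag-shaped plaintext")

if __name__ == "__main__":
    print(solve())
```

Note how "recover key" and "decrypt flag" live in one script rather than separate partial snippets, matching the guidance above.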