Content Pipeline

v1.0.0

4-stage content pipeline orchestrator: Research -> Ideate -> Write -> Queue. Give it a topic, it researches existing discussions, generates hook angles, writes a draft, and queues it for review.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for runesleo/runesleo-content-pipeline.

Prompt preview: Install & Setup
Install the skill "Content Pipeline" (runesleo/runesleo-content-pipeline) from ClawHub.
Skill page: https://clawhub.ai/runesleo/runesleo-content-pipeline
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install runesleo/runesleo-content-pipeline

ClawHub CLI

Package manager switcher

npx clawhub@latest install runesleo-content-pipeline
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description (4-stage content pipeline) match the instructions: research, ideate, write, and queue. The skill only requires reading/writing project files and performing web searches; no unrelated credentials, binaries, or installs are requested.
Instruction Scope
Runtime instructions explicitly read/write ./content-queue.json, create ./research/*.md files, and use web/Twitter searches. That's within scope. However, the allowed-tools list includes Bash/Grep/Glob/Agent, which gives the agent the ability to run shell commands and invoke other agent skills. The SKILL.md doesn't instruct scanning of other system files, but if the platform grants the skill those tools, it could perform file or command access beyond the stated project paths.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written to disk by an installer, which minimizes install-time risk.
Credentials
The skill declares no environment variables and does not ask for credentials. It references Twitter/X and domain-specific sources for research; this is reasonable for public web research, but if you expect authenticated API access (e.g., Twitter API) you'll need to provide credentials separately — the skill does not declare or request them.
Persistence & Privilege
always:false (default) and no persistent install behavior. The skill can be invoked autonomously per platform defaults, which is normal; it does not request permanent inclusion or modify other skills.
Assessment
This is an instruction-only content pipeline that will read and write files in your current project (./content-queue.json and ./research/...). Before enabling it:

  1. Ensure the agent is confined to the project directory or run in an isolated workspace, so any Bash/Read/Write permissions the platform grants cannot access unrelated sensitive files.
  2. Back up or review content-queue.json before the first run (the skill writes the full JSON file).
  3. If you expect the pipeline to use authenticated APIs (Twitter, publishing endpoints), note the skill doesn't request credentials; you'll need to provide them through your platform's secure credential mechanism, or accept that the skill will rely on public web search only.
  4. Review generated research files and drafts for correctness and for any accidental disclosure of private data.

If you are comfortable restricting the agent's file/command access to a project folder and reviewing outputs, the skill appears coherent with its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

automation · content · latest · pipeline · writing
283 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Content Pipeline Orchestrator

One command, from topic to review-ready draft. Research -> Ideate -> Write -> Queue

When to use vs. not

Use pipeline (original content that needs research):

  • Writing from scratch on a topic you haven't deeply explored
  • Need to survey existing discussion, find data, pick an angle
  • Example: "write about the impact of MoE on local inference" / "year-end market review"

Don't use pipeline (already have material):

  • Quoting someone else's post -> just write directly
  • Replying/commenting -> just write directly
  • Polishing an existing draft -> just edit directly
  • These scenarios waste 4-5x tokens through the pipeline with zero benefit

File locations

Configure these paths for your project:

  • ./content-queue.json -- Idea lifecycle state
  • ./research/ -- Research results (by date + slug)

Commands

Parse user input, match first hit:

  • /pipeline <topic> (run) -- Full pipeline: research -> ideate -> write -> queue
  • /pipeline url <url> (url) -- Extract from URL -> ideate -> write -> queue
  • /pipeline seed <idea> (seed) -- Add raw idea to queue as seed
  • /pipeline status (status) -- Show queue grouped by status
  • /pipeline review <id> (review) -- Show a draft for review
  • /pipeline approve <id> (approve) -- Mark as approved
  • /pipeline adapt <id> <platform> (adapt) -- Generate platform variant
  • /pipeline publish <id> (publish) -- Mark as published + timestamp
  • /pipeline clean (clean) -- Archive items published 30+ days ago
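The "match first hit" rule above can be sketched as an ordered pattern table. This is an illustrative Python dispatcher, not part of the skill itself; the bare-topic `run` form is checked last so it acts as the catch-all, and `parse` is a name chosen here.

```python
import re

# Ordered patterns: first match wins. The bare-topic "run" form must
# come last, since it would otherwise swallow every other command.
COMMANDS = [
    (re.compile(r"^/pipeline\s+url\s+(\S+)"), "url"),
    (re.compile(r"^/pipeline\s+seed\s+(.+)"), "seed"),
    (re.compile(r"^/pipeline\s+status$"), "status"),
    (re.compile(r"^/pipeline\s+review\s+(\d+)"), "review"),
    (re.compile(r"^/pipeline\s+approve\s+(\d+)"), "approve"),
    (re.compile(r"^/pipeline\s+adapt\s+(\d+)\s+(\S+)"), "adapt"),
    (re.compile(r"^/pipeline\s+publish\s+(\d+)"), "publish"),
    (re.compile(r"^/pipeline\s+clean$"), "clean"),
    (re.compile(r"^/pipeline\s+(.+)"), "run"),  # bare topic: catch-all
]

def parse(user_input: str):
    """Return (command, captured args) for the first matching pattern."""
    for pattern, command in COMMANDS:
        m = pattern.match(user_input.strip())
        if m:
            return command, m.groups()
    return None, ()
```

For example, `parse("/pipeline seed cool idea")` yields `("seed", ("cool idea",))`, while any unprefixed input falls through to `(None, ())`.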

Queue data model

File: ./content-queue.json

{
  "ideas": [
    {
      "id": 1,
      "topic": "AI Agent end-to-end automation",
      "status": "drafted",
      "platform": "twitter",
      "created": "2026-03-03T15:00:00Z",
      "updated": "2026-03-03T15:05:00Z",
      "research_file": "research/20260303-ai-agent-automation.md",
      "hook_angle": "Builder perspective: Writing is easy, Research is the bottleneck",
      "draft": "This person built a full...",
      "variants": {},
      "source_url": null,
      "feedback": [],
      "published": null
    }
  ],
  "next_id": 2
}

Status flow: seed -> researched -> drafted -> approved -> published -> archived

Queue read/write rules

  1. Read: Read ./content-queue.json
  2. Write: Write back complete JSON (single-user, no concurrency issue)
  3. ID assignment: Use next_id, increment after write
  4. Timestamps: ISO 8601 with timezone

Command details

/pipeline <topic> -- Full Pipeline

Input: topic (keywords or short phrase)

Stage 1: Research

  1. Search for existing discussion on the topic using available search tools:
    • Twitter/X search for relevant posts and threads
    • Web search for articles and data
    • Any domain-specific sources you have access to
  2. Compile findings into a research file:
    ./research/YYYYMMDD-{slug}.md
    
    slug = topic keywords, lowercase with hyphens, max 30 chars
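The slug rule can be written down directly. A minimal sketch, assuming ASCII topics; `make_slug` and `research_path` are illustrative names, not part of the skill.

```python
import re
from datetime import date

def make_slug(topic: str, max_len: int = 30) -> str:
    """Topic keywords, lowercase with hyphens, capped at max_len chars."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return slug[:max_len].rstrip("-")

def research_path(topic: str, day: date) -> str:
    """./research/YYYYMMDD-{slug}.md, per the convention above."""
    return f"./research/{day.strftime('%Y%m%d')}-{make_slug(topic)}.md"
```

For example, `research_path("MoE Inference", date(2026, 3, 3))` gives `./research/20260303-moe-inference.md`.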

Research file format:

# Research: {topic}
**Date**: YYYY-MM-DD
**Sources**: [list search methods used]

## Key findings
- [Finding 1 + source attribution]
- [Finding 2 + data/numbers]
- [Finding 3 + opposing viewpoint]

## Notable posts/articles
1. @user1 (N likes): "Core point summary"
2. @user2 (N likes): "Core point summary"

## Data points
- [Specific numbers, comparisons, statistics]

## Opposing viewpoints
- [Contrarian takes, if any]

## Source links
- [List of original URLs]

Stage 2: Ideate

  1. Read the research file
  2. Generate 3 hook angles based on the research:

Angle generation prompt (adapt for your LLM of choice):

You are a content strategist. Based on the following research, generate 3 hook angles for a post.

Research:
{research file content}

Requirements:
1. Each angle includes:
   - Hook type (contrast / counterintuitive / data-driven / story / question)
   - Core thesis (one sentence)
   - Key supporting points (2-3)
   - Estimated virality score (1-5)
2. Match the creator's voice and domain expertise
3. Avoid: AI cliches, marketing speak, listicle format

Output as JSON array:
[{"type": "contrast", "thesis": "...", "supports": ["...", "..."], "score": 4}, ...]
  3. Select the highest-scored angle
  4. If multiple angles tie, present options for the user to choose
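The selection step can be sketched over the JSON array shape shown above. `pick_angle` is an illustrative helper, not part of the skill: it returns a clear winner, or the tied candidates for the user to choose from.

```python
def pick_angle(angles: list[dict]):
    """Select the highest-scored angle; surface ties for user choice.

    `angles` follows the JSON shape from the ideation prompt:
    [{"type": ..., "thesis": ..., "supports": [...], "score": 1-5}, ...]
    Returns (winner, []) on a unique maximum, or (None, tied) on a tie.
    """
    best = max(a["score"] for a in angles)
    top = [a for a in angles if a["score"] == best]
    if len(top) == 1:
        return top[0], []
    return None, top
```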

Stage 3: Write

  1. Write the draft using the selected hook angle + research data points
  2. Content format routing:
    • <= 280 chars -> short post (tweet)
    • 281-2000 chars -> long post (thread)
    • > 2000 chars -> article

  3. Apply your preferred writing style/voice (integrate with a style skill if you have one)
  4. Verify all claims have source attribution from the research
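The length-based routing in step 2 reduces to two thresholds. A minimal sketch; `route_format` is a name chosen here, and character counts are plain `len()`, ignoring any platform-specific counting rules.

```python
def route_format(text: str) -> str:
    """Map draft length to the output format from the routing table."""
    n = len(text)
    if n <= 280:
        return "short"   # single tweet
    if n <= 2000:
        return "long"    # thread
    return "article"
```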

Stage 4: Queue

  1. Read content-queue.json
  2. Create new entry:
    • status: "drafted"
    • platform: target platform
    • research_file: relative path
    • hook_angle: selected angle description
    • draft: written text
  3. Write back content-queue.json
  4. Output confirmation:
    Pipeline complete -- queued #<id>
    Topic: <topic>
    Hook: <angle summary>
    Draft: <first 80 chars>...
    Format: short / long / article
    Use /pipeline review <id> to see full content
    

/pipeline url <url> -- From URL input

  1. Fetch the URL content using available tools
  2. Extract core arguments and data points
  3. Skip Stage 1 (use extracted content as research)
  4. Continue to Stage 2 (ideate) -> Stage 3 (write) -> Stage 4 (queue)
  5. Record source_url in the entry

/pipeline seed <idea> -- Add raw seed

  1. Create queue entry:
    • status: "seed"
    • topic: the idea text
    • draft: null (seeds have no draft yet)
  2. Output: Seed added to queue #<id>

Seeds are raw ideas waiting to be developed. Run /pipeline <topic> later to expand a seed through the full pipeline.


/pipeline status -- Queue status

Read content-queue.json, output grouped by status:

Content Pipeline Status

Seed (N):
  #3 "Multi-agent orchestration" -- 3/3 15:00

Drafted (N):
  #1 "AI Agent automation" -- 3/3 15:05
  #2 "Market arbitrage math" -- 3/3 16:20

Approved (N):
  #5 "MCP practical experience" -- 3/2 20:00

Published (N):
  #4 "Three-layer scraping approach" -- 3/1

Total: N items | Pending: seed(N) + drafted(N)

Show only non-archived items. If over 20 items, show most recent 20 + total count.
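The grouping rule above can be sketched as follows. This is illustrative: it assumes entries shaped like the queue data model, filters out archived items, and orders each group most-recent-first by the `updated` timestamp (ISO 8601 strings sort chronologically).

```python
from collections import defaultdict

# Display order follows the status flow, minus "archived" (hidden)
STATUS_ORDER = ["seed", "researched", "drafted", "approved", "published"]

def group_status(ideas: list[dict]) -> dict:
    """Group non-archived queue entries by status, most recent first."""
    groups = defaultdict(list)
    for idea in ideas:
        if idea["status"] != "archived":
            groups[idea["status"]].append(idea)
    for items in groups.values():
        items.sort(key=lambda i: i["updated"], reverse=True)
    return {s: groups[s] for s in STATUS_ORDER if groups[s]}
```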


/pipeline review <id> -- Review

  1. Find the entry in queue
  2. Display full info:
Review #<id>

Topic: <topic>
Status: <status>
Hook: <hook_angle>
Created: <created>

--- Draft ---
<full draft text>

--- Variants ---
[list any platform variants]

--- Research ---
File: <research_file>
[first 5 key findings if research file exists]

Actions:
  /pipeline approve <id> -- approve for publishing
  /pipeline adapt <id> <platform> -- generate platform variant

/pipeline approve <id> -- Approve

  1. Change status to "approved"
  2. Update updated timestamp
  3. Output: #<id> approved -- ready to publish

/pipeline adapt <id> <platform> -- Multi-platform adaptation

Adapt the draft for a different platform:

  1. Read the entry's draft
  2. Rewrite for the target platform's conventions:
    • Different character limits
    • Different audience expectations
    • Different formatting norms
  3. Store in variants.<platform> field
  4. Output: <platform> variant generated -- /pipeline review <id> to see

/pipeline publish <id> -- Publish marker

  1. Change status to "published"
  2. Record published timestamp
  3. Output: #<id> marked as published

/pipeline clean -- Archive cleanup

  1. Scan all published entries
  2. Archive entries older than 30 days
  3. Output: Archived N old entries
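The 30-day cutoff can be sketched as a single pass over the queue. Illustrative only; it assumes `published` holds an ISO 8601 timestamp with a timezone offset (per the queue rules) and mutates entries in place.

```python
from datetime import datetime, timedelta, timezone

def clean(queue: dict, days: int = 30) -> int:
    """Archive entries published more than `days` ago; return the count."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    archived = 0
    for idea in queue["ideas"]:
        if idea["status"] == "published" and idea.get("published"):
            when = datetime.fromisoformat(idea["published"])
            if when < cutoff:
                idea["status"] = "archived"
                archived += 1
    return archived
```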

Design principles

  • Research and Ideate stages are platform-agnostic -- only the Write stage adapts for platform
  • One research effort can produce content for multiple platforms ("one fish, many meals")
  • Drafts should be source-verified before entering the queue -- no unsourced claims
  • Seeds are cheap to capture, expensive to develop -- capture freely, develop selectively
  • The pipeline is a framework, not a straitjacket -- skip stages when you already have what you need
