Preflight

v1.0.0

Pre-publish audience reaction check. Run any content (tweet, launch copy, pricing page, announcement, blog post) through diverse AI personas before publishing.

by Kevin Bolander (@kbo4sho)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kbo4sho/preflight.

Prompt Preview: Install & Setup
Install the skill "Preflight" (kbo4sho/preflight) from ClawHub.
Skill page: https://clawhub.ai/kbo4sho/preflight
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install preflight

ClawHub CLI


npx clawhub@latest install preflight
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (pre-publish persona checks) matches the included assets: SKILL.md, a personas library, and a Python runner that sends the content to an LLM and aggregates persona responses. The required tooling (the Python openai client) is proportional to the task.
Instruction Scope
SKILL.md instructs the agent to read a project personas file if present and to run quick checks when used autonomously. The script performs only the described actions (send content to an LLM, parse responses). However, the script issues HTTP requests to a configurable base_url (default http://localhost:11434/v1). If that base_url is set to a remote/untrusted host, published content could be sent off-host — so confirm the endpoint before enabling autonomous or pipeline runs.
Install Mechanism
There is no install spec (instruction-only skill). The included Python script depends on the openai Python package; this is reasonable and low-risk compared to remote downloads or arbitrary installers.
Credentials
The skill declares no required environment variables or credentials. The script hardcodes api_key='ollama' when constructing the OpenAI client (a placeholder value) and does not read secrets from the environment. This is unusual but not evidence of credential exfiltration; nonetheless verify client configuration before sending sensitive content to any external endpoint.
Persistence & Privilege
The skill does not request 'always' presence and uses default autonomous-invocation behavior. It does not attempt to modify other skills or system-wide settings. Normal privileges for a tool that can be run in pipelines.
Assessment

This skill appears to do what it says: run content through persona-based LLM checks. Before installing or enabling autonomous runs:

  1. Confirm the model endpoint (base_url) is a trusted host. The script defaults to localhost but is configurable; pointing it at a remote server would transmit your content off-host.
  2. If you will test sensitive or private copy, run the tool locally (keep base_url as localhost) or inspect logs to ensure there are no unintended outbound endpoints.
  3. Note the script requires the Python openai package; install it only from trusted sources.
  4. If you plan to enable automated/cron/pipeline invocation, make sure you understand when and what content will be auto-submitted to the LLM service.

Finally, review the full script on disk (the snippet provided for this review appears truncated) to confirm there are no hidden remote endpoints or telemetry before running in production.
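The endpoint check in point (1) is easy to script before enabling autonomous runs. This is a minimal sketch; the helper name is ours, not part of the skill:

```python
from urllib.parse import urlparse

def endpoint_is_local(base_url: str) -> bool:
    """Return True if the configured LLM endpoint stays on this host."""
    host = urlparse(base_url).hostname
    return host in ("localhost", "127.0.0.1", "::1")

# The script's documented default keeps content on-host;
# anything else would transmit your drafts off-host.
endpoint_is_local("http://localhost:11434/v1")   # local, safe default
endpoint_is_local("https://api.example.com/v1")  # remote: review before use
```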

Like a lobster shell, security has layers — review code before you run it.

Latest version: vk97fz9rf19kgz1zypnh04eh3wn83d0th
135 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0
MIT-0

Preflight

Pre-publish content through simulated audience personas. Get a verdict before you ship.

Workflow

Given content the user wants to publish, run it through audience personas and return a verdict.

1. Load Personas

Check for preflight-personas.md in the project root. If it exists, use those personas. Otherwise use the defaults in references/personas.md.

For quick checks, use 4 personas: The Scroller, The Skeptic, The Ready Buyer, The Amplifier. For thorough checks, use all 8.
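The lookup order above can be sketched roughly as follows; function and argument names are ours, and the actual runner may organize this differently:

```python
from pathlib import Path

def load_personas(project_root: Path, skill_dir: Path) -> str:
    """Prefer a project-specific personas file; fall back to the skill defaults."""
    project_file = project_root / "preflight-personas.md"
    if project_file.exists():
        return project_file.read_text()
    return (skill_dir / "references" / "personas.md").read_text()

# Quick checks use this subset; thorough checks use all 8 personas.
QUICK_PERSONAS = ["The Scroller", "The Skeptic", "The Ready Buyer", "The Amplifier"]
```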

2. Evaluate

For each persona, adopt that persona fully and evaluate the content by answering:

  1. FIRST REACTION (1-2 sentences): Gut reaction in the first 3 seconds
  2. WOULD YOU ENGAGE? (yes/no + why): Would you like, comment, click, or reply?
  3. WOULD YOU SHARE? (yes/no + why): Would you send this to someone or repost it?
  4. ONE REWRITE (1-2 sentences): One change to make this work better for this persona

Be blunt, specific, and honest. No hedging. Stay in character.
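One way to phrase the four questions as a per-persona prompt. This template is hypothetical, not the skill's actual wording:

```python
# Hypothetical per-persona evaluation prompt built from the four questions above.
EVAL_PROMPT = """You are {persona}. Evaluate the content below, staying fully in character.

  1. FIRST REACTION (1-2 sentences): gut reaction in the first 3 seconds
  2. WOULD YOU ENGAGE? (yes/no + why): would you like, comment, click, or reply?
  3. WOULD YOU SHARE? (yes/no + why): would you send this to someone or repost it?
  4. ONE REWRITE (1-2 sentences): one change to make this work better for you

Be blunt, specific, and honest. No hedging.

CONTENT:
{content}
"""

prompt = EVAL_PROMPT.format(persona="The Skeptic", content="Our launch tweet draft")
```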

3. Score

Count engagement and share signals across all personas:

  • Engage rate: % of personas who would engage
  • Share rate: % of personas who would share
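Over the per-persona yes/no answers, the two rates are simple percentages. A sketch with assumed field names:

```python
def score(responses: list[dict]) -> tuple[float, float]:
    """Return (engage_rate, share_rate) as percentages across all personas."""
    n = len(responses)
    engage_rate = 100 * sum(r["engage"] for r in responses) / n
    share_rate = 100 * sum(r["share"] for r in responses) / n
    return engage_rate, share_rate

# Example: 4 personas, 3 would engage, 2 would share -> (75.0, 50.0)
```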

4. Verdict

  • 🟢 SHIP IT — 50%+ would share. Publish as-is.
  • 🟡 REVISE — engaging but not shareable. Read the rewrites, apply the best one, optionally re-run.
  • 🟠 RETHINK — mixed signals. The message itself may be wrong, not just the wording.
  • 🔴 KILL IT — not landing. Don't publish. Rethink the approach.
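Only the 50% share threshold is stated explicitly above; a verdict function needs cut-offs for the other tiers, so the ones below are illustrative assumptions:

```python
def verdict(engage_rate: float, share_rate: float) -> str:
    if share_rate >= 50:
        return "🟢 SHIP IT"   # stated rule: 50%+ would share
    if engage_rate >= 50:
        return "🟡 REVISE"    # engaging but not shareable (assumed cut-off)
    if engage_rate > 0 or share_rate > 0:
        return "🟠 RETHINK"   # mixed signals (assumed cut-off)
    return "🔴 KILL IT"       # not landing at all
```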

5. Output

Present results as:

PREFLIGHT: [verdict]
Engage: X/Y personas | Share: X/Y personas

[For each persona, one line summary of reaction + their rewrite suggestion]

If patterns emerge across personas (e.g., "3 of 4 want to see an image"), call that out as the top actionable insight.

Keep output brief. The user wants a decision, not an essay.

Customization

See references/personas.md for the default persona library and instructions for creating project-specific personas.

Integration

This skill works as a step in any publishing workflow. When used autonomously (heartbeats, cron, content pipelines), run the quick check (4 personas) by default. Use the full 8 when the user explicitly asks for a thorough preflight.
