Preflight

v1.0.0

Pre-publish audience reaction check. Run any content (tweet, launch copy, pricing page, announcement, blog post) through diverse AI personas before publishing.

by Kevin Bolander (@kbo4sho)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (pre-publish persona checks) match the included assets: SKILL.md, a personas library, and a Python runner that sends the content to an LLM and aggregates persona responses. The required tools (the Python openai client) are proportional to the task.
Instruction Scope
SKILL.md instructs the agent to read a project personas file if present and to run quick checks when used autonomously. The script performs only the described actions (send content to an LLM, parse responses). However, the script issues HTTP requests to a configurable base_url (default http://localhost:11434/v1). If that base_url is set to a remote/untrusted host, published content could be sent off-host — so confirm the endpoint before enabling autonomous or pipeline runs.
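The off-host risk hinges entirely on where base_url points. A minimal pre-flight check (a hypothetical helper, not part of the skill itself) can confirm the endpoint targets the local machine before any autonomous or pipeline run:

```python
from urllib.parse import urlparse

# Hosts considered "on-box"; extend this set if you run Ollama behind a local alias.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def endpoint_is_local(base_url: str) -> bool:
    """Return True if the configured base_url targets the local machine."""
    host = urlparse(base_url).hostname or ""
    return host in LOCAL_HOSTS

# The skill's documented default stays on-host:
endpoint_is_local("http://localhost:11434/v1")   # True
# A remote override would ship your draft content off-host:
endpoint_is_local("https://api.example.com/v1")  # False
```

Running a check like this at startup, and refusing to proceed autonomously when it fails, confines the skill to the behavior described above.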
Install Mechanism
There is no install spec (instruction-only skill). The included Python script depends on the openai Python package; this is reasonable and low-risk compared to remote downloads or arbitrary installers.
Credentials
The skill declares no required environment variables or credentials. The script hardcodes api_key='ollama' when constructing the OpenAI client (a placeholder value) and does not read secrets from the environment. This is unusual but not evidence of credential exfiltration; nonetheless verify client configuration before sending sensitive content to any external endpoint.
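If you adapt the script for a remote endpoint, a safer configuration pattern reads both settings from the environment instead of hardcoding them. This is a sketch only; the PREFLIGHT_* variable names are illustrative and not part of the skill:

```python
import os

# Ollama ignores the API key, so the hardcoded 'ollama' placeholder is harmless
# locally; a real remote endpoint needs a genuine key, which should come from
# the environment rather than from source code.
base_url = os.environ.get("PREFLIGHT_BASE_URL", "http://localhost:11434/v1")
api_key = os.environ.get("PREFLIGHT_API_KEY", "ollama")

# The script can then build its client from these values, e.g. with the
# openai-python constructor: OpenAI(base_url=base_url, api_key=api_key)
```

With this pattern, the defaults reproduce the script's current local behavior while keeping secrets out of the code.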
Persistence & Privilege
The skill does not request 'always' presence and uses default autonomous-invocation behavior. It does not attempt to modify other skills or system-wide settings. Normal privileges for a tool that can be run in pipelines.
Assessment
This skill appears to do what it says: run content through persona-based LLM checks. Before installing or enabling autonomous runs:

1. Confirm the model endpoint (base_url) is a trusted host. The script defaults to localhost but is configurable; pointing it at a remote server would transmit your content off-host.
2. If you will test sensitive or private copy, run the tool locally (keep base_url as localhost) or inspect logs to ensure no unintended outbound endpoints.
3. The script requires the Python openai package; install it only from trusted sources.
4. If you plan to enable automated/cron/pipeline invocation, make sure you understand when and what content will be auto-submitted to the LLM service.

Finally, review the full script on disk (the snippet provided for this review appears truncated) to confirm there are no hidden remote endpoints or telemetry before running in production.
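The final recommendation, checking for hidden remote endpoints, can be partly automated. The following is a rough sketch (a hypothetical helper, not a substitute for reading the code) that scans the script's source text for http(s) URLs whose host is not local:

```python
import re
from urllib.parse import urlparse

LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def find_remote_endpoints(source: str) -> list[str]:
    """Return http(s) URLs found in the source text whose host is not local."""
    urls = re.findall(r"https?://[^\s'\"]+", source)
    return [u for u in urls if urlparse(u).hostname not in LOCAL_HOSTS]

# The default client line described in this review flags nothing:
sample = "client = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')"
find_remote_endpoints(sample)  # []
```

A non-empty result is not proof of exfiltration, but each returned URL deserves a manual look before the script runs in a pipeline.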


Latest version: vk97fz9rf19kgz1zypnh04eh3wn83d0th

