X Voice Match

v1.0.0

Analyze a Twitter/X account's posting style and generate authentic posts that match their voice. Use when the user wants to create X posts that sound like them, analyze their posting patterns, or maintain consistent voice across posts. Works with Bird CLI integration.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description align with the code and SKILL.md: the scripts parse Bird CLI output, build a voice-profile JSON, and construct a detailed LLM prompt for generating posts. Nothing in the files requests unrelated cloud credentials or system-wide access. One caveat: the skill assumes the agent (or operator) will supply LLM access at generation time, but no LLM credentials or explicit integration mechanism are declared in requires.env; this is plausible for agent-side model invocation, but it is not documented in SKILL.md.
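For context, voice-profile builders of this kind typically reduce raw tweets to summary statistics plus verbatim exemplars. The sketch below is illustrative only; the function name and JSON field names are assumptions, not the skill's actual schema:

```python
import json
import re
from collections import Counter

def build_voice_profile(tweets: list[str]) -> dict:
    """Illustrative sketch: reduce raw tweets to a style summary.
    Field names are hypothetical, not taken from analyze_voice.py."""
    words = [w.lower() for t in tweets for w in re.findall(r"[A-Za-z']+", t)]
    return {
        "tweet_count": len(tweets),
        "avg_length": sum(len(t) for t in tweets) / max(len(tweets), 1),
        "top_words": [w for w, _ in Counter(words).most_common(10)],
        # Verbatim exemplars: this is exactly the data that later ends up
        # inside the generation prompt (the leakage concern noted below).
        "sample_tweets": tweets[:5],
    }

profile = build_voice_profile(["Shipping it today!", "hot take: tabs > spaces"])
print(json.dumps(profile, indent=2))
```

Note how the profile necessarily carries raw tweet text: statistics alone rarely capture voice, which is why the exfiltration question matters.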
Instruction Scope
The instructions direct the agent to fetch tweets via /data/workspace/bird.sh or read them from a local file, then embed actual sample tweets and signature phrases directly in a generation prompt. That prompt is intended to be passed to an LLM (the generate script prints it for the agent to use). This means scraped tweet text and identifying information will be packaged into an LLM request, which could be sent to external providers and thus exfiltrated beyond the local environment. Separately, the manifest shows the end of analyze_voice.py truncated in the listing, which may indicate delivery corruption or concealed content; that should be verified before use.
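The exfiltration concern can be checked mechanically: before a prompt leaves the machine, scan it for verbatim tweet text. A minimal audit sketch, assuming you already have the outgoing prompt string and the fetched tweets in hand (the function name is hypothetical):

```python
def find_verbatim_tweets(prompt: str, tweets: list[str], min_len: int = 15) -> list[str]:
    """Return tweets whose full text appears verbatim in an outgoing prompt.
    Very short tweets are skipped to avoid false positives on common phrases."""
    return [t for t in tweets if len(t) >= min_len and t in prompt]

tweets = ["Just shipped the new parser, feels good", "gm"]
prompt = ("Write a post in this voice. Examples:\n"
          "- Just shipped the new parser, feels good")
leaks = find_verbatim_tweets(prompt, tweets)
print(leaks)  # the first tweet is embedded verbatim
```

A non-empty result means raw scraped text would travel with the request to whatever LLM endpoint the agent is configured to use.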
Install Mechanism
No install spec is present (instruction-only with included scripts). That minimizes installer risk — nothing is downloaded or executed from arbitrary URLs by the skill itself.
Credentials
The skill declares no required environment variables or credentials, and the scripts do not read secrets from the environment. However, the workflow depends on an LLM for generation (the scripts print a prompt for the agent/LLM to consume); if the user or agent configuration routes those prompts to an external API, credentials outside the skill's declared scope will be exercised. The lack of a declared LLM integration, or of any guidance about where prompts are sent, is a transparency gap.
Persistence & Privilege
The manifest sets always: false, and there is no code that attempts to persistently modify the agent, system configuration, or other skills. The scripts write profile files to the current directory or /tmp, which is expected and proportionate for this task.
What to consider before installing
This skill does what it says: it scrapes tweets (via the Bird CLI), builds a voice profile, and constructs an LLM prompt containing sample tweets and signature phrases to generate posts that mimic an account. Before installing or running it, consider the following:

- Impersonation & policy risk: generating posts that mimic another account could violate platform rules or local law. Only use it on accounts you own or have explicit permission to emulate.
- Data leakage risk: the generation prompt includes actual tweet text and identifying information. If your agent sends that prompt to a remote LLM (OpenAI, Anthropic, etc.), those tweets will be transmitted off your system; verify where prompts are sent and which API keys or endpoints are used.
- Trust in the Bird CLI: the scripts call /data/workspace/bird.sh. Ensure that file is what you expect (not replaced by a malicious binary) and that its output format matches the parsers' assumptions.
- Missing or incomplete files: the manifest shows the analyze_voice.py listing truncated in this package; verify the local files are complete and match the source you expect before running.
- Operational hygiene: run the tool in a sandbox or test environment first, inspect generated prompts, and confirm no unexpected network calls occur.

For safer operation, modify generate_post.py to redact or summarize sample tweets instead of including verbatim text in prompts, and add explicit logging of which external endpoints receive generation requests.
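The redaction suggestion above can be sketched as a small transform applied to each tweet before the prompt is assembled. This is a hedged illustration under the assumption that handles, URLs, and hashtags carry most of the identifying detail; the helper name is hypothetical, not taken from generate_post.py:

```python
import re

def redact_tweet(text: str) -> str:
    """Replace handles, URLs, and hashtags with placeholders so style survives
    but identifying specifics do not. A sketch, not the skill's actual code."""
    text = re.sub(r"https?://\S+", "<URL>", text)   # strip links first, before # and @
    text = re.sub(r"@\w+", "<HANDLE>", text)
    text = re.sub(r"#\w+", "<TAG>", text)
    return text

print(redact_tweet("Huge thanks @alice for the RT! https://t.co/abc #launch"))
# -> "Huge thanks <HANDLE> for the RT! <URL> <TAG>"
```

Redacted exemplars still convey rhythm, punctuation habits, and tone, while keeping third-party handles and trackable links out of any prompt that leaves the machine.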

Like a lobster shell, security has layers — review code before you run it.

latest: vk97agfqtv9yeb4mjad5k83xpbn80aw0v

