X Voice Match

v1.0.0

Analyze a Twitter/X account's posting style and generate authentic posts that match their voice. Use when the user wants to create X posts that sound like them, analyze their posting patterns, or maintain consistent voice across posts. Works with Bird CLI integration.

Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description align with the code and SKILL.md: the scripts parse Bird CLI output, build a voice profile JSON, and create a detailed LLM prompt to generate posts. Nothing in the files asks for unrelated cloud credentials or system-wide access. One note: the skill assumes the agent (or operator) will provide LLM access when generating posts, but no LLM credentials or explicit integration mechanism are declared in requires.env — this is plausible (agent model invocation), but it is not documented in SKILL.md.
Instruction Scope
The instructions and scripts explicitly instruct fetching tweets via /data/workspace/bird.sh or reading a local file, then include actual sample tweets and signature phrases directly into a generation prompt. That prompt is intended to be passed to an LLM (the generate script prints it for the agent to use). This means scraped tweet text and identifying information will be packaged into an LLM request — which could be sent to external providers and thus exfiltrated beyond the local environment. Also, the manifest shows the end of analyze_voice.py is truncated in the listing, which may indicate delivery corruption or concealed content; that should be verified.
Install Mechanism
No install spec is present (instruction-only with included scripts). That minimizes installer risk — nothing is downloaded or executed from arbitrary URLs by the skill itself.
Credentials
The skill declares no required environment variables or credentials, and the scripts do not read secrets from env. However, the workflow depends on an LLM for generation (the scripts print a prompt for the agent/LLM); if the user or agent config routes those prompts to an external API, API keys will be used outside the skill's manifest. The lack of declared LLM integration or guidance about where prompts go is a transparency gap.
Persistence & Privilege
always:false and there is no code that attempts to persistently modify agent/system configuration or other skills. The scripts write profile files to the current directory or /tmp, which is expected and proportionate for this task.
What to consider before installing
This skill does what it says: it scrapes tweets (via the Bird CLI), builds a voice profile, and constructs an LLM prompt that includes sample tweets and signature phrases to generate posts that mimic an account. Before installing or running it, consider the following:

  • Impersonation & policy risk: Generating posts that mimic another account could violate platform rules or local law. Only use it on accounts you own or have explicit permission to emulate.
  • Data leakage risk: The generation prompt includes actual tweet text and identifying info. If your agent sends that prompt to a remote LLM (OpenAI, Anthropic, etc.), those tweets will be transmitted off your system; verify where prompts are sent and which API keys or endpoints are used.
  • Trust in the Bird CLI: The scripts call /data/workspace/bird.sh. Ensure that file is what you expect (not replaced by a malicious binary) and that its output format matches the parser assumptions.
  • Missing/incomplete files: The manifest shows the analyze_voice.py output truncated in this package listing; verify the local files are complete and match the source you expect before running.
  • Operational hygiene: Run the tool in a sandbox or test environment first, inspect generated prompts, and confirm no unexpected network calls occur.

For safer operation, modify generate_post.py to redact or summarize sample tweets instead of including verbatim text in prompts, and add explicit logging of which external endpoints receive generation requests. If you need help auditing where prompts would be sent in your agent (which LLM endpoint) or hardening the scripts to avoid sending raw tweet text externally, I can suggest concrete code changes.
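The redaction suggested above could look like the following minimal sketch. `redact_tweets` is a hypothetical helper, not a function that exists in generate_post.py; it assumes sample tweets are held as a list of plain strings.

```python
import re

def redact_tweets(tweets, max_len=60):
    """Summarize sample tweets before prompt inclusion: mask @handles and
    URLs, then truncate, so raw identifying text never leaves the machine."""
    redacted = []
    for t in tweets:
        t = re.sub(r"@\w+", "@user", t)           # mask mentions
        t = re.sub(r"https?://\S+", "<link>", t)  # mask URLs
        redacted.append(t[:max_len])
    return redacted

print(redact_tweets(["hey @alice check https://x.com/abc lmao"]))
# → ['hey @user check <link> lmao']
```

Summarizing instead of truncating would preserve more style signal, but even this simple masking removes the most identifying tokens from outbound prompts.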


latest: vk97agfqtv9yeb4mjad5k83xpbn80aw0v
1.8k downloads · 1 star · 1 version · Updated 1mo ago
License: MIT-0

X Voice Match

Analyze Twitter/X accounts to extract posting patterns and generate authentic content that matches the account owner's unique voice.

Quick Start

Step 1: Analyze the account

cd /data/workspace/skills/x-voice-match
python3 scripts/analyze_voice.py @username [--tweets 50] [--output profile.json]

Step 2: Generate posts

python3 scripts/generate_post.py --profile profile.json --topic "your topic" [--count 3]

Or use the all-in-one approach:

python3 scripts/generate_post.py --account @username --topic "AI agents taking over" --count 5

What It Analyzes

The skill extracts:

  • Length patterns: Tweet character counts, thread usage, one-liner vs paragraph style
  • Tone markers: Humor style, sarcasm, self-deprecation, edginess level
  • Topics: Crypto, AI, tech, memes, personal life, current events
  • Engagement patterns: QT vs original, reaction tweets, conversation starters
  • Language patterns: Specific phrases, slang, emoji usage, punctuation style
  • Content types: Observations, hot takes, memes, threads, questions, personal stories
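As a rough illustration of how two of these patterns could be computed, here is a sketch with illustrative function names (not the skill's actual API), assuming tweets arrive as plain strings:

```python
from collections import Counter

def length_distribution(tweets):
    """Bucket tweets into short (<80), medium (80-160), long (>160) chars."""
    buckets = Counter()
    for t in tweets:
        n = len(t)
        buckets["short" if n < 80 else "medium" if n <= 160 else "long"] += 1
    total = len(tweets) or 1
    return {k: round(v / total, 2) for k, v in buckets.items()}

def signature_phrases(tweets, candidates=("lmao", "fr", "based")):
    """Keep candidate phrases appearing in at least 10% of tweets."""
    total = len(tweets) or 1
    return [p for p in candidates
            if sum(p in t.lower().split() for t in tweets) / total >= 0.1]
```

The bucket boundaries and the 10% phrase threshold here are arbitrary choices for the sketch; the real scripts may tune these differently.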

Output Format

Voice Profile (JSON)

{
  "account": "@gravyxbt_",
  "analyzed_tweets": 50,
  "patterns": {
    "avg_length": 85,
    "length_distribution": {"short": 0.6, "medium": 0.3, "long": 0.1},
    "uses_threads": false,
    "humor_style": "self-deprecating, ironic",
    "topics": ["crypto", "AI agents", "memes", "current events"],
    "engagement_type": "reactive QT heavy",
    "signature_phrases": ["lmao", "fr", "based"],
    "emoji_usage": "minimal, strategic",
    "punctuation": "lowercase, casual"
  }
}
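Before generating, the profile JSON can be loaded and sanity-checked against its top-level keys. This is a hypothetical sketch, not code from generate_post.py:

```python
import json

REQUIRED = {"account", "analyzed_tweets", "patterns"}

def load_profile(path):
    """Load a voice profile and fail fast if required keys are missing."""
    with open(path) as f:
        profile = json.load(f)
    missing = REQUIRED - profile.keys()
    if missing:
        raise ValueError(f"profile missing keys: {sorted(missing)}")
    return profile
```

Failing early on a malformed profile is cheaper than generating posts from partial data and discovering the gap later.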

Generated Posts

Returns 1-N posts with confidence scores and reasoning.

Integration with Existing Tools

Works with Bird CLI (/data/workspace/bird.sh):

# Fetch fresh tweets for analysis
./bird.sh user-tweets @gravyxbt_ -n 50 > recent_tweets.txt
python3 scripts/analyze_voice.py --input recent_tweets.txt

Post Type Templates

See references/post-types.md for common X post frameworks:

  • Observations
  • Hot takes
  • Self-deprecating humor
  • Crypto commentary
  • Reaction posts
  • Questions

Advanced Usage

Update Voice Profile

Re-analyze periodically to capture style evolution:

python3 scripts/analyze_voice.py @username --update profile.json

Generate by Post Type

python3 scripts/generate_post.py --profile profile.json --type "hot-take" --topic "crypto"

Batch Generation

python3 scripts/generate_post.py --profile profile.json --batch topics.txt --output posts.json

Workflow

  1. First time: Run full analysis on 30-50 tweets
  2. Generate posts: Provide topic, get 3-5 style-matched options
  3. User picks: Select best option or iterate with feedback
  4. Periodic updates: Re-analyze monthly or after major voice shifts

Tips

  • Minimum tweets: 30 tweets for basic analysis, 50+ for accuracy
  • Recency matters: Recent tweets reflect current style better than old ones
  • Topic relevance: Generated posts work best on topics the account normally covers
  • Confidence scores: below 70%, a post may not sound authentic; revise the topic or regenerate
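The 70% floor from the tips can be enforced automatically. Assuming each generated post carries a confidence field stored as a 0-1 fraction (a hypothetical structure, since the exact output schema is not documented here), a filter could look like:

```python
def filter_posts(posts, threshold=0.70):
    """Drop generated posts whose confidence falls below the threshold."""
    return [p for p in posts if p.get("confidence", 0) >= threshold]

posts = [
    {"text": "gm, agents are cooking", "confidence": 0.91},
    {"text": "generic take here", "confidence": 0.55},
]
print(filter_posts(posts))
# → [{'text': 'gm, agents are cooking', 'confidence': 0.91}]
```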
