Brand Voice Architect

v0.1.1

A high-precision engine for deconstructing, documenting, and synthesizing brand-specific linguistic patterns and tonal architectures. Use this skill whenever...

by David Escobar (@midnightstudioai)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for midnightstudioai/brand-voice-architect.

Prompt Preview: Install & Setup
Install the skill "Brand Voice Architect" (midnightstudioai/brand-voice-architect) from ClawHub.
Skill page: https://clawhub.ai/midnightstudioai/brand-voice-architect
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install brand-voice-architect

ClawHub CLI


npx clawhub@latest install brand-voice-architect
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included artifacts: a corpus analyzer and a prompt synthesizer. Required resources (none) align with an instruction-only text-processing skill. There are no unrelated credential, binary, or config requirements.
Instruction Scope
SKILL.md stays within the stated purpose (analyze corpus, synthesize voice prompts, produce guides). One notable behavior: generated system prompts instruct models to 'replace prohibited words with preferred equivalents silently' and to 'Never break voice for clarity alone' — this can cause automated alterations to user content and potential meaning drift. The workflow does include a human review step, which mitigates this risk if followed.
Install Mechanism
No install spec or external downloads. The skill ships two small Python scripts and a methodology doc. No network fetches, URL installs, or archive extraction are present.
Credentials
The skill requests no environment variables, credentials, or config paths. The scripts only read user-supplied corpus files or inline text; there are no hidden credential usages.
Persistence & Privilege
always:false and no special persistence or system-wide modifications are requested. The skill does not attempt to modify other skills or global agent settings.
Assessment
This skill appears coherent for brand-voice work and contains only local Python scripts that analyze text and build system prompts. Before installing/use: (1) Only feed corpora you are allowed to share — scripts read any file path you provide. (2) Expect the generated system prompts to enforce silent replacement of 'prohibited' words and strong on-brand rewriting; always perform the documented Manual Review step before deploying prompts or publishing rewritten content. (3) Do not deploy generated system prompts to production without human oversight, since they can change meaning to keep 'voice'. (4) If you are concerned about privacy or sensitive data, inspect or sandbox the two Python scripts locally; they do not call external endpoints or use credentials.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9796g88exdcwvqbkqm5xbyk5d83fmx0
182 downloads
0 stars
2 versions
Updated 1mo ago
v0.1.1
MIT-0

Brand Voice Architect (BVA)

A skill for engineering, documenting, and synthesizing brand-specific voice with quantifiable precision. Brand voice is treated as Linguistic DNA — a measurable baseline, not an aesthetic preference.


Core Workflow

Phase I: Decomposition — /analyze [corpus]

Run a linguistic audit on provided text samples:

  1. Lexical Audit — High-frequency verbs/adjectives, prohibited terms, vocabulary signature
  2. Structural Mapping — Average Sentence Length (ASL), syntactic complexity, variance
  3. Sentiment Baseline — Emotional temperature on a 0.0–1.0 scale

→ Use scripts/voice_analyzer.py to compute metrics programmatically when a corpus is provided.
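The Phase I audit can be sketched in a few lines. This is a minimal illustration only — `compute_metrics`, its output keys, and the stopword list are hypothetical; the actual voice_analyzer.py almost certainly computes more (cadence variance, sentiment temperature) and may use a different interface:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real analyzer would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"}

def compute_metrics(corpus: str, top_n: int = 5) -> dict:
    """Compute basic voice metrics: ASL, lexical density, top keywords."""
    sentences = [s for s in re.split(r"[.!?]+", corpus) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", corpus.lower())
    asl = len(words) / len(sentences) if sentences else 0.0
    content = [w for w in words if w not in STOPWORDS]
    lexical_density = len(content) / len(words) if words else 0.0
    top_keywords = [w for w, _ in Counter(content).most_common(top_n)]
    return {
        "asl": round(asl, 1),                       # Average Sentence Length
        "lexical_density": round(lexical_density, 2),
        "top_keywords": top_keywords,
    }

print(compute_metrics("Ship fast. Ship clean. Quality is the brand promise."))
```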

Phase II: Architectural Design — /synthesize [pillars]

Build the voice matrix:

  1. Pillar Definition — Establish 3 core attributes (e.g., Authoritative, Wit-driven, Technical)
  2. The Spectrum — Define "This, Not That" logic gates for each pillar
  3. Persona Encoding — Translate pillars into LLM system-level instructions

→ Use scripts/prompt_synthesizer.py to generate deployable system prompts.
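Phase II synthesis can be approximated like this. The `BrandConfig` shape and the prompt template are illustrative assumptions, not the actual interface of prompt_synthesizer.py:

```python
from dataclasses import dataclass, field

@dataclass
class BrandConfig:
    pillars: list          # e.g. ["Authoritative", "Wit-driven", "Technical"]
    this_not_that: dict    # pillar -> ("do this", "not that") logic gate
    prohibited: dict = field(default_factory=dict)  # banned word -> preferred equivalent

def synthesize_prompt(cfg: BrandConfig) -> str:
    """Encode the voice matrix as LLM system-level instructions."""
    lines = ["You are the brand's voice. Core pillars:"]
    for pillar in cfg.pillars:
        do, dont = cfg.this_not_that[pillar]
        lines.append(f"- {pillar}: write {do}, never {dont}.")
    if cfg.prohibited:
        lines.append("Silently replace prohibited words with preferred equivalents:")
        lines += [f"  {bad} -> {good}" for bad, good in cfg.prohibited.items()]
    return "\n".join(lines)

cfg = BrandConfig(
    pillars=["Authoritative"],
    this_not_that={"Authoritative": ("with confident, direct claims", "with hedging filler")},
    prohibited={"utilize": "use"},
)
print(synthesize_prompt(cfg))
```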

Phase III: Delivery

  1. Artifact Generation — Produce voice guide docs, style reference cards, prompt templates
  2. Manual Review — /review [output] provides a qualitative checklist to assess whether output aligns with the established voice pillars (Claude-assisted, not script-automated)
  3. Platform Pivot — /pivot [context] adapts voice for specific channels while preserving DNA, using generate_platform_pivot() from prompt_synthesizer.py

Note on prohibited words: The generated system prompt instructs the LLM to replace prohibited words with preferred equivalents. This is a prompt-level instruction — enforcement depends on the model following the system prompt, not on automated script-level filtering.
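Because enforcement is prompt-level only, a simple post-generation check helps catch prohibited words that survive. This checker is not part of the skill — it is a sketch of the kind of safeguard the Manual Review step could use:

```python
import re

def find_prohibited(text: str, prohibited: dict[str, str]) -> list[tuple[str, str]]:
    """Flag prohibited words that survived generation, with their preferred equivalents."""
    hits = []
    for bad, good in prohibited.items():
        # Whole-word, case-insensitive match so "utilize" is caught but "utilized" in
        # a longer token boundary situation is handled by \b word boundaries.
        if re.search(rf"\b{re.escape(bad)}\b", text, flags=re.IGNORECASE):
            hits.append((bad, good))
    return hits

lexicon = {"utilize": "use", "synergy": "collaboration"}
print(find_prohibited("We utilize our synergy daily.", lexicon))
# -> [('utilize', 'use'), ('synergy', 'collaboration')]
```

An empty result means the model followed the lexicon instruction; any hits should go back through manual review rather than being auto-replaced, to avoid the meaning drift noted above.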


The 4-Pillar Framework

Map every brand voice across four axes to define its Safe Operating Area:

Axis      | Poles
Character | Friendly ←→ Authoritative
Tone      | Humorous ←→ Serious
Language  | Simple ←→ Complex
Purpose   | Helpful ←→ Entertaining

See references/methodology.md for full framework details including Cadence Analysis and Semantic Salience scoring.
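One way to encode the four axes numerically is as positions on a 0.0–1.0 scale with a tolerance band defining the Safe Operating Area. The axis names come from the framework above; the scaling, `VoiceProfile` class, and tolerance check are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    """Position on each axis: 0.0 = left pole, 1.0 = right pole."""
    character: float  # Friendly (0.0) <-> Authoritative (1.0)
    tone: float       # Humorous (0.0) <-> Serious (1.0)
    language: float   # Simple (0.0) <-> Complex (1.0)
    purpose: float    # Helpful (0.0) <-> Entertaining (1.0)

    def in_safe_area(self, target: "VoiceProfile", tolerance: float = 0.15) -> bool:
        """True if this profile stays within tolerance of the target on every axis."""
        pairs = zip(vars(self).values(), vars(target).values())
        return all(abs(a - b) <= tolerance for a, b in pairs)

target = VoiceProfile(character=0.8, tone=0.7, language=0.6, purpose=0.3)
draft = VoiceProfile(character=0.75, tone=0.65, language=0.7, purpose=0.35)
print(draft.in_safe_area(target))  # True: every axis is within 0.15 of the target
```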


Mandatory Output Components

Every Brand Voice engagement must produce:

  1. Metrics Report — Lexical density %, ASL, top keywords, cadence variance
  2. Voice Matrix — 3 pillars × "This/Not That" for each
  3. System Prompt — Ready-to-deploy LLM persona encoding
  4. Platform Pivots — At minimum: formal/informal, long-form/short-form variants
  5. Prohibited/Preferred Lexicon — Concrete word lists

Quick Reference Commands

Command               | Action                                             | Implementation
/analyze [corpus]     | Linguistic audit on provided text                  | scripts/voice_analyzer.py
/synthesize [pillars] | Generate LLM system prompt from pillars            | scripts/prompt_synthesizer.py
/review [output]      | Qualitative checklist review against voice pillars | Claude-assisted (no script)
/pivot [context]      | Adapt voice for target platform/audience           | generate_platform_pivot() in prompt_synthesizer.py

Scripts

  • scripts/voice_analyzer.py — Computes lexical density, ASL, cadence variance, sentiment temperature, and top keywords from a corpus
  • scripts/prompt_synthesizer.py — Generates deployable LLM system prompts from a BrandConfig object; includes generate_platform_pivot() for channel-specific adaptations

References

  • references/methodology.md — Full technical methodology: 4-Pillar Framework, Cadence Analysis, Semantic Salience, Human-AI Collaborative Loop
