Draco Competitor Analysis

v1.0.0

Write the competitor-analysis section of research reports, strategy decks, and product/brand studies. Use when the user asks for competitor analysis (竞品分析), a competitor teardown (竞对拆解), benchmarking analysis (对标分析), benchmarking, ...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for draco-kzn/draco-competitor-analysis.

Prompt preview: Install & Setup
Install the skill "Draco Competitor Analysis" (draco-kzn/draco-competitor-analysis) from ClawHub.
Skill page: https://clawhub.ai/draco-kzn/draco-competitor-analysis
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install draco-competitor-analysis

ClawHub CLI


npx clawhub@latest install draco-competitor-analysis
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and included reference files focus solely on producing competitor-analysis sections. No unrelated binaries, credentials, or config paths are requested.
Instruction Scope
SKILL.md contains a focused workflow, output rules, and directs the agent to use the included reference markdown files. It does not instruct reading system files, environment variables, or sending data to external endpoints.
Install Mechanism
No install specification or code is present; this is instruction-only, so nothing is written to disk or downloaded during installation.
Credentials
No environment variables, credentials, or config paths are required. The declared requirements are proportional to the task.
Persistence & Privilege
The always flag is false and the skill is user-invocable. Autonomous invocation is permitted by default on the platform, but that alone is not a concern here, since the skill has no privileged access.
Assessment
This skill appears coherent and low-risk: it only provides templates and instructions and requests no credentials or installs. Before using, avoid pasting sensitive or proprietary data into prompts (the skill will analyze whatever you give it), verify sources when the analysis cites external results, and review generated recommendations for factual accuracy and business fit. If you plan to allow autonomous agent runs that call this skill, be mindful that outputs could be used without an extra human review, so set your agent policies accordingly.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97agqyep80dtyhbv54kj14e818337s1
216 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

competitor-analysis

Use this skill to turn raw competitor information into decision-useful analysis.

Core rule:

The goal of competitor analysis is not to describe what others did. It is to solve our problem by borrowing others' experience.

When to use

Trigger this skill when the task is to:

  • write a competitor-analysis section in a report or memo
  • compare products / companies / campaigns for strategic insight
  • benchmark categories, leaders, challengers, substitutes, or avoidable cases
  • analyze marketing campaigns, product positioning, product launch, or brand upgrade
  • answer "what should we learn / avoid / do next"

Do not stop at information collection. The output must end with actionable conclusions.

Required workflow

Follow this sequence:

  1. Define the problem - What exact question are we trying to answer?
  2. Define the competitor set - Why these objects, and what role does each one play?
  3. Decompose the case - Facts → motives → effects
  4. Extract the root variable - What actually drove success or failure?
  5. Convert to action - What should we borrow, avoid, or do next?

If step 5 is weak, the analysis is incomplete.

Five questions you must answer

Before finalizing, check whether the draft answers all five:

  1. What problem are we solving?
  2. Which information is actually useful for that problem?
  3. Where did the information come from?
  4. Is the analysis surface-level or root-cause level?
  5. What concrete help does the conclusion provide for this project?

Competitor types

Classify each analyzed object into one or more of these roles:

  • Core competitor: highly overlaps with us in product, target users, positioning, budget, or category
  • Benchmark competitor: stronger / larger / category-leading reference worth learning from
  • Potential competitor: smaller but strategically interesting; may have sharper tactics or better product logic
  • Substitute competitor: different category but solves the same higher-order need
  • Avoidance competitor: negative case that shows what not to do

Do not create a wide list without role labels.

Three-layer decomposition

For each important case, analyze in this order:

1) Fact layer — what happened?

Capture concrete actions only:

  • what was launched / changed / communicated
  • timing and sequence
  • channel / resource allocation
  • execution structure

2) Motive layer — why did they do it?

Explain the logic behind the actions:

  • market / company / user background
  • why this timing
  • target outcome
  • strategic path chosen
  • why this method instead of another

3) Effect layer — what happened as a result?

Look for:

  • business result
  • user cognition shift
  • engagement / conversion / adoption signals
  • success factor or failure reason
  • the decisive variable

Always push past description. Ask: what was the main contradiction / main variable?

Output rule

Every competitor-analysis section must end with three explicit buckets:

  • What to borrow
  • What to avoid
  • What we should do next

If the user asks for a report section, prefer this output shape:

Suggested section structure

  1. Problem statement
  2. Competitor map and role labels
  3. Cross-case comparison
  4. Deep dive by selected cases
  5. Root insight / decisive variables
  6. Recommendations for us
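The required workflow, role labels, three-layer decomposition, and three output buckets together define one structured output shape. Purely as an illustration (the skill itself is instruction-only and ships no code), that shape might be modeled as a small data structure; every class and field name below is hypothetical, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class CaseDecomposition:
    facts: list[str]        # Fact layer: what happened
    motives: list[str]      # Motive layer: why they did it
    effects: list[str]      # Effect layer: what resulted
    decisive_variable: str  # the root variable that drove success or failure

@dataclass
class Competitor:
    name: str
    roles: list[str]        # "core", "benchmark", "potential", "substitute", "avoidance"
    decomposition: CaseDecomposition

@dataclass
class CompetitorAnalysis:
    problem_statement: str
    competitors: list[Competitor] = field(default_factory=list)
    borrow: list[str] = field(default_factory=list)      # What to borrow
    avoid: list[str] = field(default_factory=list)       # What to avoid
    next_steps: list[str] = field(default_factory=list)  # What we should do next

    def is_complete(self) -> bool:
        # Mirrors the workflow's step-5 rule: without actionable
        # conclusions, the analysis is incomplete.
        return bool(
            self.problem_statement
            and self.competitors
            and (self.borrow or self.avoid)
            and self.next_steps
        )
```

A draft that stops at describing competitors would fail the `is_complete` check until the borrow/avoid/next buckets are filled in.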

Scenario-specific frameworks

Read the matching reference file before drafting:

  • Marketing / campaign analysis → references/marketing.md
  • Product positioning / category comparison → references/product-positioning.md
  • Product launch / GTM / 0→1 or 1→10 → references/product-launch.md
  • Brand upgrade / repositioning → references/brand-upgrade.md
  • General method and review checklist → references/methodology.md

Writing standards

  • Lead with the problem, not with the competitor list.
  • Prefer a small number of well-chosen cases over a big shallow inventory.
  • Label evidence and inference separately when useful.
  • Do not confuse “interesting action” with “relevant action”.
  • Do not summarize surface phenomena without extracting implications.
  • Keep conclusions sharp, short, and decision-oriented.

Good conclusion examples

Good:

  • "We should borrow competitor A's timing logic, but not its channel mix; their budget intensity is not replicable for us."
  • "Competitor B succeeded less because of creativity and more because it matched a high-frequency scenario with credible RTB."
  • "Competitor C is useful as an avoidance case: the launch failed because concept, target user, and channel rhythm were misaligned."

Bad:

  • "Competitor A did social media and got good results."
  • "Competitor B's campaign was creative and worth learning from."
  • "There are many things we can reference."

Final self-check

Before delivering, verify:

  • Is the analysis tied to the user's actual business question?
  • Are competitor roles clearly labeled?
  • Did we move from facts to motives to effects?
  • Did we identify a decisive variable?
  • Does the output clearly say what we should borrow / avoid / do next?
