Competitive Positioning Research

v1.0.0

Strategic competitive analysis skill for positioning research. Defines comparison dimensions, selects structural analogues, researches each comp, scores your...

By Nissan Dookeran (@nissan)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for nissan/competitive-positioning-research.

Prompt Preview: Install & Setup
Install the skill "Competitive Positioning Research" (nissan/competitive-positioning-research) from ClawHub.
Skill page: https://clawhub.ai/nissan/competitive-positioning-research
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install competitive-positioning-research

ClawHub CLI


npx clawhub@latest install competitive-positioning-research
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (competitive positioning research) align with the SKILL.md: the skill prescribes how to pick comps, what to search for, how to score, and how to produce recommendations. No unexpected binaries, environment variables, or credentials are requested that would be disproportionate to the stated purpose.
Instruction Scope
Instructions are narrowly scoped to public web research (landing pages, case studies) and building a scored recommendation. The skill explicitly limits web_search to 4 queries per session and does not direct the agent to read local files, system config, or environment variables. The worked example includes a local file path for an output example, but the SKILL.md never instructs the agent to read or transmit private files—so this is an example pattern, not evidence of exfiltration behavior.
Install Mechanism
No install spec and no code files are present (instruction-only). This minimizes disk footprint and means nothing is written or executed on install. The regex scanner found nothing to analyze, consistent with an instruction-only skill.
Credentials
The skill declares no required env vars, no credentials, and no config paths. The only resource it uses is outbound network/web_search, which is reasonable for a public-URL research task. Note: outbound network is necessary for the stated purpose; if you supply private product details in prompts the agent could include them in queries—be mindful of what you paste into queries.
Persistence & Privilege
The "always" flag is false. The skill can be invoked by the model (the default for skills), which is expected. There is no request to modify other skills or agent-wide settings. Autonomous invocation plus network access is normal here and does not by itself indicate a problem.
Scan Findings in Context
[no_findings] expected: The regex scanner reported no findings; that is expected because this is an instruction-only skill with no code files to scan.
Assessment
This skill appears coherent and limited to public web research. Before installing, consider: (1) it performs outbound web searches (up to 4 per session) — if you do not want the agent to query the web, do not enable it; (2) do not paste private/internal URLs, credentials, or proprietary text into prompts that the skill will use for searches, since those could be reflected in queries; (3) review any citations or claims the skill generates — the agent can hallucinate or misinterpret page changes, so validate recommendations before acting on them. If you need stricter controls, restrict network access or run research manually.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔭 Clawdis
Latest: vk977zhy88p8tmz0dqp8mfa0aq983s33p
122 downloads · 0 stars · 1 version
Updated 1 mo ago
v1.0.0
MIT-0

Skill: Competitive Positioning Research

Owner: Archie | Maintained by: Sara


When to Use This Skill

Triggers:

  • "How does our X compare to how [category] leaders do it?"
  • "Research how successful [category] platforms handle [specific problem]"
  • "What can we learn from [Platform A / Platform B] for our [page/feature/approach]?"
  • Pre-ship review Phase 3 (strategic positioning check)
  • Before writing any public-facing page that has direct category comps

Not for:

  • Technical claim accuracy — that's the technical accuracy review pattern (fee amounts, hash functions, protocol specs)
  • Deep product research — that's a full Archie research brief
  • Pricing analysis — that's Becky

This skill is for strategic/UX research — "how did the best examples in this space solve this specific problem, and how do we stack up?" Not "is this claim correct?"


The Research Pattern

Step 1: Define the comparison dimensions

Before searching, lock down:

  • What specific problem are we researching? (e.g. "two-sided marketplace landing page hero CTA — which side to prioritise?")
  • What category are the comps in? (e.g. "developer-facing two-sided marketplace")
  • 3–5 dimensions to score on (e.g. side prioritisation, cold-start handling, social proof, trust signals)
  • Target output: scored table + ranked recommendations

Don't start searching until you've written these down. Undefined scope = research sprawl.
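The Step 1 lock-down can be captured as a small structured record before any searching starts. A minimal sketch — the `ResearchScope` class and its field names are illustrative, not part of the skill itself:

```python
from dataclasses import dataclass, field


@dataclass
class ResearchScope:
    """Hypothetical research-scope record; write this down before searching."""
    problem: str                 # the specific question being researched
    category: str                # category the comps belong to
    dimensions: list[str] = field(default_factory=list)  # 3-5 scoring axes

    def is_ready(self) -> bool:
        # Don't start searching until problem, category, and 3-5 dimensions exist.
        return bool(self.problem and self.category) and 3 <= len(self.dimensions) <= 5


scope = ResearchScope(
    problem="two-sided marketplace landing page hero CTA — which side to prioritise?",
    category="developer-facing two-sided marketplace",
    dimensions=["side prioritisation", "cold-start handling", "social proof", "trust signals"],
)
```

An `is_ready()` check like this makes "undefined scope = research sprawl" mechanically enforceable: an empty problem statement or too few dimensions blocks the search phase.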

Step 2: Select comps

Pick 4–6 platforms. More is noise. Selection criteria:

  • Same audience type (developer, consumer, enterprise)
  • Same structural problem (two-sided, subscription, usage-based)
  • Mix of early-stage (how they launched) and mature (how they evolved)
  • Prioritise structural analogues over direct competitors — defensive bias corrupts the analysis

Step 3: Research each comp

For each platform, find:

  • How they handled the specific problem (not general company history)
  • What they prioritised early vs. mature stage
  • What worked and what they changed
  • One key lesson that applies to your situation

Search patterns that work:

  • "[platform] landing page teardown"
  • "[platform] early growth strategy"
  • "[platform] cold start problem"
  • "two-sided marketplace [specific problem] best practices"
  • "[platform] how they solved [problem]"

Model knowledge vs. web search: For well-known platforms (Airbnb, Stripe, Uber, Replicate), Archie has sufficient model knowledge for structural patterns. Use web search for specifics — a changed CTA, a pivot, a dated case study.

Step 4: Score our approach

Build a scoring table against the dimensions from Step 1. Score each 1–5 with a brief, honest note.

A 2/5 with a real explanation is more useful than a 4/5 that flatters the team. Score what exists, not what was intended.
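The point of the table is to surface gaps, so it helps to pull out every dimension scoring below 3, worst first. A sketch with made-up scores (the dimension names and notes here are illustrative, borrowed loosely from the worked example below):

```python
# Hypothetical scoring table: dimension -> (score 1-5, honest note).
scores = {
    "side prioritisation": (4, "seller-first hero is a deliberate, defensible choice"),
    "social proof": (2, "no logos, testimonials, or usage numbers anywhere"),
    "chicken-and-egg handling": (1, "page never acknowledges it's early-stage"),
}

# Anything below 3 is a gap that needs a recommendation; sort worst first.
gaps = sorted(
    (dim for dim, (score, _) in scores.items() if score < 3),
    key=lambda dim: scores[dim][0],
)
```

If `gaps` comes back empty, treat that as a signal you are flattering the work, not analysing it.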

Step 5: Produce recommendations

Ranked by impact, not effort. For each recommendation:

  • What to change
  • Why (which comp's evidence supports it)
  • Approximate effort: one-line fix / section rewrite / new feature
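"Ranked by impact, not effort" can be made explicit by keeping effort as a reported field that never influences the sort order. A sketch with invented recommendations:

```python
# Hypothetical recommendation records; impact drives ranking, effort is only reported.
recommendations = [
    {"change": "add social proof strip", "impact": 3, "effort": "section rewrite"},
    {"change": "dual-path hero split", "impact": 5, "effort": "section rewrite"},
    {"change": "fix CTA copy", "impact": 2, "effort": "one-line fix"},
]

ranked = sorted(recommendations, key=lambda r: r["impact"], reverse=True)
for i, rec in enumerate(ranked, 1):
    print(f"{i}. {rec['change']} — impact {rec['impact']} — {rec['effort']}")
```

Sorting on effort instead would push the one-line fix to the top regardless of how little it moves the needle, which is exactly the failure mode this step guards against.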

Output Format

# [Topic] — Competitive Positioning Research
_Date: YYYY-MM-DD | Analyst: Archie_

## Executive Summary
[3–4 sentences: headline finding + top recommendation]

## Comparison Dimensions
[The 3–5 dimensions being scored, and why they matter]

## Case Studies

### [Platform]
- **What they did:** ...
- **When (early vs mature):** ...
- **Key lesson:** ...

## Scoring Table

| Dimension | Score (1-5) | Notes |
|---|---|---|

## Recommendations (ranked by impact)
1. **[Change]** — [why, which comp supports it] — [effort]

## What We Got Right
[Strengths to preserve]

Time Budget and Scope

| Type | Comps | Time |
|---|---|---|
| Quick (known category) | 2–4 | 8–10 min |
| Full (novel category) | 5–6 | 15–20 min |

Hard limit: 4 web searches. Synthesise from what you find. If you haven't found enough after 4 searches, scope was too broad — narrow the question, not the search count.
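The hard limit above is the kind of rule worth enforcing in code rather than by memory. A minimal sketch — `SearchBudget` and `SearchBudgetExceeded` are hypothetical names, not part of any real agent runtime:

```python
MAX_SEARCHES = 4  # the skill's hard limit per session


class SearchBudgetExceeded(RuntimeError):
    """Raised when the session's web-search budget is spent."""


class SearchBudget:
    def __init__(self, limit: int = MAX_SEARCHES):
        self.limit = limit
        self.used = 0

    def spend(self, query: str) -> str:
        if self.used >= self.limit:
            # Out of budget: narrow the question, not the search count.
            raise SearchBudgetExceeded(f"budget spent; refusing query: {query!r}")
        self.used += 1
        return query
```

The error message deliberately echoes the skill's rule: when the budget runs out, the fix is a narrower question, never a raised limit.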


Worked Example

Date: 2026-03-24
Product: Reddi Agent Protocol (two-sided agent marketplace)
Problem: Two-sided landing page hero CTA — which side to prioritise?
File: projects/reddi-agent-protocol/reviews/archie-marketplace-research-2026-03-24.md

Comps studied: Stripe, Uber, Airbnb, Hugging Face, Replicate (5 — right call, stopped before noise)

Dimensions scored: Side prioritisation, supply-side hook, demand-side hook, chicken-and-egg acknowledgement, social proof, trust signals

Headline finding: Seller-first hero is defensible at pre-supply stage, but the page is missing three things: cold-start acknowledgement, zero-friction demo, and any social proof. The "Browse Agents" CTA risks leading to a near-empty index — an active anti-signal.

Top recommendation: Add a dual-path hero split so both sides feel directly spoken to without diluting the primary message.

Surprise: Replicate — the closest structural analogue — led with consumers from day one, and made a live runnable demo the primary conversion mechanism on the landing page. Not a "coming soon" but an actual working model you could run from the hero. That's the bar for our live demo CTA.

Score that stung: Chicken-and-egg handling got 1/5. The page doesn't acknowledge it's early-stage, and "why join a marketplace with no one in it yet?" has no answer anywhere on the page. Honest score, actionable gap.


Common Mistakes

Too many comps. Past five or six, additional platforms add noise rather than signal. Pick four or five strong structural analogues, research them properly, and stop.

Comparing to direct competitors. Direct comp analysis introduces defensive bias. Structural analogues (same problem, different space) produce better lessons. Airbnb teaches more about marketplace cold starts than any other agent marketplace would.

Generous scoring. A scoring table where everything is 3–4/5 is useless. The purpose of the table is to surface gaps. If nothing scores below 3, you're flattering the work, not analysing it.

Searching too broadly. "two-sided marketplace" returns 10 years of generic content. "Replicate model provider growth strategy" returns the specific insight you need. Start specific, widen only if necessary.

Grepping the full repo. Archie times out on grep -r across a full project directory. Always read targeted files by path. Never use recursive search on a large workspace.


This skill was written 2026-03-24 by Sara, based on Archie's marketplace research for Reddi Agent Protocol.
