
DevTool Answer Monitor

v0.3.0

Use when the user wants to monitor how ChatGPT, Claude, Gemini, and other LLMs describe a developer tool, API, SDK, or open-source project. DevTool Answer Mo...


Monitor What LLMs Say Before Users Choose Your Dev Tool

Use this skill as the main visibility workflow router for developer tools and open-source products.

Brand: DevTool Answer Monitor

Companion repo: devtool-answer-monitor

Use this when you want an agent to help you monitor how LLMs describe your product, build a reusable query pool, diagnose negative or outdated answers, and plan what to fix next.

Safety First

  • Treat this root skill as a read-only workflow router.
  • Default to quickstart replay or manual paste mode when you only need examples or scoring help.
  • Do not ask users to paste API keys into chat. If API collection mode is needed, tell them to configure local environment variables themselves and then hand off execution to visibility-monitor.
  • Review local scripts such as install.sh, quickstart.sh, and the selected runner before executing shell commands.

Start Here

Copy one of these prompts to begin:

  • Analyze how ChatGPT and Claude describe my API docs
  • Build a developer-tool answer monitoring query pool for my SDK
  • Find negative or outdated LLM claims about my project

30-Second Result

Typical input

  • product truth such as a README, docs, changelog, integrations, or positioning page
  • answer evidence such as copied model answers, screenshots, or cited URLs
  • scope such as target models, languages, regions, or a repeated query set

What this skill returns

  • a reusable query pool
  • raw evidence and a score draft plan
  • a monitoring summary and report outline
  • a repair backlog with T+7 or T+14 validation points

Companion demo and sample outputs

Trigger

Use this skill when the task is any of the following:

  1. generate a visibility query matrix and Query Pool from product truth;
  2. monitor how multiple LLMs mention, recommend, or misunderstand a product;
  3. plan model-specific content placement based on datasource patterns;
  4. check whether a draft page, FAQ, changelog, or case study is ready to influence model answers;
  5. repair wrong, negative, outdated, or competitor-only answers;
  6. verify whether a repair action improved metrics at T+7 or T+14;
  7. help a user choose between quickstart replay, manual paste mode, and API collection mode.

Beginner Routing

When the user is new to the repository, route them in this order.

| Situation | Next step |
| --- | --- |
| Needs environment check first | open docs/getting-started.md and review the environment check section |
| Wants environment-free first run | open docs/index.html or docs/for-beginners.md |
| Wants a short explanation first | open docs/for-beginners.md |
| Wants deeper onboarding | open docs/getting-started.md |
| Wants the English repository overview | open README.md |
| Wants the Chinese repository overview | open README.zh-CN.md |

Visibility Strategy

Always keep the workflow in this order:

| Stage | Goal |
| --- | --- |
| Query design | turn product truth into a scenario matrix, three-layer keywords, and Query Pool seeds |
| Monitoring | score mention, positive mention, capability accuracy, and ecosystem accuracy |
| Placement | map each target model to likely datasource channels and publication surfaces |
| Repair | classify bad answers as information error, negative evaluation, outdated information, or competitor insertion |
| Activation | analyze whether answers help a user install, integrate, or invoke the product |
| Regression | compare follow-up runs and check whether metrics improved after action |
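The four monitoring scores can be sketched as simple rates over the collected answers. This is a hypothetical scoring helper for illustration; the `Answer` field names and the exact scoring rules are assumptions, not the repo's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Answer:
    """One collected model answer about the product (illustrative record shape)."""
    mentions_product: bool
    sentiment_positive: bool
    capabilities_correct: bool
    ecosystem_correct: bool


def score(answers: list[Answer]) -> dict[str, float]:
    """Compute the four monitoring metrics as rates in [0, 1]."""
    n = len(answers) or 1
    mentioned = [a for a in answers if a.mentions_product]
    m = len(mentioned) or 1
    return {
        # share of all answers that mention the product at all
        "mention": len(mentioned) / n,
        # among mentions: share that are positive / factually accurate
        "positive_mention": sum(a.sentiment_positive for a in mentioned) / m,
        "capability_accuracy": sum(a.capabilities_correct for a in mentioned) / m,
        "ecosystem_accuracy": sum(a.ecosystem_correct for a in mentioned) / m,
    }
```

Conditioning the accuracy metrics on mentioned answers keeps them meaningful even when overall mention rate is low.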

Mode Selection

Choose the execution mode before running monitoring.

| Mode | Use when | Typical inputs |
| --- | --- | --- |
| Quickstart replay | user wants the fastest first run without API setup | sample model config + sample manual responses |
| Manual paste mode | user already has copied answers from chat tools | Query Pool + manual response JSON |
| API collection mode | user wants repeatable real monitoring | Query Pool + model config + locally configured provider env vars |
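For manual paste mode, the manual response JSON can be as simple as one record per copied answer. A minimal sketch; the field names here are illustrative assumptions, not the repo's actual schema:

```python
import json


def manual_response(query_id: str, model: str, answer_text: str) -> dict:
    """Wrap one copied chat answer as a manual-response record (hypothetical schema)."""
    return {
        "query_id": query_id,      # which Query Pool entry this answer addresses
        "model": model,            # e.g. "chatgpt", "claude", "gemini"
        "answer": answer_text,     # the pasted answer, verbatim
        "source": "manual_paste",  # distinguishes this from API collection mode
    }


# one record per answer the user pasted from a chat tool
records = [manual_response("qp-001", "chatgpt", "Use the SDK's REST client...")]
payload = json.dumps(records, indent=2)
```

Keeping the pasted answer verbatim preserves the raw evidence that later scoring and repair steps rely on.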

Input Contract

Prepare as many of the following as possible before execution.

| Input | Examples |
| --- | --- |
| Product truth | README, docs, changelog, integrations, positioning |
| Answer evidence | raw answers, screenshots, copied responses, cited links |
| Monitoring scope | models, languages, regions, dates, repeated query set |
| Publishing targets | docs, blog, GitHub, Q&A, partner channels |

Workflow Router

Choose the next sub-skill according to the user's immediate need.

| Situation | Next Skill |
| --- | --- |
| Need query design and scenario clustering | visibility-query-matrix |
| Need weekly monitoring, evidence logging, report output, or shell execution after explicit user approval | visibility-monitor |
| Need pre-publish content QA | visibility-content-check |
| Need to repair bad answers and define regression checks | visibility-repair |

Required Reading Order

For a full program, read these repository documents in sequence:

  1. playbooks/visibility-workflow-architecture.md
  2. playbooks/keyword-strategy.md
  3. playbooks/monitoring-system.md
  4. playbooks/model-datasources.md
  5. playbooks/content-platform-map.md
  6. playbooks/negative-fix-sop.md

Output Contract

Always preserve the following outputs.

| Output | Description |
| --- | --- |
| Query foundation | scenario matrix, keyword layers, Query Pool |
| Monitoring outputs | raw evidence, score draft, summary, report, leaderboard or overview |
| Action plan | content placement priorities and repair backlog |
| Regression record | T+7 and T+14 comparisons after key fixes |
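The T+7 and T+14 regression record amounts to a per-metric delta between a baseline run and a follow-up run. A minimal sketch, assuming both runs are stored as metric-name-to-score dicts (the "validated" criterion below is one reasonable reading, not the repo's definition):

```python
def regression_delta(baseline: dict[str, float],
                     followup: dict[str, float]) -> dict[str, float]:
    """Per-metric change between the baseline run and a T+7 or T+14 follow-up."""
    return {k: round(followup[k] - baseline[k], 4) for k in baseline}


def improved(delta: dict[str, float]) -> bool:
    """Count a fix as validated only if no metric regressed and at least one rose."""
    return all(v >= 0 for v in delta.values()) and any(v > 0 for v in delta.values())
```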

Positioning

DevTool Answer Monitor is the skill layer for the devtool-answer-monitor repo.

  • Use the repo when you want runnable demos, scripts, and report artifacts.
  • Use the skill when you want an agent-guided workflow for monitoring, repair, and regression planning.

Handoff Rules

At the end of each run, preserve:

  1. which product was optimized;
  2. which models and languages were in scope;
  3. which queries are reused in weekly tracking;
  4. what the top three visibility weaknesses are;
  5. what actions are already completed and what still needs validation.

Version tags

latest: vk972z4d3g07sy83znxtrj02s9d85c3sq

Runtime requirements

📈 Clawdis

  • Bins: python3, bash
  • Env: OPENAI_API_KEY, OPENAI_BASE_URL
  • Primary env: OPENAI_API_KEY

Environment variables:

  • OPENAI_API_KEY (optional): provider API key for API collection mode only. Quickstart replay and manual paste mode do not need it.
  • OPENAI_BASE_URL (optional): OpenAI-compatible gateway URL for multi-provider API collection mode.
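Since both variables are optional, a runner can pick the execution mode from what is already configured locally, so the user is never asked to paste a key into chat. A sketch using the variable names from the runtime requirements above; the fallback logic and default gateway URL are assumptions:

```python
import os


def pick_mode() -> dict:
    """Choose an execution mode from locally configured environment variables."""
    api_key = os.environ.get("OPENAI_API_KEY")
    base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    if api_key:
        # key present locally: repeatable API collection against the
        # provider or an OpenAI-compatible gateway
        return {"mode": "api_collection", "base_url": base_url}
    # no key configured: stay in the key-free modes
    return {"mode": "manual_paste_or_quickstart", "base_url": None}
```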