Agora Doubt List

v2.2.0

Generate a Cartesian verification artifact before trusting a plan, claim, implementation, or release. Turn confidence into explicit checks.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for malleus35/agora-doubt-list.

Prompt Preview: Install & Setup
Install the skill "Agora Doubt List" (malleus35/agora-doubt-list) from ClawHub.
Skill page: https://clawhub.ai/malleus35/agora-doubt-list
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install agora-doubt-list

ClawHub CLI

Package manager switcher

npx clawhub@latest install agora-doubt-list
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match SKILL.md: the skill's goal is to produce a structured doubt list, and all inputs, classification rules, and output templates support that. No requested env vars, binaries, or config paths are unrelated to the stated purpose.
Instruction Scope
SKILL.md contains step-by-step guidance for generating doubts and checks and does not instruct the agent to read files, access environment variables, call external endpoints, or perform system actions outside the scope of generating a checklist.
Install Mechanism
No install specification and no code files — nothing will be downloaded or written to disk. This is the lowest-risk install profile.
Credentials
The skill requests no credentials, API keys, or config paths. There is no disproportionate access requested relative to the simple checklist/guidance functionality.
Persistence & Privilege
The skill's always flag is false, and the skill does not request or modify persistent agent/system configuration. It does not ask for elevation or long-lived privileges.
Assessment
This skill appears low-risk and coherent: it contains only instructions for producing verification checklists and asks for no secrets or installs. Before enabling it, consider who will act on its output; do not let checklist output serve as an automatic gate without human oversight. Test the skill on a few representative items to validate its usefulness, and remember that the model can still hallucinate checks: require reviewers to confirm verifications and link to evidence before trusting a 'clear' posture.

Like a lobster shell, security has layers — review code before you run it.

76 downloads
0 stars
3 versions
Updated 1w ago
v2.2.0
MIT-0

Doubt List

Purpose

Convert confidence into verifiable skepticism.

This skill asks: what must be checked before we trust this? It is not for performative negativity. It exists to separate fact, inference, preference, and guess.

Activate when

Use this skill when:

  • a plan sounds persuasive but has not been stress-tested
  • a release is nearing shipment
  • a claim is important, risky, or governance-sensitive
  • the team wants a verification checklist before execution
  • consequences of error are high

Inputs

Expected inputs may include:

  • a feature or release plan
  • a design proposal
  • a decision memo
  • a claim or assertion set
  • implementation notes or test results

Classification rule

Before generating doubts, classify each major statement as one of:

  • Fact — directly established or evidenced
  • Inference — reasoned from available evidence
  • Preference — normative or taste-based judgment
  • Guess — plausible but currently unverified

Unclassified claims are not ready for trust.
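One way to apply this rule mechanically is to model the four labels as a small enum and flag anything left unclassified. This is a minimal sketch; the claim texts and the helper name are made-up examples, not part of the skill:

```python
from enum import Enum

class ClaimClass(Enum):
    FACT = "Fact"            # directly established or evidenced
    INFERENCE = "Inference"  # reasoned from available evidence
    PREFERENCE = "Preference"  # normative or taste-based judgment
    GUESS = "Guess"          # plausible but currently unverified

def unclassified(claims):
    """Return claims with no classification yet; per the rule above,
    these are not ready for trust."""
    return [text for text, cls in claims.items() if not isinstance(cls, ClaimClass)]

claims = {
    "Latency stays under 200 ms at p99": ClaimClass.GUESS,
    "The migration is reversible": None,  # not yet classified
}
```

Note that a Guess is classified and may enter the doubt list; only unlabeled claims are blocked outright.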

Doubt categories

Always produce doubts across five categories.

1. Happy path doubts

  • What must go right for the main story to hold?
  • Which "obvious" success condition has not actually been verified?

2. Edge case doubts

  • What happens under uncommon but realistic conditions?
  • What retries, partial failures, or weird inputs have been ignored?

3. Boundary doubts

  • What breaks at minimum, maximum, empty, overloaded, concurrent, or delayed conditions?

4. Ambiguity doubts

  • Which terms or promises could be interpreted in more than one way?
  • Which claims sound specific but are not operationally defined?

5. Evil demon scenarios

  • What if the most confidence-inducing assumption is false?
  • What if the evidence is incomplete, stale, biased, or misread?
  • What catastrophic but low-frequency scenario would embarrass the team later?

Procedure

Step 1 — State the object of doubt

Name exactly what is being reviewed.

Step 2 — Classify key claims

Mark each important claim as Fact / Inference / Preference / Guess.

Step 3 — Generate doubts across all five categories

Do not stop at happy path concerns.

Step 4 — Convert doubts into checks

Every serious doubt should map to a concrete verification action.

Step 5 — Assign release posture

Conclude whether the work is:

  • Clear enough to proceed
  • Proceed with conditions
  • Do not proceed yet
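Steps 3 through 5 can be sketched as a tiny data model: each doubt carries its category, a mapped check, and a resolution flag, and the posture falls out of those fields. The thresholds below are one illustrative policy under assumed field names, not a rule the skill prescribes:

```python
from dataclasses import dataclass

@dataclass
class Doubt:
    category: str       # happy_path, edge_case, boundary, ambiguity, evil_demon
    statement: str      # the doubt itself (Step 3)
    verification: str   # the concrete check it maps to (Step 4); "" = not yet mapped
    resolved: bool = False  # True once the check has been run and passed

def release_posture(doubts):
    # A doubt with no mapped check blocks release outright (Step 4 is unmet);
    # mapped but unresolved checks become conditions; otherwise proceed.
    if any(not d.verification for d in doubts):
        return "Do not proceed yet"
    if any(not d.resolved for d in doubts):
        return "Proceed with conditions"
    return "Clear enough to proceed"
```

Under this policy a single unmapped doubt is enough to hold the release, which matches the guardrail that abstract doubts without verification actions are not acceptable output.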

Output artifact

## Doubt List

### Object of Review
- ...

### Claim Classification
- Claim: ... -> Fact / Inference / Preference / Guess

### Happy Path Doubts
- Doubt: ...
- Verification: ...

### Edge Case Doubts
- Doubt: ...
- Verification: ...

### Boundary Doubts
- Doubt: ...
- Verification: ...

### Ambiguity Doubts
- Doubt: ...
- Verification: ...

### Evil Demon Scenarios
- Doubt: ...
- Verification: ...

### Clarity Gate
- Clear: ...
- Not clear: ...

### Release Posture
- Proceed / Proceed with conditions / Do not proceed yet
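The artifact above can also be filled programmatically. This sketch renders only the five doubt sections (claim classification, clarity gate, and posture are omitted for brevity), and the dict field names are assumptions:

```python
SECTIONS = [
    ("happy_path", "Happy Path Doubts"),
    ("edge_case", "Edge Case Doubts"),
    ("boundary", "Boundary Doubts"),
    ("ambiguity", "Ambiguity Doubts"),
    ("evil_demon", "Evil Demon Scenarios"),
]

def render_doubt_list(object_of_review, doubts):
    """Emit the markdown artifact; each doubt is a dict with
    'category', 'doubt', and 'check' keys (illustrative names)."""
    lines = ["## Doubt List", "", "### Object of Review", f"- {object_of_review}"]
    for key, title in SECTIONS:
        lines += ["", f"### {title}"]
        for d in doubts:
            if d["category"] == key:
                lines += [f"- Doubt: {d['doubt']}", f"- Verification: {d['check']}"]
    return "\n".join(lines)
```

Keeping the doubt and its verification on adjacent lines preserves the guardrail that no doubt appears without a corresponding check.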

Guardrails

  • Do not confuse disagreement with evidence.
  • Do not mark a guess as fact because the team likes it.
  • Do not stop at implementation QA; plans, memos, and claims also need doubt.
  • Do not generate abstract doubts without corresponding verification actions.

Failure modes

Common failure modes:

  • only checking the happy path
  • writing doubts that are too vague to test
  • skipping claim classification
  • treating rhetorical confidence as evidence
  • using this skill to block progress without naming concrete conditions for trust

Escalation points

Escalate when:

  • the key claim cannot be verified with currently available evidence
  • the team is relying on guesswork for a high-consequence decision
  • release pressure is overriding clarity
  • terms in the proposal are too ambiguous for meaningful review

Completion condition

This skill is complete only when a reviewer could pick up the output and know what to verify next.
