Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Structured Falsification

v1.0.1

Structured falsification framework for complex decision-making, investment analysis, technology selection, and multi-factor judgment. Use when: (1) evaluatin...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for shenjianjun687-ops/structured-falsification.

Prompt Preview: Install & Setup
Install the skill "Structured Falsification" (shenjianjun687-ops/structured-falsification) from ClawHub.
Skill page: https://clawhub.ai/shenjianjun687-ops/structured-falsification
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install structured-falsification

ClawHub CLI


npx clawhub@latest install structured-falsification
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name, description, and included reference files (investment and tech decision templates) match the stated purpose. No unrelated binaries, env vars, or config paths are requested — the skill is instruction-only and uses only its bundled domain docs.
Instruction Scope
The SKILL.md instructs the agent to run an internal five-step falsification process and to load bundled domain configs only — this is in-scope. However, it also (a) says it can be auto-triggered 'no explicit keyword required' for high-uncertainty multi-factor decisions, granting broad discretionary activation, and (b) explicitly directs the agent to suppress step-by-step reasoning and only output final conclusions, which reduces transparency and auditability of decisions. Both are scope/behavioral risks worth flagging.
Install Mechanism
No install spec and no code files beyond static references — lowest-risk installation model (instruction-only).
Credentials
No environment variables, credentials, or external endpoints are requested. The skill only references its own bundled docs.
Persistence & Privilege
always:false and no special config paths — the skill does not request persistent/system-level privileges. Autonomous invocation is allowed by platform default; combined with the skill's broad auto-trigger phrasing, this can increase its activation surface but does not itself change privileges.
What to consider before installing
This skill appears to implement a coherent structured-falsification decision framework and does not ask for credentials or install software. Before installing, consider:

  1. The SKILL.md explicitly permits auto-triggering without strict keywords; if you want control, require explicit invocation or narrower triggers so the skill doesn't run unexpectedly.
  2. The skill instructs the agent to hide chain-of-thought and emit only final conclusions, which makes decisions harder to audit or debug; if you rely on explanations for accountability, you may want to disable or override that behavior.
  3. Test the skill on low-risk decisions first to verify its outputs and triggers.

If the platform lets you review or constrain auto-trigger rules, apply those limits before enabling the skill broadly.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9716h7yrezqkh2ec2brc7na59844jv9
78 downloads
0 stars
2 versions
Updated 3w ago
v1.0.1
MIT-0

Structured Falsification (结构化证伪法)

A five-step reasoning framework that forces rigorous disconfirmation before arriving at conclusions. Designed for AI agents and LLMs to produce concise, high-confidence outputs on complex tasks.

Core principle: Show conclusions, not derivation. The agent runs the full five-step process internally but outputs only the final ranked conclusions with confidence levels and key risks. Verbose reasoning is a sign the framework wasn't applied rigorously enough — tighten the analysis, don't expand the output.

When to Auto-Trigger

  • Multiple competing options with no clear winner
  • User asks "should I…", "which one…", "what about…", "evaluate…", "compare…"
  • Investment target analysis, due diligence, competitive assessment
  • Technology selection, architecture decision, vendor evaluation
  • Any task where the cost of a wrong answer is high
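As an illustration only, the keyword cues above could be approximated with a simple heuristic. The skill itself needs no explicit keyword to trigger; the pattern list below is hypothetical:

```python
import re

# Hypothetical patterns approximating the auto-trigger cues listed above.
TRIGGER_PATTERNS = [
    r"\bshould i\b",
    r"\bwhich one\b",
    r"\bevaluate\b",
    r"\bcompare\b",
]

def looks_like_decision(query: str) -> bool:
    """Rough check: does the query resemble a multi-option decision?"""
    q = query.lower()
    return any(re.search(p, q) for p in TRIGGER_PATTERNS)
```

A real trigger would also weigh uncertainty and the cost of a wrong answer, which plain keyword matching cannot capture.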

The Five Steps (internal process)

Step 1: Decompose Value Nodes

Map the problem space. Identify where real value / risk / leverage sits.

  • What does this problem actually need? (Not surface requirements — underlying drivers)
  • Map key entities and their relationships (supply chain / dependency graph / stakeholder map)
  • Classify each node: critical vs. nice-to-have vs. irrelevant

Step 2: Falsify Each Candidate (core step)

For every option / target / claim, run:

  1. Surface logic: Why does the market / conventional wisdom support this?
  2. Challenge: Where is the logic fragile? Causal chain breaks? Concept substitution? Hidden assumptions?
  3. Verdict: Rate association strength — direct / indirect / tangential

Output: a falsification table (internal) with columns: Candidate | Surface Logic | Challenge | Verdict
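The internal table can be pictured as one record per candidate. A minimal Python sketch, with field names that are illustrative rather than mandated by the skill:

```python
from dataclasses import dataclass

@dataclass
class FalsificationRow:
    candidate: str       # option / target / claim under test
    surface_logic: str   # why conventional wisdom supports it
    challenge: str       # where the logic is fragile
    verdict: str         # "direct", "indirect", or "tangential"

row = FalsificationRow(
    candidate="Vendor A",
    surface_logic="Market leader with a strong ecosystem",
    challenge="Causal chain assumes lock-in persists through the next cycle",
    verdict="indirect",
)
```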

Step 3: Identify True Beneficiaries / Best Options

Apply priority filters:

  1. Infrastructure / tooling (selling shovels during a gold rush) → highest certainty
  2. Core technology owners (commercializable IP) → highest upside
  3. Application layer (using tech to cut costs / add features) → value capture may be limited
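The three filters can be read as a sort key: lower tier number means higher certainty. A hypothetical sketch:

```python
# Hypothetical tier numbers mirroring the priority filters above.
TIER = {"infrastructure": 1, "core_tech": 2, "application": 3}

def by_priority(candidates: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Sort (name, tier) pairs by the Step 3 priority filters."""
    return sorted(candidates, key=lambda c: TIER[c[1]])

ranked = by_priority([("App X", "application"), ("Toolmaker Y", "infrastructure")])
```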

Step 4: Stress Test Survivors

For each surviving candidate:

  • Is the causal chain A→B→C fully intact at every link?
  • Is there actual evidence? (Business data, orders, customers, benchmarks)
  • If this logic fails, how bad is the downside?
  • Any show-stopper that eliminates this candidate entirely?

Step 5: Rank and Conclude

  • Sort by certainty (High / Medium / Low)
  • Attach to each: one-line logic, key assumption, core risk
  • Explicitly flag "not recommended" items with reasons
  • One-sentence bottom line

Self-Correction Checklist

Before producing output, verify:

  • Did I falsify hard enough? If every candidate survived, the filter is too loose.
  • Did I confuse "good company" with "good thesis"? Logic > quality.
  • Am I hedging excessively? Pick a direction. Uncertainty should be flagged, not hidden behind "it depends".
  • Did I anchor on the first plausible answer? Force a search for disconfirming evidence.
  • Is my output concise? If the conclusion section exceeds 50% of the total output, re-tighten.
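The last checklist item is the only mechanically checkable one; a trivial sketch of the 50% rule:

```python
def needs_tightening(conclusion: str, full_output: str) -> bool:
    """True when the conclusion section exceeds 50% of the total output."""
    return len(conclusion) > 0.5 * len(full_output)
```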

Output Format

Only output the following. No step-by-step narration. No "let me think about this".

## Conclusion

| # | Candidate | Verdict | Certainty | Core logic (one line) | Key assumption | Main risk |
|---|-----------|---------|-----------|-----------------------|----------------|-----------|
| 1 | ...  | ✅/⚠️/❌ | High/Medium/Low | ... | ... | ... |
| 2 | ...  | ...  | ...    | ...                 | ...      | ...      |

**One-sentence summary:** ...

Domain Configurations

Load domain-specific checklists when the context matches:

  • Investment analysis → Read references/investment.md
  • Technology selection → Read references/tech-decision.md
  • Other domains → Use the generic framework above, or read a custom config from references/

To create a custom domain config, copy references/domain-template.md and fill in the sections.
