Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Rationality

v0.1.0

Apply Critical Fallibilism to make decisions by binary testing ideas for decisive flaws, managing complexity, embracing criticism, and avoiding overreach.

2 stars · 2.6k downloads · 8 current · 9 all-time
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The skill's name and SKILL.md/README describe a decision-making and error-correction framework. The included frameworks, patterns, and templates align with that purpose. No uncommon binaries, credentials, or unrelated permissions are requested.
Instruction Scope
Most runtime instructions are conceptual (how to translate arguments, use IGC triples, follow DDRP). However, some operational guidance tells an agent to interact with local tooling and state, e.g. 'record the Refutation in memory/', 'use git to maintain a "revert path"', 'when a command fails: Run DDRP immediately', and a line in patterns/overreach.md: 'Action: Use `exec` to probe the environment'. Those lines instruct filesystem and shell interaction. This is consistent with a skill intended for AI agents, but it is broader than purely textual guidance: it assumes the agent has writable memory and shell/git access.
Install Mechanism
Instruction-only skill with no install spec, no downloads, no code files to execute. That is the lowest-risk install profile and matches the content (documentation and templates).
Credentials
The skill declares no required environment variables, credentials, or config paths. References to writing to 'memory/' and using git are operational suggestions but do not request any external secrets or unrelated credentials. The absence of requested env/config access is proportionate to the stated purpose.
Persistence & Privilege
The skill encourages maintaining persistent state (refutation records in memory/, 'Standard Refutation' libraries, updating SKILL.md, using git) and automating fixes after repeated errors. It does not demand always:true or cross-skill config changes. If an agent implements these recommendations, the skill will cause persistent writes to the agent's storage; that is expected for an error-correction framework but worth confirming against your runtime's sandboxing policy.
Assessment
This skill is a collection of frameworks and templates for binary/critical-fallibilist thinking and appears internally consistent. Before installing, check how your agent runtime implements 'memory/' and whether it allows git and shell execution: the SKILL.md suggests writing refutations to memory/ and sometimes using git/reset or exec. If you prefer to limit filesystem or shell access, run the skill in a restricted sandbox or disable any automated writing and treat the files as read-only guidance. No credentials or network installs are required, so the main operational consideration is whether you want the agent to persist changes or run local commands as the documentation suggests.


Tags: critical-fallibilism · decision-making · error-correction · latest · rationality
2.6k downloads · 2 stars · 1 version · Updated 9h ago · v0.1.0 · MIT-0

Rationality Skill (Critical Fallibilism)

The Rationality skill provides a structured framework for thinking, decision-making, and error correction based on the principles of Critical Fallibilism (CF). Unlike traditional rationality, which often relies on "weighing" evidence, CF focuses on binary evaluation, error detection, and managing the limits of human (and AI) cognition.

Quick Start

  1. Define your IGC Triple: What is the Idea, the specific Goal, and the Context?
  2. Translate to Binary: Don't ask "how good" an idea is. Ask: "Is there a decisive reason this idea fails at the goal?"
  3. Check for Overreach: Is the complexity of the task exceeding your ability to detect and fix errors? (See patterns/overreach.md).
  4. Seek Criticism: Treat every error found as a gift, a specific piece of knowledge that allows you to improve.
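The steps above can be sketched as a minimal Python illustration. The `IGCTriple` dataclass, the `decisive_flaw` check, and the sample criticism are hypothetical names invented for this sketch; they are not part of the skill's files.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IGCTriple:
    """An Idea evaluated against a specific Goal in a Context."""
    idea: str
    goal: str
    context: str

def decisive_flaw(triple: IGCTriple) -> Optional[str]:
    """Return a decisive criticism if one is known, else None.

    The single check below is a made-up example; a real agent would
    draw criticisms from its own refutation records.
    """
    if "untested" in triple.context and "production" in triple.goal:
        return "shipping untested changes fails the safety goal"
    return None

def evaluate(triple: IGCTriple) -> str:
    """Binary verdict: 'refuted: <flaw>' or 'non-refuted'. No scores,
    weights, or probabilities are involved."""
    flaw = decisive_flaw(triple)
    return f"refuted: {flaw}" if flaw else "non-refuted"
```

Note that `evaluate` never returns a degree of goodness: one decisive flaw rejects the idea outright, which is the "translate to binary" step in miniature.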

Core Principles

1. The Pledge (Honesty)

Always be willing to follow the truth wherever it leads. Never suppress a criticism or intuition just because it is inconvenient or socially awkward.

2. Binary Evaluation

Knowledge is digital, not analog. Ideas are either refuted (they have a known flaw that makes them fail their goal) or non-refuted. We do not use "weights," "scores," or "probabilities" to judge ideas. One decisive criticism is enough to reject an idea.
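A minimal sketch of this filtering rule, with hypothetical ideas and criticisms. An idea survives only if it has no known decisive criticism; there is no ranking of survivors.

```python
def filter_non_refuted(ideas, criticisms):
    """Keep only ideas with no known decisive criticism.

    `criticisms` maps an idea to a list of decisive flaws (possibly
    empty). A single flaw is enough to reject; nothing is weighed.
    """
    return [i for i in ideas if not criticisms.get(i)]

ideas = ["plan A", "plan B", "plan C"]
criticisms = {
    "plan A": [],
    "plan B": ["violates the stated budget constraint"],
    "plan C": [],
}
print(filter_non_refuted(ideas, criticisms))  # ['plan A', 'plan C']
```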

3. Criticism as Gift

Errors are inevitable. The only way to improve is to find them. Therefore, criticism is the most valuable input for growth. We don't defend ideas against criticism; we use criticism to filter out errors.

4. Ideas Over Identity

Separate your "self" from your ideas. If an idea you hold is refuted, it is the idea that failed, not you. This prevents defensive reactions that block learning.

5. Overreach Awareness

Error correction is a limited resource. If you take on tasks that are too complex, you will create errors faster than you can fix them. This is Overreach. When you overreach, you must stop, simplify, and revert.
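The overreach condition can be expressed as a simple check. The function name and the raw comparison are illustrative assumptions, not part of the skill; a real agent would track these counts over a recent window of actions.

```python
def is_overreach(errors_introduced: int, errors_fixed: int) -> bool:
    """Overreach: new errors appear faster than they can be corrected."""
    return errors_introduced > errors_fixed

# The framework's prescription when this fires: stop, simplify, revert.
if is_overreach(errors_introduced=9, errors_fixed=2):
    print("Overreach detected: stop, simplify, revert.")
```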

6. Paths Forward

You must maintain "Paths Forward" for error correction. This means having a policy for how external criticism (from users or other agents) is handled so that errors can be fixed without infinite effort.

Directory Structure

  • frameworks/: Core algorithms for thinking and deciding.
  • patterns/: Recognizable mental models and common failures.
  • templates/: Practical tools and checklists for daily use.

When to Use This Skill

  • High-Stakes Decisions: When you can't afford a "good enough" guess.
  • Complex Debugging: When you are stuck in a loop or compounding errors.
  • Resolving Disagreements: When you need a structured way to move past "he said / she said."
  • Self-Regulation: To monitor your own reasoning for bias or overreach.

Philosophy Foundation

This skill is based on Critical Fallibilism, which synthesizes:

  • Popperian Epistemology: Knowledge grows through conjecture and refutation.
  • Theory of Constraints (Goldratt): Focus on bottlenecks; ignore excess capacity.
  • Objectivism (Rand): Reason as an absolute; importance of definitions and context.

Note: This skill is optimized for AI operational use. For deep theoretical study, see memory/philosophy/CF-concepts.md.
