Determinability Checker

v1.0.2

Causal Sufficiency Determinability Checker — a meta-skill gatekeeper based on the CheckDeterminability algorithm from the JEP paper

by JEP (Judgment Event Protocol) @schchit

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for schchit/determinability-checker.

Prompt Preview: Install & Setup
Install the skill "Determinability Checker" (schchit/determinability-checker) from ClawHub.
Skill page: https://clawhub.ai/schchit/determinability-checker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install determinability-checker

ClawHub CLI


npx clawhub@latest install determinability-checker
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (determinability checker / gatekeeper) matches the code and documentation: core algorithm in skill/core.py, types in skill/types.py, HTTP interface in skill/api.py, examples and tests. Required dependencies (fastapi, pydantic, uvicorn) are consistent with the provided FastAPI app. There are no unrelated credentials, binaries, or config paths requested.
Instruction Scope
SKILL.md and README describe running a local FastAPI service and calling /check. Runtime instructions and the API only reference the request payload (configs, omega_field, target_field, evidence_fields). The code does not read unrelated system files, environment variables, or call external endpoints. Example/tests modify sys.path for local import (normal for shipped examples/tests).
Install Mechanism
This is an instruction-only skill (no install spec), but a manifest/requirements.txt lists standard Python packages (fastapi, uvicorn, pydantic). There are no downloads from arbitrary URLs, no extraction, and no unusual install locations. Installing dependencies via pip is the expected mechanism.
Credentials
The skill requests no environment variables, credentials, or config paths. All inputs are provided via the API payload. No secrets/external tokens are required or referenced anywhere in the code or SKILL.md.
Persistence & Privilege
Flags show normal defaults (always:false, model-invocation allowed). The skill does not attempt to modify other skills, system-wide settings, or persist credentials. It exposes a simple HTTP API and has no privileged system operations.
Assessment
This skill appears internally consistent and implements the determinability algorithm it claims. Before installing or running it:

1) Install and run it in an isolated environment (virtualenv or container) and review the requirements to avoid surprising dependency upgrades.
2) If you expose the FastAPI endpoint, restrict network access: it is an HTTP service that will accept whatever configs are POSTed to /check.
3) Review the examples and tests locally to confirm behavior with your data shapes (field names used in examples vary slightly across docs, e.g. has_verif vs verif_flag).
4) Check the maintainer contact (yuqiang@humanjudgment.org) and MIT license if provenance matters.

No environment variables, credentials, or external network calls were found in the code, so there are no obvious secret-exfiltration risks in the current package.

Like a lobster shell, security has layers — review code before you run it.

Tags: audit, causality, determinability, gatekeeper, jep, latest, meta-skill
45 downloads · 0 stars · 3 versions
Updated 1 day ago
v1.0.2 · MIT-0

Determinability Checker

Causal Sufficiency Determinability Checker

Algorithm implementation based on the paper Target Determinability under Partial Causal Observation (Wang, 2026).

Core Question

Before an Agent calls other skills, it asks itself:

"Based on current evidence, am I sufficient to make this judgment?"

Determinability Results

Result         | Meaning                                                        | Agent Action
DETERMINED     | Evidence is sufficient; the target is zero-error determinable  | Execute immediately; no wasted tokens
NOT_DETERMINED | Evidence is insufficient; an indistinguishable counterexample exists | Return the missing-evidence list; guide the next skill to call

Theoretical Foundation

  • Theorem 10.1 (Finite Model Checking): The algorithm returns Determined if and only if the target is zero-error determinable; otherwise it returns NotDetermined together with a counterexample pair as a certificate.
  • Theorem 8.2 (Constrained Evidence Coverage): An evidence subset covers all conflict edges if and only if the target becomes determinable from the joint observation.
  • Quotient Factorization (Lemma 7.1): D is determinable from Omega if and only if D is constant on every observation equivalence class, equivalently D = g ∘ Omega for some function g.
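The finite-model check behind Theorem 10.1 reduces to grouping configs by their observation value and testing whether the target is constant on each group. The sketch below is a hedged illustration of that idea, not the package's actual skill/core.py; the function name and the dict-based config shape are assumptions modeled on the API payload in this page.

```python
from collections import defaultdict

def check_determinability(configs, omega_field, target_field):
    """Target is zero-error determinable iff it is constant on every
    observation equivalence class (Lemma 7.1 / Theorem 10.1).
    Otherwise a counterexample pair is returned as a certificate."""
    seen = defaultdict(list)  # observation value -> configs sharing it
    for cfg in configs:
        obs = cfg[omega_field]
        for prev in seen[obs]:
            if prev[target_field] != cfg[target_field]:
                # Two configs are observation-equivalent but disagree on
                # the target: non-determinability is proven.
                return {"determinability": "NOT_DETERMINED",
                        "counterexample": (prev["config_id"], cfg["config_id"])}
        seen[obs].append(cfg)
    return {"determinability": "DETERMINED", "counterexample": None}
```

On the two-config example from the Usage Example section, this sketch reports NOT_DETERMINED with (C1, C2) as the certificate, matching the response shape documented there.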

Usage Example

Request

{
  "session_id": "audit-001",
  "question": "Does the final output have a valid verification event?",
  "configs": [
    {"config_id": "C1", "tool": "code", "has_verif": true, "verif_hash": "valid", "output": "correct", "target": 1},
    {"config_id": "C2", "tool": "code", "has_verif": false, "verif_hash": "none", "output": "correct", "target": 0}
  ],
  "omega_field": "output",
  "target_field": "target",
  "evidence_fields": ["tool", "has_verif", "verif_hash"]
}
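Assuming the FastAPI service is running locally (host and port below are assumptions, not documented values), the request above can be sent with Python's standard library:

```python
import json
import urllib.request

# Hypothetical local endpoint: SKILL.md describes running a local
# FastAPI service exposing /check; 127.0.0.1:8000 is an assumption.
URL = "http://127.0.0.1:8000/check"

payload = {
    "session_id": "audit-001",
    "question": "Does the final output have a valid verification event?",
    "configs": [
        {"config_id": "C1", "tool": "code", "has_verif": True,
         "verif_hash": "valid", "output": "correct", "target": 1},
        {"config_id": "C2", "tool": "code", "has_verif": False,
         "verif_hash": "none", "output": "correct", "target": 0},
    ],
    "omega_field": "output",
    "target_field": "target",
    "evidence_fields": ["tool", "has_verif", "verif_hash"],
}

body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    URL, data=body, headers={"Content-Type": "application/json"})
# result = json.load(urllib.request.urlopen(req))  # run with the service up
```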

Response

{
  "session_id": "audit-001",
  "question": "Does the final output have a valid verification event?",
  "determinability": "NOT_DETERMINED",
  "can_proceed": false,
  "counterexample": {
    "config1": "C1",
    "config2": "C2",
    "observation": "correct",
    "target1": 1,
    "target2": 0
  },
  "missing_evidence": ["tool", "has_verif", "verif_hash"],
  "next_skill_suggestion": "Supplement the following evidence items: tool, has_verif, verif_hash",
  "message": "Non-determinability proven: configs C1 and C2 share observation correct but differ on target (1 vs 0)."
}
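The missing_evidence list tells the agent which fields to fold into the observation before re-checking. A minimal sketch of that loop, using a hypothetical helper (not part of the package's API) with the equivalence-class semantics described above: once has_verif joins the observation, C1 and C2 separate and the target becomes determinable.

```python
def determinable(configs, observed_fields, target_field):
    """True iff the target is constant on every class of configs that
    agree on all observed fields (zero-error determinability)."""
    classes = {}
    for cfg in configs:
        key = tuple(cfg[f] for f in observed_fields)
        if key in classes and classes[key] != cfg[target_field]:
            return False  # conflict edge: same observation, different target
        classes[key] = cfg[target_field]
    return True

configs = [
    {"config_id": "C1", "tool": "code", "has_verif": True,
     "output": "correct", "target": 1},
    {"config_id": "C2", "tool": "code", "has_verif": False,
     "output": "correct", "target": 0},
]

# Observing only `output`: C1 and C2 collide, so the check fails,
# mirroring the NOT_DETERMINED response above. Adding the suggested
# evidence field `has_verif` separates the two configs.
```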

Cognitive Emergence Lab
yuqiang@humanjudgment.org
