Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Causal Inference

v0.2.0

Add causal reasoning to agent actions. Trigger on ANY high-level action with observable outcomes - emails, messages, calendar changes, file operations, API calls, notifications, reminders, purchases, deployments. Use for planning interventions, debugging failures, predicting outcomes, backfilling historical data for analysis, or answering "what happens if I do X?" Also trigger when reviewing past actions to understand what worked/failed and why.

5 stars · 2.8k downloads · 7 current · 7 all-time
Security Scan
VirusTotal — Suspicious (View report →)
OpenClaw — Benign (medium confidence)
Purpose & Capability
The name and description (causal reasoning for actions) match the included scripts: logging actions, backfilling emails/calendar/messages, and estimating treatment effects. However, the registry lists no required binaries while the SKILL.md and scripts assume local CLIs (gog, wacli) are available; that mismatch is something the publisher should document. The code only targets data sources relevant to the stated purpose (email/calendar/messages).
Instruction Scope
Instructions explicitly tell the agent to backfill and log wide-ranging personal data (emails, messages, calendar events) and to trigger on 'ANY high-level action'. This is coherent with a causal layer but broad: the skill will collect and persist personally sensitive data from those sources. It does not instruct sending data to external network endpoints, but it does invoke local CLIs and reads/writes local files (/tmp and memory/causal/action_log.jsonl).
Install Mechanism
This is instruction-only with included scripts (no download/install step). No external archive downloads or obscure install URLs are present. Risk is limited to executing the included Python scripts and local subprocesses (gog, wacli) as described.
Credentials
The skill declares no environment variables or credentials, which is appropriate in principle. In practice the scripts call local CLIs (gog, wacli) that will use whatever credentials those tools are configured with; the skill doesn't request or store additional secrets. Reviewers should be aware the skill relies on existing CLI configs (which may hold sensitive tokens) even though none are declared.
Persistence & Privilege
The skill writes its own action log to memory/causal/action_log.jsonl and creates those directories; it does not request always: true, does not modify other skills' configs, and does not request elevated system privileges. Its persistence is limited to its own files.
Assessment
What to consider before installing:

  • This skill will parse and store sensitive personal data (emails, messages, calendar events). Review the scripts (backfill_* and log_action.py) to ensure you are comfortable with what is written to memory/causal/action_log.jsonl and /tmp files, and where those files will remain on disk.
  • The SKILL.md expects local CLIs (gog, wacli). Confirm you need and want those CLIs to run here; they will use any credentials already configured in your environment even though the skill doesn't ask for credentials explicitly.
  • Test on a small or anonymized dataset first. If you enable it, consider limiting triggers (don't allow "ANY action" globally) and periodically rotate or delete the action log if it contains sensitive history.
  • If you need stricter privacy, run these scripts manually outside the agent, or modify them to sanitize/redact identifiers before writing logs.
  • Because the source is unknown, prefer running with user invocation only (not fully autonomous) until you trust the publisher and have audited the code.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97eh8kr4wqjgn17xjnw9h0qjx7zzzjn
2.8k downloads · 5 stars · 2 versions
Updated 7h ago
v0.2.0 · MIT-0

Causal Inference

A lightweight causal layer for predicting action outcomes, not by pattern-matching correlations, but by modeling interventions and counterfactuals.

Core Invariant

Every action must be representable as an explicit intervention on a causal model, with predicted effects + uncertainty + a falsifiable audit trail.

Plans must be causally valid, not just plausible.

When to Trigger

Trigger this skill on ANY high-level action, including but not limited to:

| Domain | Actions to Log |
| --- | --- |
| Communication | Send email, send message, reply, follow-up, notification, mention |
| Calendar | Create/move/cancel meeting, set reminder, RSVP |
| Tasks | Create/complete/defer task, set priority, assign |
| Files | Create/edit/share document, commit code, deploy |
| Social | Post, react, comment, share, DM |
| Purchases | Order, subscribe, cancel, refund |
| System | Config change, permission grant, integration setup |

Also trigger when:

  • Reviewing outcomes — "Did that email get a reply?" → log outcome, update estimates
  • Debugging failures — "Why didn't this work?" → trace causal graph
  • Backfilling history — "Analyze my past emails/calendar" → parse logs, reconstruct actions
  • Planning — "Should I send now or later?" → query causal model

Backfill: Bootstrap from Historical Data

Don't start from zero. Parse existing logs to reconstruct past actions + outcomes.

Email Backfill

# Extract sent emails with reply status
gog gmail list --sent --after 2024-01-01 --format json > /tmp/sent_emails.json

# For each sent email, check if reply exists
python3 scripts/backfill_email.py /tmp/sent_emails.json

Calendar Backfill

# Extract past events with attendance
gog calendar list --after 2024-01-01 --format json > /tmp/events.json

# Reconstruct: did meeting happen? was it moved? attendee count?
python3 scripts/backfill_calendar.py /tmp/events.json

Message Backfill (WhatsApp/Discord/Slack)

# Parse message history for send/reply patterns
wacli search --after 2024-01-01 --from me --format json > /tmp/wa_sent.json
python3 scripts/backfill_messages.py /tmp/wa_sent.json

Generic Backfill Pattern

# For any historical data source. The infer_*/extract_*/reconstruct_* helpers
# are placeholders: each source needs its own implementations.
for record in historical_data:
    action_event = {
        "action": infer_action_type(record),
        "context": extract_context(record),
        "time": record["timestamp"],
        "pre_state": reconstruct_pre_state(record),
        "post_state": extract_post_state(record),
        "outcome": determine_outcome(record),
        "backfilled": True  # Mark as reconstructed, not observed live
    }
    append_to_log(action_event)
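As a concrete, runnable instance of this pattern for email records — the record shape (`to`, `timestamp`, `replied`) is an assumption for illustration, not the real gog output format:

```python
import json
from pathlib import Path

LOG = Path("memory/causal/action_log.jsonl")

def backfill_emails(records):
    """Turn historical sent-email records into backfilled action events.
    The record fields read here are illustrative assumptions."""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a") as f:
        for r in records:
            event = {
                "action": "send_email",
                "domain": "email",
                "context": {"recipient": r.get("to")},
                "time": r["timestamp"],
                "pre_state": {},
                "post_state": {"reply_received": bool(r.get("replied"))},
                "outcome": "positive_reply" if r.get("replied") else "no_reply",
                "backfilled": True,  # mark as reconstructed
            }
            f.write(json.dumps(event) + "\n")

backfill_emails([{"to": "a@example.com",
                  "timestamp": "2024-03-01T09:00:00Z",
                  "replied": True}])
```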

Architecture

A. Action Log (required)

Every executed action emits a structured event:

{
  "action": "send_followup",
  "domain": "email",
  "context": {"recipient_type": "warm_lead", "prior_touches": 2},
  "time": "2025-01-26T10:00:00Z",
  "pre_state": {"days_since_last_contact": 7},
  "post_state": {"reply_received": true, "reply_delay_hours": 4},
  "outcome": "positive_reply",
  "outcome_observed_at": "2025-01-26T14:00:00Z",
  "backfilled": false
}

Store in memory/causal/action_log.jsonl.
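A minimal sketch of the append-then-update lifecycle for live actions. The `id` field is an addition here for correlating outcomes with events, not part of the schema above:

```python
import json
import uuid
from pathlib import Path

LOG = Path("memory/causal/action_log.jsonl")

def log_action(event):
    """Append one action event; return an id for attaching the outcome later."""
    event = {"id": str(uuid.uuid4()), **event}
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return event["id"]

def record_outcome(action_id, post_state, outcome):
    """Rewrite the log, filling in post_state/outcome for the matching event."""
    events = [json.loads(line) for line in LOG.read_text().splitlines()]
    for e in events:
        if e.get("id") == action_id:
            e["post_state"] = post_state
            e["outcome"] = outcome
    LOG.write_text("".join(json.dumps(e) + "\n" for e in events))
```

Rewriting the whole file on each outcome is fine at this scale; an index or database would be the next step if the log grows large.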

B. Causal Graphs (per domain)

Start with 10-30 observable variables per domain.

Email domain:

send_time → reply_prob
subject_style → open_rate
recipient_type → reply_prob
followup_count → reply_prob (diminishing)
time_since_last → reply_prob

Calendar domain:

meeting_time → attendance_rate
attendee_count → slip_risk
conflict_degree → reschedule_prob
buffer_time → focus_quality

Messaging domain:

response_delay → conversation_continuation
message_length → response_length
time_of_day → response_prob
platform → response_delay

Task domain:

due_date_proximity → completion_prob
priority_level → completion_speed
task_size → deferral_risk
context_switches → error_rate

Store graph definitions in memory/causal/graphs/.
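One possible on-disk layout for a graph definition, using the email-domain links above — the file name and keys are assumptions, not a fixed schema:

```yaml
# memory/causal/graphs/email.yaml (hypothetical layout)
nodes:
  - send_time
  - subject_style
  - recipient_type
  - followup_count
  - time_since_last
  - open_rate
  - reply_prob
edges:  # [cause, effect] pairs
  - [send_time, reply_prob]
  - [subject_style, open_rate]
  - [recipient_type, reply_prob]
  - [followup_count, reply_prob]
  - [time_since_last, reply_prob]
```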

C. Estimation

For each "knob" (intervention variable), estimate treatment effects:

# Pseudo: effect of morning vs evening sends
effect = mean(reply_prob | send_time=morning) - mean(reply_prob | send_time=evening)
uncertainty = std_error(effect)

Use simple regression or propensity matching first. Graduate to do-calculus when graphs are explicit and identification is needed.
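The pseudo-formula above can be computed directly from the action log. This sketch assumes the treatment variable lives in each event's `context` and treats the outcome as binary:

```python
import math

def treatment_effect(events, treatment, outcome, a, b):
    """Difference in outcome rates between treatment=a and treatment=b,
    with a two-proportion normal-approximation standard error."""
    def rate(value):
        group = [e for e in events if e["context"].get(treatment) == value]
        hits = sum(1 for e in group if e.get("outcome") == outcome)
        return hits / len(group), len(group)

    p_a, n_a = rate(a)
    p_b, n_b = rate(b)
    effect = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return effect, se
```

Note this is a naive conditional difference, not a causal effect, unless confounders are balanced; regression adjustment or propensity matching, as suggested above, is the next refinement.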

D. Decision Policy

Before executing actions:

  1. Identify intervention variable(s)
  2. Query causal model for expected outcome distribution
  3. Compute expected utility + uncertainty bounds
  4. If uncertainty > threshold OR expected harm > threshold → refuse or escalate to user
  5. Log prediction for later validation
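Steps 3-4 can be sketched as a single guard function; the default thresholds mirror the config values used elsewhere in this skill but are otherwise arbitrary:

```python
def decide(action, expected_utility, uncertainty, protected,
           max_uncertainty=0.3, min_expected_utility=0.1):
    """Gate execution on protection status, uncertainty, and expected utility."""
    if action in protected:
        return "escalate"   # protected actions always go to the user
    if uncertainty > max_uncertainty:
        return "escalate"   # too unsure to act autonomously
    if expected_utility < min_expected_utility:
        return "refuse"     # not worth doing
    return "execute"
```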

Workflow

On Every Action

BEFORE executing:
1. Log pre_state
2. If enough historical data: query model for expected outcome
3. If high uncertainty or risk: confirm with user

AFTER executing:
1. Log action + context + time
2. Set reminder to check outcome (if not immediate)

WHEN outcome observed:
1. Update action log with post_state + outcome
2. Re-estimate treatment effects if enough new data

Planning an Action

1. User request → identify candidate actions
2. For each action:
   a. Map to intervention(s) on causal graph
   b. Predict P(outcome | do(action))
   c. Estimate uncertainty
   d. Compute expected utility
3. Rank by expected utility, filter by safety
4. Execute best action, log prediction
5. Observe outcome, update model
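Steps 2-3 of planning, as a sketch. `model` is any callable returning `(expected_utility, uncertainty)` for an action — an assumed interface, not one the skill's scripts define:

```python
def plan(candidates, model, max_uncertainty=0.3):
    """Score each candidate via the causal model, drop unsafe ones, rank the rest."""
    scored = [(action, *model(action)) for action in candidates]
    safe = [(a, eu, unc) for a, eu, unc in scored if unc <= max_uncertainty]
    return sorted(safe, key=lambda t: t[1], reverse=True)
```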

Debugging a Failure

1. Identify failed outcome
2. Trace back through causal graph
3. For each upstream node:
   a. Was the value as expected?
   b. Did the causal link hold?
4. Identify broken link(s)
5. Compute minimal intervention set that would have prevented failure
6. Log counterfactual for learning
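Step 2 (tracing back) reduces to collecting a node's ancestors in the graph. This sketch represents the graph as `(cause, effect)` pairs, using the email-domain links listed earlier:

```python
def upstream(edges, node):
    """All ancestors of `node` in a causal graph given as (cause, effect)
    pairs -- the variables to inspect when a downstream outcome fails."""
    parents = {}
    for cause, effect in edges:
        parents.setdefault(effect, set()).add(cause)
    seen, stack = set(), [node]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

EMAIL_EDGES = [
    ("send_time", "reply_prob"),
    ("subject_style", "open_rate"),
    ("recipient_type", "reply_prob"),
    ("followup_count", "reply_prob"),
    ("time_since_last", "reply_prob"),
]
```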

Quick Start: Bootstrap Today

# 1. Create the infrastructure
mkdir -p memory/causal/graphs memory/causal/estimates

# 2. Initialize config
cat > memory/causal/config.yaml << 'EOF'
domains:
  - email
  - calendar
  - messaging
  - tasks

thresholds:
  max_uncertainty: 0.3
  min_expected_utility: 0.1

protected_actions:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction
EOF

# 3. Backfill one domain (start with email)
python3 scripts/backfill_email.py

# 4. Estimate initial effects
python3 scripts/estimate_effect.py --treatment send_time --outcome reply_received --values morning,evening

Safety Constraints

Define "protected variables" that require explicit user approval:

protected:
  - delete_email
  - cancel_meeting
  - send_to_new_contact
  - financial_transaction

thresholds:
  max_uncertainty: 0.3  # don't act if P(outcome) uncertainty > 30%
  min_expected_utility: 0.1  # don't act if expected gain < 10%

Files

  • memory/causal/action_log.jsonl — all logged actions with outcomes
  • memory/causal/graphs/ — domain-specific causal graph definitions
  • memory/causal/estimates/ — learned treatment effects
  • memory/causal/config.yaml — safety thresholds and protected variables

References

  • See references/do-calculus.md for formal intervention semantics
  • See references/estimation.md for treatment effect estimation methods
