MBTI Analyzer

v0.4.0

Analyze a user's MBTI from authorized OpenClaw memory, session history, and workspace notes. Use when the user asks for MBTI analysis, personality inference...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name, description, and runtime requirements align: the skill analyzes historical conversations and workspace notes and therefore legitimately needs Python and local read access to workspace files and OpenClaw session/memory files. No unrelated credentials or unusual binaries are requested.
Instruction Scope
SKILL.md prescribes a clear pipeline (discover → ingest authorized sources → build evidence → infer → render) and requires explicit authorization before reading content. However, the discovery and ingestion targets include OpenClaw session JSONL and memory SQLite files (~/.openclaw/*), which can contain sensitive conversation history or other private data. The skill promises to exclude .env and credentials/* by default, but you should verify that discover_sources.py and ingest_all_content.py actually implement those exclusions and that quoting options are respected.
Install Mechanism
No install spec is provided (instruction-only), and provided code is local Python scripts. There are no remote downloads or archive extraction steps in the manifest. This is lower risk than an installer that fetches arbitrary code at runtime.
Credentials
The skill requests no environment variables or external credentials, which is proportionate. It does require read access to local OpenClaw state and workspace files (including main.sqlite and sessions), which is appropriate for the stated purpose but warrants privacy consideration because those files may carry sensitive content.
Persistence & Privilege
The always flag is false and the skill is user-invocable. Model invocation is not disabled (normal). When run, the skill writes output artifacts to a local reports directory; it does not declare or request permanent elevated privileges or the ability to modify other skills' configurations.
Assessment
This skill appears coherent for generating MBTI reports from your local OpenClaw history and workspace notes, but it reads potentially sensitive local data. Before installing or running it:

  1. Confirm and explicitly authorize only the source categories you want analyzed; do not allow broad access by default.
  2. Review discover_sources.py and ingest_all_content.py to verify they actually honor the declared exclusions (.env, credentials/*, identity/*) and do not read paths you consider sensitive.
  3. Search the included Python files for network calls (requests, urllib, http, socket), subprocess usage, or hard-coded external endpoints; if present, inspect what data they send and to whom.
  4. Prefer running the skill in an isolated environment or on a copy of your workspace if you need to be cautious.
  5. Disable quoting in the pipeline if you do not want any text excerpts included in the report.

If you want higher assurance, request a full code review of the omitted files (build_evidence_pool.py, infer_mbti.py, render_report.py, mbti_common.py, and the discover/ingest scripts) to confirm there is no unexpected data exfiltration or filesystem access beyond the stated sources.
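
The third review step (scanning the bundled Python files for network or subprocess usage) can be automated with a short sketch like this. The scripts/ directory name and the pattern list are assumptions for illustration, not an exhaustive audit:

```python
import re
from pathlib import Path

# Module names that suggest network or subprocess activity worth a closer look.
# Illustrative only; a real audit should also check for dynamic imports.
SUSPECT = re.compile(r"\b(requests|urllib|http\.client|socket|subprocess)\b")

def scan_skill(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for every suspect line under root."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

An empty result does not prove the skill is safe, but any hit pinpoints exactly which line to inspect by hand.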


Runtime requirements

🧠 Clawdis
Bins: python3
latest: vk9728s7pzg6cmq2d0eqw4gvnc58480m0
98 downloads · 1 star · 4 versions
Updated 1w ago
v0.4.0 · MIT-0

MBTI

Generate an evidence-backed MBTI report from authorized OpenClaw history and workspace notes.

Quick Start

This package is a skill. The public handoff line for other agents lives in README.md.

Primary entry points:

  • trigger phrases: MBTI, personality analysis, type me
  • skill command: mbti-report

Minimal runtime requirement:

  • python3

Local install for development or manual setup:

ln -s /absolute/path/to/mbti "$CODEX_HOME/skills/mbti"

Start an analysis by invoking the skill in chat:

Analyze my MBTI using only my authorized memory and session history

For agents and maintainers:

  • read this page top to bottom before running any script
  • use the existing pipeline scripts below as implementation steps
  • do not skip the authorization step
  • do not infer MBTI directly from raw history

At A Glance

What this skill produces:

  • report.html: primary deliverable
  • report.md: compact summary
  • analysis_result.json: type hypothesis, confidence, follow-up questions
  • evidence_pool.json: scored and traceable evidence

What the first interaction should do:

  1. Discover candidate source categories.
  2. Show the user what is available.
  3. Ask which categories are authorized.
  4. Run the extraction → evidence → inference → report pipeline.

Core Rule

Always separate the workflow into two layers:

  1. Full extraction from authorized sources into structured records and an evidence pool.
  2. MBTI inference only from the evidence pool and source summary.

Do not infer MBTI directly from the full raw history.

When To Use

Use this skill when the user wants:

  • MBTI analysis from existing conversations or memory
  • personality inference without filling out a questionnaire
  • a professional-looking personality report with evidence
  • a structured summary of likely type, adjacent alternatives, and uncertainties

Do not use this skill for clinical diagnosis or mental-health assessment.

Authorization First

Before reading any source content:

  1. Run source discovery.
  2. Show the user which source categories are available.
  3. Explain that the report may quote short excerpts unless quoting is disabled.
  4. Ask the user to confirm which source categories are allowed.

Default candidate categories:

  • workspace long-term memory: MEMORY.md
  • workspace daily memory: memory/*.md
  • OpenClaw sessions: ~/.openclaw/agents/*/sessions/*.jsonl
  • OpenClaw memory index: ~/.openclaw/memory/main.sqlite
  • OpenClaw task metadata: ~/.openclaw/tasks/runs.sqlite
  • OpenClaw cron metadata: ~/.openclaw/cron/runs/*.jsonl

Default exclusions:

  • .env
  • credentials/*
  • identity/*
  • device files
  • approval files
  • generic config files
  • gateway and runtime logs
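
A path filter honoring these exclusions might look like the following sketch. The function name and glob handling are illustrative, not the actual logic in discover_sources.py; verify the real implementation before trusting it:

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

# Illustrative subset of the default exclusions listed above.
EXCLUDED_PATTERNS = [".env", "credentials/*", "identity/*"]

def is_excluded(relative_path: str) -> bool:
    """True if a workspace-relative path matches any default exclusion."""
    path = PurePosixPath(relative_path)
    for pattern in EXCLUDED_PATTERNS:
        # Match against the full relative path and the bare filename.
        if fnmatch(str(path), pattern) or fnmatch(path.name, pattern):
            return True
    return False
```

Note that a filter like this only matches top-level paths; a robust version would also exclude nested occurrences (e.g. a credentials/ directory deeper in the tree).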

Execution Flow

If the user does not provide an output directory, write results to:

./.mbti-reports/<timestamp>/
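
A minimal way to create that default directory; the exact timestamp format is an assumption (any collision-free value works):

```python
from datetime import datetime
from pathlib import Path

def make_report_dir(base: str = "./.mbti-reports") -> Path:
    """Create and return a fresh timestamped report directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")  # format is a guess
    out = Path(base) / stamp
    out.mkdir(parents=True, exist_ok=True)
    return out
```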

Recommended order:

1. Discover Candidate Sources

python3 {baseDir}/scripts/discover_sources.py \
  --workspace-root . \
  --openclaw-home ~/.openclaw \
  --output /tmp/mbti-source-manifest.json

Use the manifest to explain what can be analyzed. Do not read content yet.

2. Ingest Authorized Sources

python3 {baseDir}/scripts/ingest_all_content.py \
  --manifest /tmp/mbti-source-manifest.json \
  --approved-source-types workspace-long-memory,workspace-daily-memory,openclaw-sessions \
  --output-dir ./.mbti-reports/<timestamp>

This creates:

  • raw_records.jsonl
  • source_summary.json
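
To sanity-check the ingest output without assuming a record schema, you can count records and tally which top-level keys appear:

```python
import json
from collections import Counter
from pathlib import Path

def summarize_jsonl(path: str) -> tuple[int, Counter]:
    """Count JSONL records and tally top-level keys across them."""
    count, keys = 0, Counter()
    with Path(path).open() as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines
            record = json.loads(line)
            count += 1
            keys.update(record.keys())
    return count, keys
```

This is schema-agnostic on purpose: it reveals what fields raw_records.jsonl actually carries rather than assuming any particular layout.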

3. Build Evidence Pool

python3 {baseDir}/scripts/build_evidence_pool.py \
  --raw-records ./.mbti-reports/<timestamp>/raw_records.jsonl \
  --source-summary ./.mbti-reports/<timestamp>/source_summary.json \
  --output ./.mbti-reports/<timestamp>/evidence_pool.json

This stage should:

  • keep recall high
  • remove obvious tool noise
  • flag pseudo-signals
  • merge repeated facts
  • retain traceable evidence references

4. Infer MBTI From Evidence Pool

python3 {baseDir}/scripts/infer_mbti.py \
  --evidence-pool ./.mbti-reports/<timestamp>/evidence_pool.json \
  --source-summary ./.mbti-reports/<timestamp>/source_summary.json \
  --output ./.mbti-reports/<timestamp>/analysis_result.json

Inference rules:

  • use four preferences as the primary decision layer
  • use type dynamics and cognitive functions only as a consistency check
  • weigh independent strong evidence above repeated weak signals
  • keep counterevidence visible
  • generate follow-up questions when margins are weak

If analysis_result.json contains needs_followup: true and the user is available to answer, ask the follow-up questions before finalizing the report.
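
A small helper for that check. The needs_followup flag comes from the text above, but the key holding the questions ("followup_questions") is a guess and should be verified against the actual analysis_result.json:

```python
import json
from pathlib import Path

def pending_followups(analysis_path: str) -> list[str]:
    """Return follow-up questions if the analysis flags low confidence."""
    analysis = json.loads(Path(analysis_path).read_text())
    if not analysis.get("needs_followup"):
        return []
    # "followup_questions" is an assumed key name; check the real output.
    return analysis.get("followup_questions", [])
```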

5. Apply Follow-Up Answers And Rerun

After the user answers the low-confidence questions, rerun the pipeline with the answers incorporated as additional user evidence:

python3 {baseDir}/scripts/apply_followup_answers.py \
  --raw-records ./.mbti-reports/<timestamp>/raw_records.jsonl \
  --source-summary ./.mbti-reports/<timestamp>/source_summary.json \
  --analysis ./.mbti-reports/<timestamp>/analysis_result.json \
  --output-dir ./.mbti-reports/<timestamp> \
  --answer "S/N=<user answer>" \
  --answer "J/P=<user answer>"
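
The --answer values appear to be axis=answer pairs; a parsing sketch (the real logic in apply_followup_answers.py may differ):

```python
def parse_answer(arg: str) -> tuple[str, str]:
    """Split an --answer argument like 'S/N=I lean toward concrete details'."""
    axis, _, answer = arg.partition("=")
    return axis.strip(), answer.strip()
```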

This updates:

  • raw_records.jsonl
  • source_summary.json
  • followup_answers.json
  • evidence_pool.json
  • analysis_result.json
  • report.md
  • report.html

If the user declines to answer, keep the current report and surface the uncertainty explicitly.

6. Render Final Reports

python3 {baseDir}/scripts/render_report.py \
  --analysis ./.mbti-reports/<timestamp>/analysis_result.json \
  --evidence-pool ./.mbti-reports/<timestamp>/evidence_pool.json \
  --output-dir ./.mbti-reports/<timestamp> \
  --quote-mode summary

Add --open to automatically open the HTML report in the default browser after rendering.

This creates:

  • report.md
  • report.html

7. Render A Standalone HTML Preview

When you only need to tune layout, CSS, spacing, or badge/theme behavior, use the built-in preview mode instead of rerunning discovery, ingestion, evidence construction, and inference:

python3 {baseDir}/scripts/render_report.py \
  --debug-preview \
  --debug-type INTP \
  --output-dir /tmp/mbti-preview

This creates a fully populated report.html and report.md from a bundled fixture so report debugging does not depend on prior pipeline artifacts.

Stage Testing

When you want to test one stage in isolation, prepare a synthetic fixture for that stage and then run the real stage script against those files.

Prepare fixture inputs:

python3 {baseDir}/scripts/prepare_stage_fixture.py \
  --stage infer \
  --output-dir /tmp/mbti-stage-infer

Then run the stage you actually want to inspect:

python3 {baseDir}/scripts/infer_mbti.py \
  --evidence-pool /tmp/mbti-stage-infer/evidence_pool.json \
  --source-summary /tmp/mbti-stage-infer/source_summary.json \
  --output /tmp/mbti-stage-infer/analysis_result.json

Supported fixture stages:

  • discover: generates synthetic workspace and OpenClaw source files
  • ingest: adds source_manifest.json
  • evidence: adds raw_records.jsonl and source_summary.json
  • infer: adds evidence_pool.json
  • render: adds analysis_result.json
  • followup: adds answers_input.json for apply_followup_answers.py

Smoke-test all stage entrypoints with:

python3 -m unittest tests.test_stage_smoke

Report Rules

The HTML report is the primary artifact. The chat reply should only provide:

  • the most likely type
  • confidence level
  • 2-4 key observations
  • the output file paths

Do not freestyle the full report in chat if report.html already exists.

Evidence Rules

Treat the following as high-risk pseudo-signals:

  • requests about how the assistant should behave
  • formatting preferences
  • tool and workflow instructions without self-descriptive context
  • command output, logs, stack traces, or copied machine text

Treat the following as stronger evidence:

  • repeated self-descriptions
  • stable decision-making patterns
  • recurring work and reflection habits
  • conflict between desired structure and actual behavior
  • cross-source consistency
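
One cheap heuristic for the last pseudo-signal category (copied machine text such as logs and stack traces); the patterns are illustrative and would need tuning against real records:

```python
import re

# Illustrative markers of machine text: Python tracebacks, log levels,
# shell prompts. Real records will need a broader, tuned pattern set.
MACHINE_TEXT = re.compile(
    r"Traceback \(most recent call last\)"
    r"|^\s*File \".*\", line \d+"
    r"|^\$ "
    r"|\b(?:ERROR|WARN|INFO|DEBUG)\b\s*[:\]]",
    re.MULTILINE,
)

def looks_like_machine_text(text: str) -> bool:
    """Heuristic: True if the excerpt resembles logs or stack traces."""
    return bool(MACHINE_TEXT.search(text))
```

Excerpts flagged this way should be down-weighted or dropped, since they describe a machine's behavior rather than the user's.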


Output Discipline

  • Keep tone rigorous and non-clinical.
  • Do not use emoji in the final report.
  • Present the result as a best-fit hypothesis, not a fixed truth.
  • Always include at least one "why not the adjacent type" section.
