FocalPoint Memory

v0.3.4

FocalPoint — AI cognitive operating system. Memory + attention management + workflow orchestration. Workbench prepares context before tasks. Three-Province review system for quality decisions.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw to install jeff0052/focalpoint-memory.

Prompt preview: Install & Setup
Install the skill "Focalpoint Memory" (jeff0052/focalpoint-memory) from ClawHub.
Skill page: https://clawhub.ai/jeff0052/focalpoint-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install focalpoint-memory

ClawHub CLI


npx clawhub@latest install focalpoint-memory
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name and description describe a local cognitive OS that manages memory, tasks, and review workflows. The skill only requires python3 and exposes a focalpoint binary; that aligns with a self-hosted Python tool that uses SQLite and local files. Requested capabilities (workbench, roles, SQLite storage, file paths such as narratives/, knowledge/) are coherent with the stated purpose.
Instruction Scope
SKILL.md is instruction-only and tells the agent to install the focalpoint Python package and register an MCP server command. The instructions reference only internal storage paths (SQLite file, events.jsonl, narratives/, knowledge/) and tool calls (activate_workbench, set_knowledge) that are consistent with the described functionality. However, the skill provides no runtime code in the bundle to review — the actual behavior depends entirely on the external Python package the instructions install, so you cannot audit runtime actions from this skill alone.
Install Mechanism
Registry metadata lists an install kind 'uv' creating a focalpoint binary, while the README-style setup shows 'pip install focalpoint'. Both are plausible for a Python CLI, but the mismatch is a small inconsistency to confirm. There is no download-from-unknown-URL evidence in the skill bundle itself, but installing a third-party pip package (or other package manager package) will execute code from the package author — review the PyPI package and its source before installing. Because this is instruction-only, nothing will be written to disk until you run install.
Credentials
The skill declares no required environment variables, no credentials, and no config paths outside its own storage. That is proportionate for a self-hosted local memory tool that uses a local SQLite file and local directories.
Persistence & Privilege
The 'always' flag is false, so the skill follows normal model-invocation behavior. It intends to persist data locally (SQLite, logs, narratives), which is consistent with its purpose, and it does not request elevated or cross-skill privileges. There is no evidence it attempts to modify other skills or system-wide agent settings.
Assessment
This skill is internally consistent: it describes a local Python-based memory/workflow tool that stores data in SQLite and local files. However, the skill bundle contains only instructions (no code), so its actual behavior depends on the external 'focalpoint' package you would install. Before installing or enabling it:

  1. Verify the package source (PyPI and the linked GitHub repo) and check the package owner and recent activity.
  2. Inspect the package code (especially setup/install hooks and any network calls) or run it in an isolated VM/container.
  3. Back up any important data and note where the SQLite file and logs will be stored.
  4. Confirm whether the installer uses pip or the 'uv' package manager, and prefer installing from a trusted registry.
  5. Monitor network activity on first runs to ensure it doesn't exfiltrate data.

If you cannot review the package source yourself, run it only in a sandboxed environment.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3

Install

uv (installs the focalpoint binary):

uv tool install focalpoint

Latest: vk9753ajrxjdnm6mhawwmd8ct1x83cvce
148 downloads · 0 stars · 5 versions · Updated 1 mo ago
v0.3.4 · MIT-0

FocalPoint — AI Cognitive Operating System

Your AI forgets everything between conversations. FocalPoint fixes that.

Not just memory — cognitive infrastructure. FocalPoint tracks your projects, prepares context before tasks, uses a Three-Province review system for quality decisions, and proactively alerts you about stuck work.

The Problem

AI agents are stateless. Every conversation starts from zero — no memory of your projects, decisions, or progress. You waste time re-explaining context and manually tracking what's stuck.

Existing solutions only go halfway:

| Tool | What it does | What it doesn't do |
|---|---|---|
| Mem0 / Zep | Remembers conversations | Doesn't track tasks or alert you |
| LangGraph / CrewAI | Orchestrates agents | No persistent cognitive layer |
| Claude / OpenAI memory | Remembers preferences | Doesn't manage work or deadlines |

They remember what was said. FocalPoint manages what needs to be done.

What You Get

  • Structured memory — Goal > Project > Milestone > Task hierarchy with status lifecycle
  • Workbench — One call prepares goal, knowledge, context, subtasks, and role prompt
  • Proactive alerts — Heartbeat detects blocked, stale, and at-risk tasks automatically
  • Knowledge documents — Attach design docs to nodes; child tasks inherit parent knowledge
  • Role-based thinking — Strategy, Review, and Execution roles see filtered perspectives
  • Three-Province review — Parallel review by two reviewers before execution; max 3 rejections then escalate
  • Full-text search — FTS5 search across titles, narratives, and knowledge documents
  • GitHub + Notion sync — Issues and pages auto-sync as FocalPoint nodes
  • Zero dependencies — Runs 100% locally on SQLite. No vector DB, no Redis, no cloud.
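The Goal > Project > Milestone > Task hierarchy and its status lifecycle can be sketched in a few lines of Python. The node kinds and status names come from this README; the class shape and method are illustrative, not FocalPoint's actual API:

```python
from dataclasses import dataclass, field

# Status lifecycle named in the tool list: inbox / active / waiting / done / dropped
STATUSES = {"inbox", "active", "waiting", "done", "dropped"}

@dataclass
class Node:
    """One work item in the Goal > Project > Milestone > Task hierarchy."""
    node_id: str
    kind: str                      # "goal" | "project" | "milestone" | "task"
    title: str
    status: str = "inbox"
    children: list["Node"] = field(default_factory=list)

    def update_status(self, status: str) -> None:
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status

goal = Node("g1", "goal", "Launch the product")
task = Node("t1", "task", "Write announcement post")
goal.children.append(task)
task.update_status("active")
```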

Competitive Comparison

| Capability | Mem0 | Zep | Letta | CrewAI | Claude | FocalPoint |
|---|---|---|---|---|---|---|
| Persistent memory | Yes | Yes | Yes | Yes | Yes | Yes |
| Task lifecycle management | - | - | - | Partial | - | Yes |
| Dependency graph (DAG) | - | - | - | - | - | Yes |
| Proactive alerts | - | - | - | - | - | Yes |
| Knowledge docs + inheritance | - | - | - | - | Partial | Yes |
| Role-based context | - | - | - | Partial | - | Yes |
| Decision review workflow | - | - | - | - | - | Yes |
| Full-text search | Vector | Vector | Vector | - | - | FTS5 |
| MCP native | - | - | - | - | Proprietary | Yes |
| Self-hosted, zero deps | Partial | Partial | Yes | Yes | - | Yes |

No competitor combines structured task management + proactive alerts + knowledge inheritance + role-based context + review workflow in a single MCP-native package.

Setup

1. Install

pip install focalpoint

2. Add MCP server

# openclaw.yaml
mcp_servers:
  fpms:
    command: focalpoint

3. Restart OpenClaw

That's it. 22 tools are now available.
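The listing also names Claude Desktop as a supported platform. There the equivalent server entry typically lives in claude_desktop_config.json; this is a sketch mirroring the fpms name from the YAML above, so adjust it to your client's config format:

```json
{
  "mcpServers": {
    "fpms": {
      "command": "focalpoint"
    }
  }
}
```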

How It Works

Architecture: Brain-Spine Model

Brain (LLM)              Spine (FocalPoint engine)
  |                         |
  | -- Tool Call -->        | Validate -> Write SQLite -> Narrative -> Audit
  |                         |
  | <-- Context ---         | Assemble L0/L1/L2 -> Trim -> Inject prompt
  • Brain = LLM. Only reads context and issues Tool Calls.
  • Spine = Deterministic engine. All logic here. LLM never touches storage directly.

Storage: Pure SQLite

SQLite           <- Single source of truth
events.jsonl     <- Audit trail
narratives/*.md  <- Append-only logs
knowledge/{id}/  <- Design documents

No vector database. No Redis. No PostgreSQL. One SQLite file is everything.
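The events.jsonl audit trail is a plain append-only file. A sketch of what writing one audit record might look like; the file name comes from the listing, while the record fields and helper name are assumptions:

```python
import json
import time
from pathlib import Path

def append_event(log_path: Path, tool: str, payload: dict) -> None:
    """Append one audit record as a single JSON line (append-only, crash-friendly)."""
    record = {"ts": time.time(), "tool": tool, "payload": payload}
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log = Path("events.jsonl")
append_event(log, "update_status", {"node_id": "t1", "status": "active"})
```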

Work Mode

Workbench — prepare before you work

You: "Work on the payment system task"
AI calls activate_workbench(node_id, role="execution")
-> Returns: goal, knowledge docs, context bundle, sorted subtasks,
   suggested next step, and execution role prompt
-> AI enters role, reads background, starts working

Three Roles

| Role | Focus | Sees |
|---|---|---|
| Strategy (Maker) | Should we do this? Priority? | Decisions + feedback |
| Review (Reviewer) | Any risks? Historical lessons? | Risk notes + progress |
| Execution (Engineer) | How to build it? Acceptance criteria? | Technical details + progress |

Same data, different thinking modes. The role prompt guides the AI's perspective.

Three-Province Review — quality decisions

For major decisions (new features, architecture changes, tech choices):

Strategy produces requirements
    |
Review + Engineer review in parallel
|-- Review: checks risks, historical lessons -> approve/reject
|-- Engineer: evaluates feasibility -> approve/reject
    |
Both approve -> proceed to execution
Either rejects -> revise and resubmit
    |
> 3 rejections -> escalate to human
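The gate above can be sketched as a plain loop. The function name, reviewer signatures, and toy reviewers here are illustrative; this is not the actual sansei_review implementation:

```python
from typing import Callable

MAX_REJECTIONS = 3  # per the listing: more than 3 rejections escalates to a human

def three_province_review(
    proposal: str,
    review_check: Callable[[str], bool],    # risk / historical-lessons reviewer
    engineer_check: Callable[[str], bool],  # feasibility reviewer
    revise: Callable[[str], str],
) -> str:
    rejections = 0
    while True:
        # Both reviewers evaluate the proposal (sequential here for clarity).
        if review_check(proposal) and engineer_check(proposal):
            return "proceed"
        rejections += 1
        if rejections > MAX_REJECTIONS:
            return "escalate_to_human"
        proposal = revise(proposal)

# Toy run: the engineer rejects until the proposal mentions a rollback plan.
result = three_province_review(
    "v1",
    review_check=lambda p: True,
    engineer_check=lambda p: "rollback" in p,
    revise=lambda p: p + " + rollback",
)
```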

Knowledge Documents — persistent design context

You: "Save this architecture doc to the project"
AI calls set_knowledge(project_id, "architecture", content)
-> Child tasks inherit parent knowledge automatically
-> AI reads project overview without you re-explaining

Types: overview | requirements | architecture | custom names

Log Categories

append_log(node_id, content, category="decision")   # Decision records
append_log(node_id, content, category="feedback")    # User/market feedback
append_log(node_id, content, category="risk")        # Risks and lessons
append_log(node_id, content, category="technical")   # Technical details
append_log(node_id, content, category="progress")    # Progress updates
append_log(node_id, content, category="general")     # Default

Different roles see different categories. Strategy sees decisions + feedback. Execution sees technical + progress.
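The role-to-category filtering described above amounts to a simple mapping. The role and category names come from this README; the exact mapping is an assumption based on the examples given:

```python
# Which log categories each role sees, per the examples in the text.
ROLE_CATEGORIES = {
    "strategy": {"decision", "feedback"},
    "review": {"risk", "progress"},
    "execution": {"technical", "progress"},
}

def visible_logs(role: str, logs: list[dict]) -> list[dict]:
    """Return only the log entries whose category the given role may see."""
    allowed = ROLE_CATEGORIES[role]
    return [entry for entry in logs if entry["category"] in allowed]

logs = [
    {"category": "decision", "content": "Chose Stripe"},
    {"category": "technical", "content": "Webhook retries use backoff"},
    {"category": "progress", "content": "Integration 80% done"},
]
```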

Use Cases

Project tracking

You: "Create a project for the product launch with 3 tasks"
(Next day)
You: "What's the launch status?"
AI:  "3 tasks: 1 done, 1 active, 1 blocked. The blocked task
      is waiting on design review — it's been 2 days."

Decision memory

You: "We're going with Stripe for payments"
AI:  append_log(node_id, "Chose Stripe — better API, lower fees", category="decision")
(Two weeks later)
You: "Why did we pick Stripe?"
AI:  Searches decisions -> "You decided on March 15 — better API and lower fees."

Risk detection

AI automatically runs heartbeat and finds:
  - Deploy task BLOCKED for 4 days
  - Docs update STALE — no activity for a week
  - Bug fix AT RISK — deadline is tomorrow
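A heartbeat like the one above boils down to a few time-based checks. The thresholds and field names below are illustrative, not FocalPoint's actual defaults:

```python
from datetime import datetime, timedelta

NOW = datetime(2025, 3, 20)
BLOCKED_AFTER = timedelta(days=3)   # assumed thresholds for illustration
STALE_AFTER = timedelta(days=7)
AT_RISK_WINDOW = timedelta(days=1)

def heartbeat(tasks: list[dict]) -> list[str]:
    """Scan tasks and report blocked, stale, and at-risk items."""
    alerts = []
    for t in tasks:
        if t["status"] == "waiting" and NOW - t["status_since"] >= BLOCKED_AFTER:
            alerts.append(f"{t['title']} BLOCKED for {(NOW - t['status_since']).days} days")
        elif NOW - t["last_activity"] >= STALE_AFTER:
            alerts.append(f"{t['title']} STALE")
        elif t.get("deadline") and t["deadline"] - NOW <= AT_RISK_WINDOW:
            alerts.append(f"{t['title']} AT RISK")
    return alerts

tasks = [
    {"title": "Deploy", "status": "waiting",
     "status_since": NOW - timedelta(days=4), "last_activity": NOW},
    {"title": "Docs update", "status": "active",
     "status_since": NOW, "last_activity": NOW - timedelta(days=8)},
    {"title": "Bug fix", "status": "active", "status_since": NOW,
     "last_activity": NOW, "deadline": NOW + timedelta(hours=20)},
]
```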

Full-text search

You: "Find everything related to caching decisions"
AI:  search_nodes(query="caching decisions")
-> Finds nodes by title, narrative content, and knowledge docs
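Under the hood, SQLite's FTS5 extension handles queries like this. A self-contained sketch with made-up rows, assuming your Python's bundled SQLite was compiled with FTS5 (typical for modern builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes every declared column for full-text search.
conn.execute("CREATE VIRTUAL TABLE nodes USING fts5(title, narrative)")
conn.executemany(
    "INSERT INTO nodes VALUES (?, ?)",
    [
        ("Caching layer", "Decision: cache hot queries with an LRU policy"),
        ("Payment task", "Chose Stripe for better API and lower fees"),
    ],
)
# MATCH runs a full-text query across all indexed columns.
rows = conn.execute(
    "SELECT title FROM nodes WHERE nodes MATCH ?", ("caching",)
).fetchall()
```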

Available Tools (22)

Write (11)

| Tool | What it does |
|---|---|
| create_node | Create a project/task/goal/milestone |
| update_status | Change status (inbox/active/waiting/done/dropped) |
| update_field | Update title, summary, deadline, etc. |
| attach_node / detach_node | Move tasks in the hierarchy |
| add_dependency / remove_dependency | Manage task dependencies |
| append_log | Record decisions, progress, risks (with category) |
| unarchive | Restore completed/dropped tasks |
| set_persistent | Protect tasks from auto-archive |
| set_knowledge | Attach knowledge documents to nodes |

Read (5)

| Tool | What it does |
|---|---|
| get_node | Get full details of a work item |
| search_nodes | Find tasks by filters or full-text search |
| get_knowledge | Read knowledge with parent inheritance |
| delete_knowledge | Delete a knowledge document |
| get_assembly_trace | Debug context assembly |

Cognitive (4)

| Tool | What it does |
|---|---|
| bootstrap | Load memory context (call at conversation start) |
| heartbeat | Scan for risks: blocked, stale, at-risk tasks |
| activate_workbench | Prepare working context with role + knowledge |
| get_context_bundle | Get role-filtered, token-budgeted context |

Review (1)

| Tool | What it does |
|---|---|
| sansei_review | Three-Province parallel review |

Runtime (1)

| Tool | What it does |
|---|---|
| shift_focus | Switch AI attention to a specific task |

Automatic Memory Rules

Follow these rules in EVERY conversation:

  1. Conversation start -> Call bootstrap to load memory
  2. Before starting a task -> activate_workbench to prepare context
  3. User makes a decision -> append_log with category="decision"
  4. Risk identified -> append_log with category="risk"
  5. Task progresses -> update_status
  6. Design conclusions -> set_knowledge to persist for future sessions
  7. Before conversation ends -> append_log key takeaways
  8. Every ~10 min -> heartbeat to check for risks

Stats

| Metric | Value |
|---|---|
| Tests | 665 |
| MCP Tools | 22 |
| External dependencies | 0 (pure SQLite) |
| Cold start | < 100 ms |
| Supported LLMs | Any (via MCP protocol) |
| Platforms | Claude Desktop, OpenClaw, any MCP client |

Requirements

  • Python 3.10+
  • No external services — runs 100% locally on SQLite
