Install
openclaw skills install recall-from-notion

Recall user memories from the Notion Memory Store: read the user's memories from the Memory Store Notion database and use them as context for the current conversation. Trigger PROACTIVELY at the beginning of conversations where knowing the user's background and preferences would help.
This skill uses a zero-config convention: the database is always named "Memory Store".
Step 1: Locate the database
POST /v1/search
{
"query": "Memory Store"
}
From the results, find the item with object: "data_source" whose title is "Memory Store".
Extract both:
data_source_id -- for querying (POST /v1/data_sources/{id}/query)
database_id -- for reference
If found -> use data_source_id for all subsequent queries.
If not found -> silently skip recall. Do NOT prompt the user to create anything.
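The discovery step above can be sketched in Python. The response shape here (a "results" list whose items carry object, title, id, and database_id) is an illustrative assumption based on the convention described above, not a verbatim Notion API payload:

```python
def find_memory_store(search_response: dict):
    """Pick the Memory Store data source out of a /v1/search response.

    Returns both ids if found, or None so the caller can silently
    skip recall (never prompting the user to create anything).
    """
    for item in search_response.get("results", []):
        if item.get("object") == "data_source" and item.get("title") == "Memory Store":
            return {
                "data_source_id": item["id"],
                "database_id": item.get("database_id"),
            }
    return None


# Hypothetical search response, for illustration only:
resp = {
    "results": [
        {"object": "page", "title": "Memory Store notes", "id": "p-1"},
        {"object": "data_source", "title": "Memory Store", "id": "ds-1", "database_id": "db-1"},
    ]
}
ids = find_memory_store(resp)
```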
This skill describes operations using generic Notion REST API format. Each platform's AI should translate to its available tools using the fixed mappings below. Do NOT guess -- follow these mappings exactly.
| Operation | SKILL.md Describes | Use MCP Tool | Key Parameters |
|---|---|---|---|
| Discover database | POST /v1/search | notion-search | query: "Memory Store", content_search_mode: "workspace_search" |
| Get data_source_id | -- | notion-fetch | Fetch the database, extract from <data-source url="collection://..."> tag |
| Structured query | POST /v1/data_sources/{id}/query | Not available | Skip Path 1, use Path 2 only |
| Semantic search in DB | POST /v1/search + data_source_url | notion-search | data_source_url: "collection://<data_source_id>" |
| Fetch page details | GET /v1/pages/{id} | notion-fetch | id: "<page_id>" |
Critical notes:
- Use content_search_mode: "workspace_search" (the default ai_search mode may not return databases).
- Do not run multiple notion-search calls against the same data_source_url in parallel -- MCP will error. Run searches sequentially, or combine them into one query.
- Semantic search returns only id/title/highlight, so run notion-fetch per result to get Category, Status, Scope, etc.
- Multiple notion-fetch calls can run in parallel within one response to minimize latency.
- OpenClaw accesses Notion through a separately installed "notion" skill (clawhub.ai/steipete/notion). That skill must be installed before using recall-from-notion.
When executing, first read the notion skill's SKILL.md to learn the Notion API access patterns (API key setup, curl commands, endpoints). Then follow this workflow using those patterns.
Important: This skill (recall-from-notion) is a workflow skill that depends on Notion connectivity. It does NOT provide Notion access itself -- it relies on the platform's Notion integration (MCP tools on Claude Code/Claude.ai, notion skill on OpenClaw).
Always trigger when:
Consider triggering when:
Skip when:
See Database Discovery above. If not found, silently skip all remaining steps.
From the user's message or conversation context, extract:
Search strategy guidance:
- Searches against the same data_source_url MUST be sequential (see Critical notes).
- Use two parallel paths to maximize recall coverage, then merge results.
Path 1 -- Structured query (precision, returns full properties):
Query the data source with keyword filters on Title, Content, and Project.
POST /v1/data_sources/{data_source_id}/query
{
"filter": {
"or": [
{ "property": "Title", "title": { "contains": "<keyword>" } },
{ "property": "Content", "rich_text": { "contains": "<keyword>" } },
{ "property": "Project", "rich_text": { "contains": "<keyword>" } }
]
},
"page_size": 50
}
For multiple keywords (e.g., "Notion" and "MCP"):
{
"filter": {
"or": [
{ "property": "Title", "title": { "contains": "Notion" } },
{ "property": "Content", "rich_text": { "contains": "Notion" } },
{ "property": "Title", "title": { "contains": "MCP" } },
{ "property": "Content", "rich_text": { "contains": "MCP" } }
]
}
}
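The multi-keyword OR filter above can be built programmatically. A minimal sketch, assuming the Memory Store property names used in this skill (Title as a title property, Content as rich_text):

```python
def build_keyword_filter(keywords, page_size=50):
    """Expand keywords into the OR filter shown above: each keyword
    is matched against both the Title (title) and Content (rich_text)
    properties of the Memory Store.
    """
    clauses = []
    for kw in keywords:
        clauses.append({"property": "Title", "title": {"contains": kw}})
        clauses.append({"property": "Content", "rich_text": {"contains": kw}})
    return {"filter": {"or": clauses}, "page_size": page_size}


# POST this body to /v1/data_sources/{data_source_id}/query:
body = build_keyword_filter(["Notion", "MCP"])
```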
MCP platforms (Claude Code / Claude.ai): Path 1 is not available (no structured query tool). Skip directly to Path 2. The recall becomes single-path semantic search.
Path 2 -- Semantic search (coverage, catches what keywords miss):
Search within the Memory Store using the semantic query from Step 2.
POST /v1/search
{
"query": "<semantic query from Step 2>",
"data_source_url": "collection://<data_source_id>"
}
This catches memories that are semantically related but don't contain the exact keywords. For example, searching "CI configuration" might find "GitHub Actions workflow preferences" even though it doesn't contain the word "CI".
Why dual-path? Structured query is precise but only matches exact keywords -- it misses semantically related memories. Semantic search understands intent but returns incomplete properties (only id/title/highlight). Combining both gives precision + coverage.
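The merge step can be sketched as a dedupe by page id, with structured-query hits taking precedence since they already carry full properties. The result dicts below are an illustrative assumption about the two paths' shapes:

```python
def merge_paths(structured, semantic):
    """Merge Path 1 and Path 2 results, deduplicating by page id.

    Structured hits overwrite semantic ones so full properties win;
    anything that survives only from semantic search is the "delta"
    that still needs a follow-up page fetch.
    """
    merged = {page["id"]: page for page in semantic}
    merged.update((page["id"], page) for page in structured)
    return list(merged.values())


structured = [{"id": "p1", "Category": "Facts"}]  # full properties from the query
semantic = [{"id": "p1"}, {"id": "p2"}]           # id/title/highlight only
merged = merge_paths(structured, semantic)
```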
GET /v1/pages/{page_id}
Only fetch the delta -- skip pages already present in the structured query results.

MCP platforms: since only Path 2 is available, ALL results need enrichment via notion-fetch. If multiple searches were performed, deduplicate by page id first, then fetch only the unique results. Multiple notion-fetch calls can run in parallel to minimize latency.
Apply these filters on the merged results:
Scope filter (most important for Claude Code):
Status filter:
Expiry filter:
Priority scoring:
Injection limit: 10-15 memories maximum.
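The rank-and-cap step can be sketched as follows, assuming the scoring step above has already attached a numeric score to each memory (the "priority" field name is hypothetical):

```python
def select_for_injection(memories, limit=15):
    """Sort by descending priority and cap at the injection limit.

    Memories without a score sink to the bottom of the ranking.
    """
    ranked = sorted(memories, key=lambda m: m.get("priority", 0), reverse=True)
    return ranked[:limit]


# 20 candidates, but only the top 15 are injected:
top = select_for_injection([{"id": i, "priority": i} for i in range(20)])
```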
Format recalled memories as a compact context block grouped by Category:
Recalled context from Memory Store:
[Preferences]
- User prefers Ruff for code formatting and linting
- ...
[Facts]
- User is a programmer, primarily uses Python
- Notion workspace connected via MCP
- ...
[Decisions]
- Memory Store uses Notion Database as storage backend
- ...
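Rendering the block above can be sketched as a single group-by-Category pass; the "Category" and "Summary" field names are assumptions about the enriched result shape:

```python
def format_context(memories):
    """Render recalled memories as the compact context block shown
    above, grouped by Category (dicts preserve insertion order).
    """
    groups = {}
    for m in memories:
        groups.setdefault(m.get("Category", "Other"), []).append(m["Summary"])
    lines = ["Recalled context from Memory Store:"]
    for category, summaries in groups.items():
        lines.append(f"[{category}]")
        lines.extend(f"- {s}" for s in summaries)
    return "\n".join(lines)


block = format_context([
    {"Category": "Preferences", "Summary": "User prefers Ruff for code formatting and linting"},
    {"Category": "Facts", "Summary": "User is a programmer, primarily uses Python"},
])
```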
Rules:
No results: Proceed without memories. Don't announce unless user explicitly asked.
Too many (>15): Rank strictly, inject top 10-15. Note more are available.
Stale/wrong memories: Flag contradictions and offer to update.
"How do you know that?": Explain it came from Memory Store, offer to show/edit.