Install
openclaw skills install memory-to-notion

Summarize and archive conversation memories to Notion. Trigger when the user says "summarize memory", "archive conversation", "save memories", "sync memories...

This skill retrieves the user's past conversation history, analyzes it for valuable and meaningful content, decomposes conversations into atomic memory entries, and writes them as rows into the Memory Store Notion Database.
This skill uses a zero-config convention: the database is always named "Memory Store".
Locate the database:
POST /v1/search
{
  "query": "Memory Store"
}
From the results, find the item with object: "data_source" whose title is "Memory Store".
Extract both:
- data_source_id -- for querying (POST /v1/data_sources/{id}/query)
- database_id -- for creating pages (POST /v1/pages with parent: {"database_id": "..."})
If found -> use data_source_id for queries, database_id for page creation.
If not found -> ask the user: "No 'Memory Store' database found in your Notion workspace. Which page should I create it under? Please provide a Notion page URL or page ID." Then create the database (see Database Creation below).
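For orientation, a matching search hit might look roughly like this (an illustrative shape, not verbatim API output; the parent field is where the database_id typically appears):

{
  "results": [
    {
      "object": "data_source",
      "id": "<data_source_id>",
      "title": [{ "plain_text": "Memory Store" }],
      "parent": { "type": "database_id", "database_id": "<database_id>" }
    }
  ]
}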
Database Schema

| Property | Type | Description |
|---|---|---|
| Title | Title | One-line memory summary (searchable) |
| Category | Select | Fact / Decision / Preference / Context / Pattern / Skill |
| Content | Rich Text | Detailed memory content |
| Source | Select | Claude.ai / ClaudeCode / Manual / OpenClaw / Other |
| Status | Select | Active / Archived / Contradicted |
| Scope | Select | Global / Project |
| Project | Rich Text | Project name (set when Scope=Project, leave empty for Global) |
| Expiry | Select | Never / 30d / 90d / 1y |
| Source Date | Date | When the original conversation happened |
Database Creation

When the database does not exist, create it under the user-specified parent page. Use the Notion create-database API with the schema above.
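A minimal sketch of the creation request, assuming the classic JSON body shape (newer Notion API versions nest the property definitions under an initial data source instead; <parent_page_id> is the page the user provided):

POST /v1/databases
{
  "parent": { "type": "page_id", "page_id": "<parent_page_id>" },
  "title": [{ "type": "text", "text": { "content": "Memory Store" } }],
  "properties": {
    "Title": { "title": {} },
    "Category": { "select": { "options": [{ "name": "Fact" }, { "name": "Decision" }, { "name": "Preference" }, { "name": "Context" }, { "name": "Pattern" }, { "name": "Skill" }] } },
    "Content": { "rich_text": {} },
    "Source": { "select": { "options": [{ "name": "Claude.ai" }, { "name": "ClaudeCode" }, { "name": "Manual" }, { "name": "OpenClaw" }, { "name": "Other" }] } },
    "Status": { "select": { "options": [{ "name": "Active" }, { "name": "Archived" }, { "name": "Contradicted" }] } },
    "Scope": { "select": { "options": [{ "name": "Global" }, { "name": "Project" }] } },
    "Project": { "rich_text": {} },
    "Expiry": { "select": { "options": [{ "name": "Never" }, { "name": "30d" }, { "name": "90d" }, { "name": "1y" }] } },
    "Source Date": { "date": {} }
  }
}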
This skill describes operations using generic Notion REST API format. Each platform's AI should translate to its available tools using the fixed mappings below. Do NOT guess -- follow these mappings exactly.
| Operation | SKILL.md Describes | Use MCP Tool | Key Parameters |
|---|---|---|---|
| Discover database | POST /v1/search | notion-search | query: "Memory Store", content_search_mode: "workspace_search" |
| Get IDs | -- | notion-fetch | Fetch the database, extract data_source_id from <data-source url="collection://..."> tag |
| Dedup query | POST /v1/data_sources/{id}/query | Not available | Fall back to notion-search with data_source_url (see Step 3 note) |
| Create page | POST /v1/pages | notion-create-pages | parent: { "data_source_id": "..." } |
| Update page status | PATCH /v1/pages/{id} | notion-update-page | command: "update_properties" |
| Create database | POST /v1/databases | notion-create-database | Uses SQL DDL syntax (see Database Creation) |
| Fetch page | GET /v1/pages/{id} | notion-fetch | id: "<page_id>" |
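Putting the create-page mapping together, a notion-create-pages call might look like the following sketch (assuming the tool accepts a pages array alongside parent; the flattened property keys follow the format shown in Step 5):

notion-create-pages
{
  "parent": { "data_source_id": "<data_source_id>" },
  "pages": [
    {
      "properties": {
        "Title": "User prefers uv for Python dependency management",
        "Category": "Preference",
        "Status": "Active"
      }
    }
  ]
}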
Critical notes:
- When discovering the database, pass content_search_mode: "workspace_search" (the default ai_search mode may not return databases).
- For dedup, use notion-search with data_source_url: "collection://<data_source_id>" and keywords from the candidate memory, then notion-fetch each result to compare full properties.
- Do not run parallel notion-search calls against the same data_source_url -- MCP will error. When deduping multiple candidate memories, run searches sequentially, and deduplicate results by page id before fetching.
- notion-create-database uses SQL DDL syntax, not JSON. See the Database Creation section for the DDL.
- OpenClaw accesses Notion through a separately installed "notion" skill (clawhub.ai/steipete/notion), which must be installed before using memory-to-notion.
When executing, first read the notion skill's SKILL.md to learn the Notion API access patterns (API key setup, curl commands, endpoints). Then follow this workflow using those patterns.
Important: This skill (memory-to-notion) is a workflow skill that depends on Notion connectivity. It does NOT provide Notion access itself -- it relies on the platform's Notion integration (MCP tools on Claude Code/Claude.ai, notion skill on OpenClaw).
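As a concrete example of the raw REST pattern on OpenClaw, the discovery call might look like this sketch (the NOTION_API_KEY variable name and version header are assumptions; follow the notion skill's SKILL.md for the actual setup):

curl -s -X POST "https://api.notion.com/v1/search" \
  -H "Authorization: Bearer $NOTION_API_KEY" \
  -H "Notion-Version: 2022-06-28" \
  -H "Content-Type: application/json" \
  -d '{"query": "Memory Store"}'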
Workflow

Step 1: Locate the "Memory Store" database. If not found, create it (see above).
Step 2: Retrieve conversations. Choose a strategy based on the current platform:

Claude.ai (has conversation history API):
- recent_chats(n=20) to fetch recent conversations
- after/before parameters to filter by time range
- conversation_search for keyword-based retrieval

Claude Code (current session only):
- analyze the current session's conversation directly
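For instance, the Claude.ai retrieval calls might look like the following sketch (the exact parameter names for time filtering and search are assumptions based on the list above):

recent_chats(n=20)
recent_chats(n=20, after="2025-01-01T00:00:00Z")
conversation_search(query="Python project setup")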
Step 3: Dedup and conflict check. Before writing, query the database to check for duplicates and conflicts. For each candidate memory, search Title and Content:

POST /v1/data_sources/{data_source_id}/query
{
  "filter": {
    "or": [
      { "property": "Title", "title": { "contains": "<keyword from new memory>" } },
      { "property": "Content", "rich_text": { "contains": "<keyword from new memory>" } }
    ]
  },
  "page_size": 10
}
MCP platforms (Claude Code / Claude.ai): Structured query is not available. Use notion-search with data_source_url: "collection://<data_source_id>" and keywords from the candidate memory as the query. Run dedup searches sequentially (not in parallel). Deduplicate results by page id across searches, then notion-fetch only the unique results to compare properties.
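A sketch of one such fallback search, using the parameter names from the mapping table (the keyword string is illustrative):

notion-search
{
  "query": "uv Python dependency",
  "data_source_url": "collection://<data_source_id>"
}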
The query returns full page properties. Check for:
- Duplicates: the same fact is already recorded -> skip the candidate
- Conflicts: an existing entry contradicts the new memory -> see conflict handling below
Step 4: Decompose into atomic memories. Each conversation may yield 0-N memory entries. The key principle is one fact per row.

Decomposition rules:
- Capture exactly one fact, decision, or preference per entry
- Split compound statements into separate entries
- Write Content so each entry stands alone, understandable without the original conversation
Examples of good decomposition:
A conversation about "setting up a new Python project" might yield:
"User prefers uv over pip for Python dependency management" -> Category: Preference
"Project OpenClaw uses FastAPI + PostgreSQL architecture" -> Category: Decision
"User prefers Ruff for code formatting and linting" -> Category: Preference
"User is a programmer" -> Category: Fact
What NOT to store:
- Low-value conversational chatter (greetings, small talk)
- Transient one-off details with no future use
- Facts already recorded in the database (duplicates)
Step 5: Write to Notion. Create pages in the database. For each memory entry, set properties:

{
  "Title": "One-line summary",
  "Category": "Fact|Decision|Preference|Context|Pattern|Skill",
  "Content": "Detailed memory content, sufficient for any AI platform to understand and use",
  "Source": "Claude.ai|ClaudeCode|OpenClaw|Manual|Other",
  "Status": "Active",
  "Scope": "Global|Project",
  "Project": "Project name (set when Scope=Project)",
  "Expiry": "Never|30d|90d|1y",
  "date:Source Date:start": "YYYY-MM-DD",
  "date:Source Date:is_datetime": 0
}
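On platforms using the raw REST API, the same entry expands into standard Notion property objects. A minimal sketch (the example values are illustrative):

POST /v1/pages
{
  "parent": { "database_id": "<database_id>" },
  "properties": {
    "Title": { "title": [{ "text": { "content": "User prefers uv for Python dependency management" } }] },
    "Category": { "select": { "name": "Preference" } },
    "Content": { "rich_text": [{ "text": { "content": "User prefers uv over pip for Python dependency management." } }] },
    "Source": { "select": { "name": "ClaudeCode" } },
    "Status": { "select": { "name": "Active" } },
    "Scope": { "select": { "name": "Global" } },
    "Expiry": { "select": { "name": "Never" } },
    "Source Date": { "date": { "start": "2025-01-15" } }
  }
}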
Scope guidelines:
- Global: facts and preferences about the user that apply everywhere (leave Project empty)
- Project: decisions and context tied to a specific project (set the Project property to the project name)

Expiry guidelines:
- Never: stable facts and long-term preferences
- 30d / 90d / 1y: time-sensitive context that should age out
If Step 3 found conflicting memories, mark the old entry as Contradicted, then create the new entry as usual:

PATCH /v1/pages/{old_page_id}
{
  "properties": {
    "Status": { "select": { "name": "Contradicted" } }
  }
}
Step 6: Report. After writing, provide the user with a summary:
Example:
Memory archival complete
Processed 8 conversations, generated 11 memories:
- New: 10
- Updated: 1 (user location updated from Beijing to Shenzhen)
- Skipped: 3 low-value conversations
New memories:
| Title | Category |
|-------|----------|
| User prefers uv for Python dependency management | Preference |
| Project OpenClaw uses FastAPI architecture | Decision |
Example interaction:

User: summarize memory
Claude: locates the "Memory Store" database and extracts data_source_id and database_id, retrieves recent conversations, dedups candidate memories against existing entries, writes new memory rows, and reports a summary.