# Gemini Deep Research → Notion
Trigger Gemini Deep Research via browser and save results to Notion. Use when the user asks to "deep research" a topic, says "gemini deep research", or wants...
Like a lobster shell, security has layers — review code before you run it.
## Execution Mode

Run ALL steps in the MAIN SESSION. Do NOT spawn a subagent. The browser tool (OpenClaw managed profile) is only available in the main session; subagents cannot access the browser, so all browser automation must happen here.

Reply first: "🔬 Deep Research starting for: [topic]. This takes ~25 min. I'll update you when done." Then execute all phases below sequentially.
## Instructions

Complete ALL steps below in the main session.
### Phase 1: Trigger Deep Research

- Open the browser: `browser action=open profile=openclaw targetUrl="https://gemini.google.com/app"`
- Snapshot, find the text input, and type the research query. Always prepend "请用中文回答。" ("Please answer in Chinese.") to the query so the research output is in Chinese.
- Click the "工具" (Tools) button (it has a `page_info` icon) → click "Deep Research" in the menu
- Click Send to submit the query
- Wait for the research plan to appear (~10 s), then click the "Start research" / "开始研究" button
- If snapshot-click doesn't work, use JS:

  ```js
  (() => {
    const btn = Array.from(document.querySelectorAll('button'))
      .find(b => /Start research|开始研究/.test(b.textContent.trim()));
    if (btn) { btn.click(); return 'clicked'; }
    return 'not found';
  })()
  ```

- Verify research started: the button should be disabled and the status should show "Researching X websites..." or "正在研究..."
- Save the conversation URL from the browser
### Phase 2: Wait for Completion

- Run `exec("sleep 1200")` (20 minutes) + `process(poll, timeout=1200000)`
- After waking, check status via JS:

  ```js
  (() => {
    const el = document.querySelectorAll('message-content')[1];
    return el ? el.innerText.substring(0, 200) : 'NOT_FOUND';
  })()
  ```

- Look for completion signals: "I've completed your research" or "已完成" ("completed")
- If still running, sleep another 600 s and check again (max 2 retries)
- If it has failed or is stuck after the retries, announce the failure and exit
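The polling decision above can be sketched as a small classifier over the status text. This is a sketch, not part of the skill itself: the `researchStatus` helper name is hypothetical, and the phrase patterns are assumptions drawn from the completion signals listed above.

```js
// Hypothetical helper: classify Deep Research progress from the first ~200
// chars of message text returned by the in-page status snippet above.
function researchStatus(text) {
  if (text === 'NOT_FOUND') return 'missing';                      // element not rendered yet
  if (/I've completed your research|已完成/.test(text)) return 'done';
  if (/Researching \d+ websites|正在研究/.test(text)) return 'running';
  return 'unknown';
}
```

The agent sleeps and re-evaluates the page until this would return `'done'`, or announces failure once the retry budget is spent.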
### Phase 3: Extract Report

- Count the `message-content` elements: `document.querySelectorAll('message-content').length`
- The research report is in the LAST `message-content` element (usually index 2)
- Get its total length: `document.querySelectorAll('message-content')[2]?.innerText?.length`
- Extract in 8000-char chunks using substring: `document.querySelectorAll('message-content')[N]?.innerText?.substring(START, END)`
- Concatenate all chunks into the full report text
- Save to a temp file: write the full report to `/tmp/deep_research_<timestamp>.md`
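The chunked extraction boils down to computing substring windows from the reported total length. A minimal sketch of that arithmetic (the `chunkRanges` helper is hypothetical, not something the skill defines):

```js
// Compute (start, end) windows for pulling the report out of the page in
// 8000-char pieces; each window maps to one in-page eval of
// document.querySelectorAll('message-content')[N]?.innerText?.substring(start, end)
const CHUNK_SIZE = 8000;

function chunkRanges(totalLength) {
  const ranges = [];
  for (let start = 0; start < totalLength; start += CHUNK_SIZE) {
    ranges.push([start, Math.min(start + CHUNK_SIZE, totalLength)]);
  }
  return ranges;
}
```

Concatenating the returned chunks in order reproduces the full report; `substring` already clamps an end index past the string length, so the `Math.min` is only for clarity.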
### Phase 4: Export to Notion

Parent page ID: `31a4cfb5-c92b-809f-9d8a-dd451718a017` (Deep Research Database)

- Read the Notion API key: `cat ~/.config/notion/api_key`
- Parse the report into Notion blocks:
  - Lines starting with `#` → heading_2/heading_3 blocks
  - Bullet points → bulleted_list_item blocks
  - Regular text → paragraph blocks
- Add a callout at the top: "🔬 Generated by Gemini Deep Research on YYYY-MM-DD"
- Split rich_text content at 2000 chars (Notion's per-object limit)
- Create the page via the Notion API:

  ```bash
  curl -s -X POST "https://api.notion.com/v1/pages" \
    -H "Authorization: Bearer $NOTION_KEY" \
    -H "Notion-Version: 2025-09-03" \
    -H "Content-Type: application/json" \
    -d '{"parent":{"page_id":"31a4cfb5-c92b-809f-9d8a-dd451718a017"},"icon":{"type":"emoji","emoji":"🔬"},"properties":{"title":{"title":[{"text":{"content":"TOPIC"}}]}},"children":[BLOCKS]}'
  ```

- If there are more than 100 blocks, append the remainder via PATCH to `/v1/blocks/{page_id}/children`
- Rate limit: wait 0.5 s between batch requests
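The block-parsing rules above can be sketched as follows. This is a sketch of the mapping, not the skill's actual implementation: `reportToBlocks`, `richText`, and the demotion of `#` headings to heading_2 are assumptions.

```js
const RICH_TEXT_LIMIT = 2000; // Notion caps each rich_text text.content at 2000 chars

// Split long strings into multiple rich_text objects to respect the limit.
function richText(text) {
  const parts = [];
  for (let i = 0; i < text.length; i += RICH_TEXT_LIMIT) {
    parts.push({ type: 'text', text: { content: text.slice(i, i + RICH_TEXT_LIMIT) } });
  }
  return parts;
}

function block(type, text) {
  return { object: 'block', type, [type]: { rich_text: richText(text) } };
}

// Map report lines to Notion blocks per the rules above: # headings,
// bullet points, and plain paragraphs; blank lines are dropped.
function reportToBlocks(report) {
  const blocks = [];
  for (const raw of report.split('\n')) {
    const line = raw.trimEnd();
    if (!line.trim()) continue;
    if (line.startsWith('### ')) blocks.push(block('heading_3', line.slice(4)));
    else if (line.startsWith('## ')) blocks.push(block('heading_2', line.slice(3)));
    else if (line.startsWith('# ')) blocks.push(block('heading_2', line.slice(2))); // demote H1
    else if (/^[-*] /.test(line)) blocks.push(block('bulleted_list_item', line.slice(2)));
    else blocks.push(block('paragraph', line));
  }
  return blocks;
}
```

The resulting array is what gets serialized into the `children` field of the `curl` call above, in slices of at most 100 blocks.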
### Phase 5: Announce

Report back with:

- Research topic
- Brief summary (2-3 key findings)
- Notion page URL: `https://www.notion.so/<page_id_without_dashes>`
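The URL in the last bullet is just the page id from the create-page response with its dashes stripped; a one-line sketch (the `notionPageUrl` helper name is hypothetical):

```js
// Build the shareable Notion URL from the dashed page id the API returns.
function notionPageUrl(pageId) {
  return 'https://www.notion.so/' + pageId.replace(/-/g, '');
}
```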
## Notes

- Always use `profile="openclaw"` for the browser
- Deep Research is under the "工具" (Tools) menu, NOT the model selector
- If Gemini needs login, announce failure; the user must log in manually
- The full pipeline should complete in ~25-30 min total
