Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Gemini Deep Research → Notion

Trigger Gemini Deep Research via browser and save results to Notion. Use when the user asks to "deep research" a topic, says "gemini deep research", or wants...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 29 · 0 current installs · 0 all-time installs
by Andy Xie (@PalmPalm7)
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
high confidence
⚠️ Purpose & Capability
The declared purpose (trigger Gemini Deep Research and export to Notion) matches the instructions, but the skill metadata lists no required env vars or config paths while the SKILL.md explicitly requires reading a Notion API key from ~/.config/notion/api_key and using a browser profile. README also claims it 'runs as a subagent' while SKILL.md mandates running in the main session — a direct contradiction. The hard-coded Notion parent page ID in SKILL.md is another oddity (the README tells the user to update it). These mismatches indicate sloppy or incomplete packaging and require clarification.
⚠️ Instruction Scope
SKILL.md instructs the agent to (a) open the managed OpenClaw browser profile, interact with the Gemini web UI, and save the conversation URL, (b) sleep/poll for up to ~30 minutes (exec sleep), (c) extract content from DOM elements in chunks, write the concatenated report to /tmp/deep_research_<timestamp>.md, and (d) call the Notion API via curl. Reading ~/.config/notion/api_key is explicitly required. These actions are within the stated purpose but the SKILL.md reads local secret files and writes temporary files without those accesses being declared in the metadata — a scope and transparency issue.
Install Mechanism
No install spec is provided and there are no code files — this is an instruction-only skill. That is the lowest-risk install mechanism (nothing is downloaded or written during install).
⚠️ Credentials
The skill metadata declares no required environment variables or config paths, but SKILL.md expects a Notion API key: it reads ~/.config/notion/api_key and then uses $NOTION_KEY in the curl command. Reading a local secret file is sensitive but reasonable for Notion export; however, the access should be declared explicitly, and the skill should settle on a single consistent method (file or env var). The hard-coded parent page ID may cause unexpected behavior if the user doesn't update it. Overall, the requested secrets and access are plausible for the feature, but their omission from the metadata is a red flag.
Persistence & Privilege
always:false and no system-wide changes are requested. The skill requires running in the main session (to access the browser), which increases its runtime privileges compared to a subagent but is functionally necessary for browser automation. The README's claim that it runs as a subagent contradicts the SKILL.md requirement to run in the main session and should be corrected.
What to consider before installing
Before installing, verify and fix the inconsistencies:

  1. Confirm you are comfortable letting the skill read a local Notion API key — SKILL.md reads ~/.config/notion/api_key and uses $NOTION_KEY in requests; place a dedicated, least-privilege Notion integration token there if you proceed.
  2. Update the hard-coded Notion parent page ID in SKILL.md to a page you control (the README mentions this, but the skill ships with a default UUID).
  3. Decide whether you accept the main-session requirement: the skill will drive the managed browser and may run for ~25–30 minutes, using exec sleep and writing /tmp files.
  4. Ask the publisher to declare the required env vars/config paths in the metadata and to correct the README contradiction (subagent vs. main session).

If you cannot confirm these items or trust the source, do not install. If you proceed, use a Notion token with minimal scope and monitor created pages during the first run.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.1.0
Download zip
latest: vk97dhswygc9nnamcgs4byc9gcx836gr7

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Gemini Deep Research → Notion

Execution Mode

Run ALL steps in the MAIN SESSION. Do NOT spawn a subagent.

The browser tool (OpenClaw managed profile) is only available in the main session. Subagents cannot access the browser, so all browser automation must happen here.

Reply first: "🔬 Deep Research starting for: [topic]. This takes ~25 min. I'll update you when done."

Then execute all phases below sequentially.


Instructions

Complete ALL steps below in the main session.

Phase 1: Trigger Deep Research

  1. browser action=open profile=openclaw targetUrl="https://gemini.google.com/app"
  2. Snapshot the page, find the text input, and type the research query. Always prepend "请用中文回答。" ("Please answer in Chinese.") to the query so the research output is in Chinese.
  3. Click "工具" (Tools) button (has page_info icon) → click "Deep Research" in the menu
  4. Click Send to submit the query
  5. Wait for research plan to appear (~10s), then click "Start research" / "开始研究" button
    • If snapshot-click doesn't work, use JS: (() => { var btn = Array.from(document.querySelectorAll('button')).find(b => /Start research|开始研究/.test(b.textContent.trim())); if (btn) { btn.click(); return 'clicked'; } return 'not found'; })()
  6. Verify research started: button should be disabled, status shows "Researching X websites..." or "正在研究..."
  7. Save the conversation URL from the browser

Phase 2: Wait for Completion

  1. Run exec("sleep 1200") (20 minutes), then process(poll, timeout=1200000) to wait it out
  2. After waking, check status via JS: (() => { var el = document.querySelectorAll('message-content')[1]; return el ? el.innerText.substring(0, 200) : 'NOT_FOUND'; })()
  3. Look for completion signals: "I've completed your research" or "已完成"
  4. If still running, sleep another 600s and check again (max 2 retries)
  5. If failed/stuck after retries, announce the failure and exit
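
The wait-and-retry loop above can be sketched in Python. `check_status` is a hypothetical stand-in for the JS status probe run in the browser, and the wait durations are parameters so the loop matches the 20-minute initial sleep with 600 s retries; this is a sketch of the control flow, not part of the skill itself:

```python
import time

def wait_for_completion(check_status, initial_wait=1200, retry_wait=600, max_retries=2):
    """Sleep, then poll for a completion signal, retrying a bounded number of times.

    `check_status` is a hypothetical callable standing in for the JS snippet
    above; it should return the first ~200 chars of the latest status message.
    """
    time.sleep(initial_wait)
    for attempt in range(max_retries + 1):
        status = check_status()
        # Completion signals per Phase 2, step 3
        if "I've completed your research" in status or "已完成" in status:
            return True
        if attempt < max_retries:
            time.sleep(retry_wait)
    # Failed/stuck after retries: caller should announce the failure and exit
    return False
```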

Phase 3: Extract Report

  1. Count message-content elements: document.querySelectorAll('message-content').length
  2. The research report is in the LAST message-content element (usually index 2)
  3. Get total length: document.querySelectorAll('message-content')[2]?.innerText?.length
  4. Extract in 8000-char chunks using substring: document.querySelectorAll('message-content')[N]?.innerText?.substring(START, END)
  5. Concatenate all chunks into the full report text
  6. Save to a temp file: write full report to /tmp/deep_research_<timestamp>.md
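
The chunked extraction can be sketched as a small helper. `get_substring` is a hypothetical stand-in for the browser-side `innerText.substring(START, END)` call; the chunking arithmetic is the point:

```python
def extract_in_chunks(get_substring, total_length, chunk_size=8000):
    """Reassemble the full report by pulling fixed-size substrings.

    `get_substring(start, end)` is a hypothetical stand-in for the JS
    `innerText.substring(START, END)` call executed in the browser.
    """
    parts = []
    for start in range(0, total_length, chunk_size):
        end = min(start + chunk_size, total_length)
        parts.append(get_substring(start, end))
    return "".join(parts)
```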

Phase 4: Export to Notion

Parent page ID: 31a4cfb5-c92b-809f-9d8a-dd451718a017 (Deep Research Database)

  1. Read the Notion API key: cat ~/.config/notion/api_key
  2. Parse the report into Notion blocks:
    • Lines starting with # → heading_2/heading_3 blocks
    • Bullet points → bulleted_list_item blocks
    • Regular text → paragraph blocks
    • Add a callout at top: "🔬 Generated by Gemini Deep Research on YYYY-MM-DD"
    • Split rich_text at 2000 chars
  3. Create the page via Notion API:
    curl -s -X POST "https://api.notion.com/v1/pages" \
      -H "Authorization: Bearer $NOTION_KEY" \
      -H "Notion-Version: 2025-09-03" \
      -H "Content-Type: application/json" \
      -d '{"parent":{"page_id":"31a4cfb5-c92b-809f-9d8a-dd451718a017"},"icon":{"type":"emoji","emoji":"🔬"},"properties":{"title":{"title":[{"text":{"content":"TOPIC"}}]}},"children":[BLOCKS]}'
    
  4. If >100 blocks, append remaining via PATCH to /v1/blocks/{page_id}/children
  5. Rate limit: wait 0.5s between batch requests
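
A minimal sketch of the parsing step above, assuming the Notion API block shapes (`callout`, `heading_2`, `heading_3`, `bulleted_list_item`, `paragraph`), the 2000-char rich_text limit, and the 100-children-per-request cap. The exact heading mapping and all helper names here are illustrative assumptions, not part of the skill:

```python
def split_rich_text(text, limit=2000):
    """Notion caps each rich_text content string at 2000 chars; split to fit."""
    return [{"text": {"content": text[i:i + limit]}}
            for i in range(0, len(text), limit)] or [{"text": {"content": ""}}]

def line_to_block(line):
    """Map one report line to a Notion block (mapping choice is an assumption)."""
    if line.startswith("### "):
        return {"type": "heading_3", "heading_3": {"rich_text": split_rich_text(line[4:])}}
    if line.startswith("#"):
        return {"type": "heading_2",
                "heading_2": {"rich_text": split_rich_text(line.lstrip("#").strip())}}
    if line.startswith(("- ", "* ")):
        return {"type": "bulleted_list_item",
                "bulleted_list_item": {"rich_text": split_rich_text(line[2:])}}
    return {"type": "paragraph", "paragraph": {"rich_text": split_rich_text(line)}}

def report_to_blocks(report, date_str):
    """Build the block list (callout first), batched in groups of <= 100."""
    blocks = [{"type": "callout", "callout": {
        "icon": {"type": "emoji", "emoji": "🔬"},
        "rich_text": split_rich_text(
            f"Generated by Gemini Deep Research on {date_str}")}}]
    blocks += [line_to_block(l) for l in report.splitlines() if l.strip()]
    # First batch goes in the create-page call; the rest via PATCH .../children
    return [blocks[i:i + 100] for i in range(0, len(blocks), 100)]
```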

Phase 5: Announce

Report back with:

  • Research topic
  • Brief summary (2-3 key findings)
  • Notion page URL: https://www.notion.so/<page_id_without_dashes>
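
The URL in the last bullet is just the page ID with dashes stripped; a one-line sketch (function name is illustrative):

```python
def notion_page_url(page_id):
    """Build the shareable Notion URL from a dashed page ID."""
    return "https://www.notion.so/" + page_id.replace("-", "")
```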

Notes

  • Always use profile="openclaw" for browser
  • Deep Research is under "工具" (Tools) menu, NOT the model selector
  • If Gemini needs login, announce failure — user must log in manually
  • The full pipeline should complete in ~25-30 min total

Files

2 total
