Install
openclaw skills install cellcog

Any-to-any AI sub-agent — research, images, video, audio, music, podcasts, avatars, voice cloning, documents, spreadsheets, dashboards, 3D models, diagrams, and code in one request. Agent-to-agent protocol with multi-step iteration for high accuracy. #1 on DeepResearch Bench (Apr 2026) — deep reasoning meets all modalities, so all your work gets done, not just code.

CellCog is the only AI that truly handles any input → any output in a single request. No tool chaining. No orchestration complexity. One call, multiple deliverables.
CellCog pairs all modalities with frontier-level deep reasoning — as of April 2026, CellCog is #1 on the DeepResearch Bench: https://huggingface.co/spaces/muset-ai/DeepResearch-Bench-Leaderboard
Reference as many documents as you need—all at once:
prompt = """
Analyze all of these together:
<SHOW_FILE>/data/q4_earnings.pdf</SHOW_FILE>
<SHOW_FILE>/data/competitor_analysis.pdf</SHOW_FILE>
<SHOW_FILE>/data/market_research.xlsx</SHOW_FILE>
<SHOW_FILE>/recordings/customer_interview.mp3</SHOW_FILE>
<SHOW_FILE>/designs/product_mockup.png</SHOW_FILE>
Give me a comprehensive market positioning analysis based on all these inputs.
"""
File paths must be absolute and enclosed in <SHOW_FILE> tags. CellCog understands PDFs, spreadsheets, images, audio, video, code files, and more.
⚠️ Without SHOW_FILE tags, CellCog only sees the path as text — not the file contents.
❌ Analyze /data/sales.csv — CellCog can't read the file
✅ Analyze <SHOW_FILE>/data/sales.csv</SHOW_FILE> — CellCog reads it
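Since the tags are easy to forget, your agent can wrap paths mechanically before sending a prompt. A minimal sketch — `show_files` is a hypothetical helper name, not part of the SDK:

```python
import os

def show_files(prompt: str, *paths: str) -> str:
    """Append <SHOW_FILE> references to a prompt (hypothetical helper, not an SDK call)."""
    for path in paths:
        # SHOW_FILE paths must be absolute, per the rule above
        if not os.path.isabs(path):
            raise ValueError(f"SHOW_FILE paths must be absolute: {path}")
        prompt += f"\n<SHOW_FILE>{path}</SHOW_FILE>"
    return prompt

prompt = show_files("Analyze this sales data:", "/data/sales.csv")
# → "Analyze this sales data:\n<SHOW_FILE>/data/sales.csv</SHOW_FILE>"
```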
Just like Nano Banana accepts reference images, CellCog accepts reference files of any type — PDFs, spreadsheets, audio, code, images — as inputs the model reads during a task. Same mental model as any multimodal AI attachment.
Only attach what you intend to share. Anything inside a <SHOW_FILE> tag is uploaded to CellCog. Don't wrap credentials, private keys, .env files, SSH keys, or other sensitive material in SHOW_FILE tags — the same way you wouldn't paste them into Nano Banana, ChatGPT, or any other AI service's file upload.
Ask for completely different output types in ONE request:
prompt = """
Based on this quarterly sales data:
<SHOW_FILE>/data/sales_q4_2025.csv</SHOW_FILE>
Create ALL of the following:
1. A PDF executive summary report with charts
2. An interactive HTML dashboard for the leadership team
3. A 60-second video presentation for the all-hands meeting
4. A slide deck for the board presentation
5. An Excel file with the underlying analysis and projections
"""
CellCog handles the entire workflow — analyzing, generating, and delivering all outputs with consistent insights across every format.
⚠️ Be explicit about output artifacts. Without explicit artifact language, CellCog may respond with text analysis instead of generating a file.
❌ "Quarterly earnings analysis for AAPL" — could produce text or any format
✅ "Create a PDF report and an interactive HTML dashboard analyzing AAPL quarterly earnings." — CellCog creates actual deliverables
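One way to keep prompts deliverable-focused is to enumerate the artifacts explicitly, mirroring the multi-output example above. A sketch — `artifact_prompt` is a hypothetical helper, not an SDK call:

```python
def artifact_prompt(task: str, artifacts: list[str]) -> str:
    """Build a prompt that names every output artifact explicitly (hypothetical helper)."""
    lines = [task, "Create ALL of the following:"]
    # Numbered deliverables make it unambiguous that files, not text, are expected
    lines += [f"{i}. {a}" for i, a in enumerate(artifacts, 1)]
    return "\n".join(lines)

print(artifact_prompt(
    "Analyze AAPL quarterly earnings.",
    ["A PDF report with charts", "An interactive HTML dashboard"],
))
```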
Your sub-agent for quality work. Depth, accuracy, and real deliverables.
from cellcog import CellCogClient
If import fails, install the official CellCog Python SDK:
pip install -U cellcog
cellcog is the official Python SDK maintained by CellCog AI Inc. Source: https://github.com/CellCog/cellcog_python · Package: https://pypi.org/project/cellcog/
Environment variable (recommended): Set CELLCOG_API_KEY — the SDK picks it up automatically:
export CELLCOG_API_KEY="sk_..."
Get API key from: https://cellcog.ai/profile?tab=api-keys
client = CellCogClient(agent_provider="script")  # agent_provider explained below
status = client.get_account_status()
print(status) # {"configured": True, "email": "user@example.com", ...}
agent_provider is required when creating a CellCogClient. It identifies which agent framework is calling CellCog — not your individual agent's name, but the platform/tool you're running inside.
Examples: "openclaw", "claude-code", "cursor", "aider", "windsurf", "perplexity", "hermes", "script" (for standalone scripts).
Fire-and-forget — your agent stays free while CellCog works:
client = CellCogClient(agent_provider="openclaw")
result = client.create_chat(
prompt="Research quantum computing advances in 2026",
notify_session_key="agent:main:main", # OpenClaw session key
task_label="quantum-research", # Label for notifications
chat_mode="agent",
)
# Returns IMMEDIATELY — daemon delivers results to your session when done
Blocks until done — simplest pattern:
client = CellCogClient(agent_provider="cursor") # or "claude-code", "aider", "script", etc.
result = client.create_chat(
prompt="Research quantum computing advances in 2026",
task_label="quantum-research",
chat_mode="agent",
)
# Blocks until done — result contains everything
print(result["message"])
CellCog orchestrates 21+ frontier foundation models. Credit consumption is unpredictable and varies by task complexity. Credits used are reported in every completion notification.
Returns immediately. A background daemon monitors via WebSocket and delivers results to your session when done. Your agent stays free to take new instructions, start other tasks, or continue working.
result = client.create_chat(
prompt="Your task description",
notify_session_key="agent:main:main", # Required — your OpenClaw session key
task_label="my-task", # Label shown in notifications
chat_mode="agent",
)
Blocks until CellCog finishes. Works with any agent — OpenClaw, Cursor, Claude Code, or any Python environment.
result = client.create_chat(
prompt="Your task description",
task_label="my-task",
chat_mode="agent",
timeout=1800, # 30 min (default). Use 3600 for complex jobs.
)
print(result["message"])
print(result["status"]) # "completed" | "timeout"
| Scenario | Best Mode | Why |
|---|---|---|
| OpenClaw + long task + stay free | Notify | Agent keeps working, gets notified when done |
| OpenClaw + chaining steps (research → summarize → PDF) | Wait | Each step feeds the next — simpler sequential workflows |
| OpenClaw + quick task | Either | Both return fast for simple tasks |
| Non-OpenClaw agent | Wait | Notify mode is OpenClaw-only |
Notify mode is more productive (agent never blocks). Wait mode is simpler to reason about, but blocks your agent for the duration.
# Wait mode (default)
result = client.send_message(
chat_id="abc123",
message="Focus on hardware advances specifically",
)
# Notify mode (OpenClaw)
result = client.send_message(
chat_id="abc123",
message="Focus on hardware advances specifically",
notify_session_key="agent:main:main",
task_label="continue-research",
)
If create_chat() or wait_for_completion() times out, CellCog is still working. The timeout response includes recent progress:
completion = client.wait_for_completion(chat_id="abc123", timeout=1800)
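Because a timeout means "still working", not "failed", a timed-out call can simply be re-entered on the same chat_id. A minimal sketch under the return shapes documented here — the retry count is an assumption, not an SDK default:

```python
def wait_with_retries(client, chat_id: str, timeout: int = 1800, max_retries: int = 2) -> dict:
    """Re-enter wait_for_completion after a timeout; CellCog keeps working in the background."""
    result = {}
    for _ in range(max_retries + 1):
        result = client.wait_for_completion(chat_id=chat_id, timeout=timeout)
        if result["status"] != "timeout":
            return result           # "completed" — done
        print(result["message"])    # timeout responses include recent progress
    return result                   # still operating after all retries
```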
result = client.create_chat(
prompt="...",
task_label="...",
chat_mode="agent", # See Chat Modes below
project_id="...", # install project-cog for details
agent_role_id="...", # install project-cog for details
enable_cowork=True, # install cowork-cog for details
cowork_working_directory="/Users/...", # install cowork-cog for details
)
Every SDK method returns the same shape:
{
"chat_id": str, # CellCog chat ID
"is_operating": bool, # True = still working, False = done
"status": str, # "completed" | "tracking" | "timeout" | "operating"
"message": str, # THE printable message — always print this in full
}
⚠️ Always print the entire result["message"]. Truncating or summarizing it will lose critical information including generated file paths, credits used, and follow-up instructions.
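Given the uniform shape, one handler can cover every SDK method. A sketch over an illustrative result dict (the values below are made up, not real SDK output):

```python
def handle_result(result: dict) -> bool:
    """Print the full message (never truncate) and report whether the task is done."""
    print(result["message"])            # contains file paths, credits used, follow-ups
    return not result["is_operating"]   # True once the task has finished

# Illustrative values only:
done = handle_result({
    "chat_id": "abc123",
    "is_operating": False,
    "status": "completed",
    "message": "Report saved to /workspace/reports/q4_analysis.pdf",
})
# done is True
```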
get_history(chat_id) — Full chat history (when original delivery was missed or you need to review). Returns the same shape; if still operating, message shows progress so far.
result = client.get_history(chat_id="abc123")
get_status(chat_id) — Lightweight status check (no history fetch):
status = client.get_status(chat_id="abc123")
print(status["is_operating"]) # True/False
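get_status is cheap enough to poll; once is_operating flips, get_history fetches the full results. A sketch combining the two calls documented above — the polling interval and cap are assumptions:

```python
import time

def poll_until_done(client, chat_id: str, interval: float = 30.0, max_polls: int = 120) -> dict:
    """Poll get_status until the chat finishes, then fetch full results via get_history."""
    for _ in range(max_polls):
        if not client.get_status(chat_id=chat_id)["is_operating"]:
            return client.get_history(chat_id=chat_id)  # full message + missed files
        time.sleep(interval)
    raise TimeoutError(f"chat {chat_id} still operating after {max_polls} polls")
```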
| Mode | Best For | Speed | Min Credits |
|---|---|---|---|
| "agent" | Most tasks — images, audio, dashboards, spreadsheets, presentations | Fast (seconds to minutes) | 100 |
| "agent core" | Coding, co-work, terminal operations | Fast | 50 |
| "agent team" | Deep research & multi-angled reasoning across every modality | Slower (5-60 min) | 500 |
| "agent team max" | High-stakes work where extra reasoning depth justifies the cost | Slowest | 2,000 |
"agent" (default) — Most versatile. Handles most tasks excellently, including deep research when guided.
"agent core" — Lightweight context for code, terminal, and file operations. Multimedia tools load on demand. Requires Co-work (CellCog Desktop). See code-cog.
"agent team" — A team of agents that debates, cross-validates, and delivers comprehensive results. The only platform with deep reasoning across every modality.
"agent team max" — Same Agent Team with all settings maxed. Quality gain is incremental (5-10%) but meaningful for costly decisions.

Include local file paths in your prompt with <SHOW_FILE> tags (absolute paths required):
prompt = """
Analyze this sales data and create a report:
<SHOW_FILE>/path/to/sales.csv</SHOW_FILE>
"""
Use <GENERATE_FILE> tags to specify where output files should be stored on your machine. Essential for deterministic workflows where the next step needs to know the file path in advance.
prompt = """
Create a PDF report on Q4 earnings:
<GENERATE_FILE>/workspace/reports/q4_analysis.pdf</GENERATE_FILE>
"""
Output downloads to the specified path instead of the default ~/.cellcog/chats/{chat_id}/.
The SDK automatically downloads files from CellCog responses:
GENERATE_FILE tags: files download to the path you specified.
No tags: files download to the default ~/.cellcog/chats/{chat_id}/.
Downloaded file paths appear in result["message"]. The SDK tracks seen messages — files are only downloaded once.
If you missed files or need to re-sync:
result = client.get_history(chat_id="abc123")
get_history() re-processes the entire chat and downloads any missed files to their original destinations.
Every chat is accessible at https://cellcog.ai. When work gets complex or the SDK hits issues, direct your human to the web platform to view, continue, or take over directly.
CellCog is a sub-agent — not an API. Your agent offloads complex work to CellCog, which reasons, plans, and executes multi-tool workflows internally. A proprietary agent-to-agent communication protocol ensures high accuracy on first output, and because these are agent threads (not stateless API calls), every aspect of every generation can be refined through multi-step iteration.
Under the hood: frontier models across every domain, upgraded weekly. CellCog routes to the right models automatically — your agent just describes what it needs.
Install capability skills for detailed guidance:
| Category | Skills |
|---|---|
| Research & Analysis | research-cog fin-cog crypto-cog data-cog news-cog |
| Video & Cinema | video-cog cine-cog insta-cog tube-cog seedance-cog |
| Images & Design | image-cog brand-cog meme-cog banana-cog 3d-cog gif-cog sticker-cog |
| Audio & Music | audio-cog music-cog pod-cog |
| Avatars & Personas | avatar-cog |
| Documents & Slides | docs-cog slides-cog spreadsheets-cog resume-cog legal-cog |
| Apps & Prototypes | dash-cog game-cog proto-cog diagram-cog |
| Creative | comi-cog story-cog learn-cog travel-cog |
| Development | code-cog cowork-cog project-cog think-cog |
This skill shows you HOW to use CellCog. Capability skills show you WHAT's possible.
The notify_session_key tells CellCog where to deliver results:
| Context | Session Key |
|---|---|
| Main agent | "agent:main:main" |
| Sub-agent | "agent:main:subagent:{uuid}" |
| Telegram DM | "agent:main:telegram:dm:{id}" |
| Discord group | "agent:main:discord:group:{id}" |
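The key formats above are plain strings, so they can be assembled mechanically. A sketch — `session_key` is a hypothetical helper built from the table, not an SDK function:

```python
def session_key(context: str, ident: str = "") -> str:
    """Build a notify_session_key from the formats in the table above (hypothetical helper)."""
    formats = {
        "main": "agent:main:main",
        "subagent": "agent:main:subagent:{id}",
        "telegram_dm": "agent:main:telegram:dm:{id}",
        "discord_group": "agent:main:discord:group:{id}",
    }
    return formats[context].format(id=ident)

session_key("main")                  # → "agent:main:main"
session_key("telegram_dm", "12345")  # → "agent:main:telegram:dm:12345"
```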
Resilient delivery: If your session ends before completion, results are automatically delivered to the parent session (e.g., sub-agent → main agent).
In notify mode, your agent is free — you can send additional instructions to an operating chat at any time:
client.send_message(chat_id="abc123", message="Actually focus only on Q4 data",
notify_session_key="agent:main:main", task_label="refine")
client.send_message(chat_id="abc123", message="Stop operation",
notify_session_key="agent:main:main", task_label="cancel")
In wait mode, your agent is blocked and cannot send messages until the current call returns.
For error handling, recovery patterns, ticket submission, and daemon troubleshooting:
docs = client.get_support_docs()