## Install

```bash
openclaw skills install external-ai-integration
```

Leverage external AI models (ChatGPT, Claude, Hugging Face, etc.) as tools via browser automation (Chrome Relay) and the optional Hugging Face API.

This skill provides patterns for using external AI models as tools that the assistant can call on demand. It extends the existing browser-automation and API-integration skills, enabling the assistant to:
- Use Chrome Relay to automate interactions with ChatGPT, Claude, Gemini, or any other web-based LLM that requires a browser interface.
- Call models hosted on Hugging Face Spaces or the Inference API directly over HTTP.

### 1. Browser Automation (Chrome Relay)
**Prerequisites:**

- A Chrome profile with the target sites (e.g. chatgpt.com, claude.ai) already logged in (session cookies present).
- Familiarity with the Browser Automation playbook (`memory/patterns/playbooks.md` – "Browser Automation (Chrome Relay)").

**Steps:**

1. Attach Chrome Relay to the logged-in profile (`profile="chrome"`).
2. Take a snapshot of the page (`refs="aria"` for stable references).
3. Type the prompt, send it, wait for the reply, and extract it from the last message bubble.

**Example workflow:**
```python
import time

# This is a conceptual example; the actual implementation uses browser tool calls.
def ask_chatgpt(prompt):
    # 1. Ensure Chrome Relay is attached
    browser(action="open", profile="chrome", targetUrl="https://chatgpt.com")
    # 2. Snapshot to get references
    snap = browser(action="snapshot", refs="aria")
    # 3. Find the input field (aria role="textbox") and the send button
    input_ref = snap.find_element(role="textbox", name="Message")
    send_ref = snap.find_element(role="button", name="Send")
    # 4. Type the prompt and click send
    browser(action="act", request={"kind": "type", "ref": input_ref, "text": prompt})
    browser(action="act", request={"kind": "click", "ref": send_ref})
    # 5. Wait for the response (simplified; a real implementation would poll)
    time.sleep(10)
    # 6. Snapshot again and extract the response from the last message bubble
    snap2 = browser(action="snapshot", refs="aria")
    response_element = snap2.find_last_message()
    return response_element.text
```
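The fixed `time.sleep(10)` is the fragile part of this sketch: slow replies get truncated and fast ones waste time. A generic polling helper that a real implementation could use to re-snapshot until the reply appears (this helper is an assumption, not part of the browser tool API):

```python
import time

def wait_until(predicate, timeout=30.0, interval=1.0):
    """Poll predicate() until it returns a truthy value or the timeout expires.

    Returns the truthy value on success; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

Step 5 above could then become something like `wait_until(lambda: response_ready(), timeout=30)`, where `response_ready` is a hypothetical function that re-snapshots and checks the last message bubble.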
**Key considerations:**

- Browser automation is fragile: page structure, selectors, and login state can change at any time (see Error Handling and Fallbacks below).

### 2. Hugging Face API

For models hosted on Hugging Face Spaces or the Inference API, you can call them directly via HTTP requests.
**Prerequisites:**

- A Hugging Face API token (stored in 1Password or as an environment variable).
- The ID of the model you want to call (e.g. "gpt2", "google/flan-t5-large", "microsoft/DialoGPT-medium").

**Steps:**

1. Retrieve the API token (from 1Password, an environment variable, or `~/.huggingface/token`).
2. Call the Inference API with `curl`, or via `exec` with the `requests` Python module.

**Example script (using curl):**
```bash
#!/bin/bash
set -e

MODEL="google/flan-t5-large"
PROMPT="Translate English to German: How are you?"
API_TOKEN=$(op read "op://Personal/HuggingFace/api_token")

curl -s "https://api-inference.huggingface.co/models/$MODEL" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"inputs\": \"$PROMPT\"}" | jq -r '.[0].generated_text'
```
**Example Python function (using requests):**

```python
import os

import requests

def hf_inference(model, inputs, parameters=None):
    api_token = os.getenv("HF_TOKEN")  # or retrieve via 1Password
    url = f"https://api-inference.huggingface.co/models/{model}"
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": inputs}
    if parameters:
        payload.update(parameters)
    resp = requests.post(url, headers=headers, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```
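Because `hf_inference` merges `parameters` into the request body, serverless options such as `wait_for_model` pass straight through. Factoring the payload construction out makes that behavior easy to check in isolation (this helper is a sketch, not part of any Hugging Face client library):

```python
def build_hf_payload(inputs, parameters=None):
    """Build the JSON body for a Hugging Face Inference API call."""
    payload = {"inputs": inputs}
    if parameters:
        payload.update(parameters)
    return payload
```

For example, `build_hf_payload("Hello", {"options": {"wait_for_model": True}})` produces a body with both the `inputs` and the `options` keys at the top level, which is the shape the Inference API expects.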
**Key considerations:**

- Cold models need time to load; pass `{"options": {"wait_for_model": true}}` in `parameters` to wait instead of failing.

### 3. Inline Integration

Instead of spawning a sub-agent, the assistant calls external AI within its own reasoning flow.
**Pattern:** Route each task type to the external model best suited to it, then fold the response back into the assistant's own reasoning.

**Example decision logic:**
```python
def external_ai_assist(task_type, prompt):
    if task_type == "code_review":
        # Use Claude via browser automation
        return ask_claude(prompt)
    elif task_type == "translation":
        # Use a Hugging Face translation model
        return hf_inference("Helsinki-NLP/opus-mt-en-de", prompt)
    elif task_type == "creative_writing":
        # Use ChatGPT via browser automation
        return ask_chatgpt(prompt)
    else:
        raise ValueError(f"No external AI configured for {task_type}")
```
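The if/elif chain works, but every new task type means editing the dispatcher. An alternative sketch uses a registration decorator, so adding a route is a one-line change (the `translate` handler body below is a stand-in; a real one would call `hf_inference` as above):

```python
# A routing table lets new task types be registered without touching the dispatcher.
ROUTES = {}

def register(task_type):
    """Decorator that registers a handler callable for a task type."""
    def decorator(fn):
        ROUTES[task_type] = fn
        return fn
    return decorator

@register("translation")
def translate(prompt):
    # Stand-in for hf_inference("Helsinki-NLP/opus-mt-en-de", prompt)
    return f"hf:{prompt}"

def external_ai_assist(task_type, prompt):
    handler = ROUTES.get(task_type)
    if handler is None:
        raise ValueError(f"No external AI configured for {task_type}")
    return handler(prompt)
```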
### 4. Prompt Adaptation

External models may require different prompting styles than the assistant's native model.

- Instruction-tuned models often expect an explicit task prefix (e.g. "Translate English to German: ..." for T5).

**Example prompt for code review:**
```
You are an expert software engineer reviewing the following code snippet. Please:
1. Identify potential bugs or security issues.
2. Suggest performance improvements.
3. Comment on code style and readability.
4. Output your review as JSON with keys "bugs", "performance", "style".

Code:

def calculate_average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)
```
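Since the prompt asks for JSON output, the reply still needs defensive parsing: chat models frequently wrap JSON in code fences or surrounding prose. A sketch of an extractor (`extract_review_json` is a hypothetical helper; the key names match the prompt above):

```python
import json
import re

def extract_review_json(reply):
    """Pull the first JSON object out of a model reply.

    Models often wrap JSON in markdown fences or prose, so scan for the
    outermost braces instead of calling json.loads() on the raw reply.
    """
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in reply")
    data = json.loads(match.group(0))
    missing = {"bugs", "performance", "style"} - data.keys()
    if missing:
        raise ValueError(f"review JSON missing keys: {missing}")
    return data
```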
### 5. Error Handling and Fallbacks
External services can fail; plan for graceful degradation.
- **Browser automation failures:** Captchas, login required, network errors. Fallback: try Hugging Face API or continue without external help.
- **API failures:** Rate limits, model not found, token invalid. Fallback: use a different model or skip external step.
- **Timeouts:** Set reasonable timeouts (e.g., 30 seconds for browser automation, 10 seconds for API). Fallback: proceed with assistant's own reasoning.
- **Log failures:** Record external AI failures in `memory/YYYY-MM-DD.md` with the tag `external-ai-failure` for later analysis.
**Example fallback structure:**
```python
try:
    response = ask_chatgpt(prompt)
except (BrowserError, TimeoutError) as e:
    log_failure("ChatGPT", e)
    # Fall back to the Hugging Face API
    try:
        response = hf_inference("google/flan-t5-xxl", prompt)
    except Exception as e:
        log_failure("All external AI", e)
        response = None

if response:
    integrate(response)
else:
    # Continue with the assistant's own reasoning
    pass
```
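The same pattern generalizes to any ordered list of providers. A reusable fallback-chain sketch (the provider callables and the logging hook are placeholders for the functions above):

```python
def ask_with_fallback(prompt, providers, log_failure=print):
    """Try each (name, callable) provider in order; return the first success.

    Returns None if every provider fails, so the caller can continue
    with the assistant's own reasoning.
    """
    for name, fn in providers:
        try:
            return fn(prompt)
        except Exception as e:  # each provider can fail in its own way
            log_failure(f"{name} failed: {e}")
    return None
```

A hypothetical call site: `ask_with_fallback(prompt, [("ChatGPT", ask_chatgpt), ("flan-t5-xxl", lambda p: hf_inference("google/flan-t5-xxl", p))])`.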
### 6. Example Scenarios

**Scenario 1: Code review second opinion.** The assistant is asked to review a complex React component. It uses Claude (via Chrome Relay) for a detailed second opinion.

Steps:

1. Build a review prompt containing the component source.
2. Call `ask_claude(prompt)` using browser automation.
3. Merge Claude's feedback with the assistant's own review.

**Scenario 2: Translation.** The user provides a paragraph in English and asks for a German translation. The assistant calls a Hugging Face translation model.

Steps:

1. Format the prompt as `"Translate English to German: <text>"`.
2. Call `hf_inference("Helsinki-NLP/opus-mt-en-de", prompt)`.

**Scenario 3: Brainstorming.** The user needs ideas for a blog post title. The assistant uses ChatGPT to generate 10 options.

Steps:

1. Build a prompt asking for 10 title options.
2. Call `ask_chatgpt(prompt)` and present the candidates to the user.

**Scenario 4: Blind-spot check.** The user asks for a strategic analysis of a business decision. The assistant uses its own reasoning, then asks ChatGPT for potential blind spots.

Steps:

1. Produce the assistant's own analysis first.
2. Ask ChatGPT what the analysis might be missing.
3. Integrate any genuine blind spots into the final answer.
### 7. Related Files

- `docs/browser-automation.md` – Chrome Relay setup and commands.
- `skills/huggingface/SKILL.md` – Hugging Face API usage.
- `skills/1password/SKILL.md` – retrieving secrets.
- `memory/patterns/playbooks.md` – Browser Automation playbook.
- `scripts/external_ai_integration.py` – this skill's core implementation.
- `playbooks/external-ai-integration-playbook.md` – orchestration playbook.

When a task would benefit from external AI reasoning, read this skill to decide which model to use and how to call it. Store successful patterns in `memory/patterns/tools.md`. Update `pending.md` if external AI fails repeatedly and needs manual configuration.
This skill increases autonomy by expanding the assistant's toolset with external AI models, allowing it to tackle a wider range of tasks without spawning sub-agents while retaining control over the workflow.