Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

HARPA AI

v1.0.0

Automate web browsers, scrape pages, search the web, and run AI prompts on live websites via HARPA AI Grid REST API

2 stars · 746 downloads · 1 version current · 1 all-time
by Alex (@alxsharuk)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for alxsharuk/harpa-ai.

Prompt Preview: Install & Setup
Install the skill "HARPA AI" (alxsharuk/harpa-ai) from ClawHub.
Skill page: https://clawhub.ai/alxsharuk/harpa-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: HARPA_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install harpa-ai

ClawHub CLI

Package manager switcher

npx clawhub@latest install harpa-ai
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description describe browser automation and scraping via HARPA Grid. Declared requirement (HARPA_API_KEY) and optional tools (curl/wget) are exactly what this integration needs. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md gives concrete curl examples that only use the HARPA API and the HARPA_API_KEY. However, the API explicitly supports scraping pages using the user's browser session (cookies) and sending async results to arbitrary webhooks — both of which enable exfiltration of sensitive page content if misused. The instructions do not instruct the agent to read local files or other unrelated environment variables.
Install Mechanism
Instruction-only skill with no install spec or code. This is low-risk from an install perspective (nothing is written to disk by the skill itself).
Credentials
Only a single credential (HARPA_API_KEY) is required and declared as the primary credential; that aligns with the documented API usage. No unrelated secrets or system paths are requested.
Persistence & Privilege
always: false (default), and agent autonomy is not disabled. The skill does not request permanent/always-on presence or modification of other skills. No elevated system privileges are requested.
Assessment
This skill appears to do what it says: call the HARPA Grid REST API using a HARPA_API_KEY to control browser nodes and scrape pages. Before installing, consider the following in plain terms:

  • HARPA_API_KEY gives the service the ability to run actions in your browser nodes and access pages those nodes can reach, including pages behind your login cookies — treat it like a powerful secret, and don't reuse the key elsewhere.
  • The API supports resultsWebhook (posting results to an arbitrary URL). If you or an automation supply a webhook, scraped page contents (including potentially sensitive data) can be sent to that external server and retained for up to 30 days. Only use trusted webhook endpoints.
  • Ensure the HARPA Chrome extension and any nodes you use are legitimately installed from the official source (https://harpa.ai). A malicious or compromised extension/node could expose more data.
  • This skill is instruction-only (no code installed), so risks come from the remote API and what you ask it to scrape. Limit requests to non-sensitive pages, or run automation from isolated browser profiles/accounts when scraping protected content.
  • Operational advice: rotate the HARPA_API_KEY if you suspect misuse, monitor API activity if HARPA provides logs, and review the HARPA Grid docs and privacy policy to understand retention and sharing.

Safer request patterns include avoiding resultsWebhook, limiting node broadcasting, and not scraping authenticated pages.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🌐 Clawdis
Any bin: curl, wget
Env: HARPA_API_KEY
Primary env: HARPA_API_KEY
Tags: ai · automation · browser · latest · scraping
746 downloads · 2 stars · 1 version
Updated 6h ago
v1.0.0
MIT-0

HARPA Grid — Browser Automation API

HARPA Grid lets you orchestrate real web browsers remotely. You can scrape pages, search the web, run built-in or custom AI commands, and send AI prompts with full page context — all through a single REST endpoint.

Prerequisites

The user must have:

  1. HARPA AI Chrome Extension installed from https://harpa.ai
  2. At least one active Node — a browser with HARPA running (configured in the extension's AUTOMATE tab)
  3. A HARPA API key — obtained from the HARPA extension AUTOMATE tab. The key is provided as the HARPA_API_KEY environment variable.

If the user hasn't set up HARPA yet, direct them to: https://harpa.ai/grid/browser-automation-node-setup

API Reference

Endpoint: POST https://api.harpa.ai/api/v1/grid
Auth: Authorization: Bearer $HARPA_API_KEY
Content-Type: application/json

Full reference: https://harpa.ai/grid/grid-rest-api-reference


Actions

1. Scrape a Web Page

Extract full page content (as markdown) or specific elements via CSS/XPath/text selectors.

Full page scrape:

curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com",
    "timeout": 15000
  }'

Targeted element scrape (grab):

curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com/products",
    "grab": [
      {
        "selector": ".product-title",
        "selectorType": "css",
        "at": "all",
        "take": "innerText",
        "label": "titles"
      },
      {
        "selector": ".product-price",
        "selectorType": "css",
        "at": "all",
        "take": "innerText",
        "label": "prices"
      }
    ],
    "timeout": 15000
  }'

Grab fields:

| Field        | Required | Default   | Values                                                                                                                                   |
|--------------|----------|-----------|------------------------------------------------------------------------------------------------------------------------------------------|
| selector     | yes      |           | CSS (.class, #id), XPath (//h2), or text content                                                                                           |
| selectorType | no       | auto      | auto, css, xpath, text                                                                                                                     |
| at           | no       | first     | all, first, last, or a number                                                                                                              |
| take         | no       | innerText | innerText, textContent, innerHTML, outerHTML, href, value, id, className, attributes, styles, [attrName], (styleName)                      |
| label        | no       | data      | Custom label for extracted data                                                                                                            |
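The take field also accepts the [attrName] form for pulling raw attributes. A minimal sketch (the URL and selector are illustrative, and the curl call is guarded so nothing fires until HARPA_API_KEY is set):

```shell
# Grab the href attribute of every link on the page via the [attrName] take form.
# The URL and "a" selector are illustrative; adjust them for your target page.
payload='{
  "action": "scrape",
  "url": "https://example.com",
  "grab": [
    {
      "selector": "a",
      "selectorType": "css",
      "at": "all",
      "take": "[href]",
      "label": "links"
    }
  ],
  "timeout": 15000
}'

# Only call the API when a key is configured, so this sketch is safe to copy verbatim.
if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```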

2. Search the Web (SERP)

Perform a web search. Supports operators like site:, intitle:.

curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "serp",
    "query": "OpenClaw AI agent framework",
    "timeout": 15000
  }'

3. Run an AI Command

Execute one of 100+ built-in HARPA commands or a custom automation on a target page.

curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "command",
    "url": "https://example.com/article",
    "name": "Extract data",
    "inputs": "List all headings with their word counts",
    "connection": "HARPA AI",
    "resultParam": "message",
    "timeout": 30000
  }'

  • name — command name (e.g. "Summary", "Extract data", or any custom command)
  • inputs — pre-filled user inputs for multi-step commands
  • resultParam — HARPA parameter to return as the result (default: "message")
  • connection — AI model to use (e.g. "HARPA AI", "gpt-4o", "claude-3.5-sonnet")

4. Run an AI Prompt

Send a custom AI prompt with page context. Use {{page}} to inject the page content.

curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "prompt",
    "url": "https://example.com",
    "prompt": "Analyze the current page and extract all contact information. Webpage: {{page}}",
    "connection": "CHAT AUTO",
    "timeout": 30000
  }'

Common Parameters

| Parameter      | Required | Default | Description                                                                  |
|----------------|----------|---------|------------------------------------------------------------------------------|
| action         | yes      |         | scrape, serp, command, or prompt                                             |
| url            | no       |         | Target page URL (ignored by serp)                                            |
| node           | no       |         | Node ID ("r2d2"), multiple ("r2d2 c3po"), first N ("5"), or all ("*")        |
| timeout        | no       | 300000  | Max wait time in ms (max 5 minutes)                                          |
| resultsWebhook | no       |         | URL to POST results to asynchronously (retained 30 days)                     |
| connection     | no       |         | AI model for command/prompt actions                                          |

Node Targeting

  • Omit node to use the default node
  • "node": "mynode" — target a specific node by ID
  • "node": "node1 node2" — target multiple nodes
  • "node": "3" — use first 3 available nodes
  • "node": "*" — broadcast to all nodes
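The targeting rules above combine with any action. A sketch broadcasting a scrape to two named nodes (the IDs "r2d2" and "c3po" are illustrative; the request is guarded so it only fires when HARPA_API_KEY is set):

```shell
# Scrape the same page on two specific nodes in one request.
# Replace "r2d2 c3po" with your own node IDs from the AUTOMATE tab.
payload='{
  "action": "scrape",
  "url": "https://example.com",
  "node": "r2d2 c3po",
  "timeout": 15000
}'

# Guarded so the sketch is a no-op until a key is configured.
if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```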

Async Results via Webhook

Set resultsWebhook to receive results asynchronously. The action stays alive for up to 30 days, which is useful when target nodes are temporarily offline.

{
  "action": "scrape",
  "url": "https://example.com",
  "resultsWebhook": "https://your-server.com/webhook",
  "timeout": 15000
}
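Wrapped in a full request, that payload looks like the sketch below (the webhook URL is a placeholder; given that scraped content is retained for up to 30 days, point it only at an endpoint you control and trust):

```shell
# Async scrape: results are POSTed to the webhook instead of returned inline.
# "https://your-server.com/webhook" is a placeholder for your own trusted endpoint.
payload='{
  "action": "scrape",
  "url": "https://example.com",
  "resultsWebhook": "https://your-server.com/webhook",
  "timeout": 15000
}'

# Guarded so the sketch does not send anything until a key is configured.
if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```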

Tips

  • Scraping pages behind a login works because HARPA runs inside a real browser session, with the user's cookies and auth state.
  • Use the grab array with multiple selectors to extract structured data in a single request.
  • For long-running AI commands, increase timeout (max 300000ms / 5 min) or use resultsWebhook.
  • The {{page}} variable in prompts injects the full page content — use it to give AI context about the current page.
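While experimenting, piping responses through a JSON pretty-printer makes them easier to inspect without assuming any particular response schema (the exact response fields are in the Grid API reference linked above):

```shell
# Pretty-print whatever JSON the Grid API returns, without assuming its schema.
payload='{"action": "serp", "query": "OpenClaw AI agent framework", "timeout": 15000}'

# Guarded so the sketch is a no-op until HARPA_API_KEY is configured.
if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload" \
  | python3 -m json.tool
fi
```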
