Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

web-llm-chat

Chat with web-based LLMs through the Chrome Relay extension. Provides free access to powerful web search and RAG capabilities without API costs. Currently supported: Qwen AI (chat.qwen.ai).

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 32 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (web-based LLM chat via Chrome Relay) match the included code and docs: it connects to a local OpenClaw gateway/relay and a Qwen chat tab. However, the registry metadata claims no required config paths or credentials, while the code expects an OpenClaw config file (openclaw.json) containing gateway.auth.token. That undeclared requirement is an inconsistency.
Instruction Scope
SKILL.md tells the agent to run the included Node script, which connects to a local gateway and Relay, reads page content, and extracts responses. The runtime instructions and the script only target localhost (gateway/relay and local CDP via WebSocket) and the browser tab; there are no external network endpoints in the docs. The script does, however, read local OpenClaw config files to derive a relay token; this file access is outside what the metadata advertised and should be explicitly declared.
Install Mechanism
No install spec is provided (instruction-only with code files). The package.json only depends on the widely used 'ws' npm package; SKILL.md tells users to 'npm install ws'. There are no remote downloads or extraction of arbitrary archives in the skill bundle.
Credentials
The manifest lists no required environment variables or config paths, but the script reads openclaw.json from multiple filesystem locations (including E:\.openclaw\... and ~/.openclaw/...), extracts gateway.auth.token, and derives an HMAC relay token. Accessing that local token is sensitive and should have been declared as a required config/credential. The script also optionally reads an env flag for debug output (QWEN_CHAT_DEBUG_EXTRACT). Requiring access to a gateway auth token is proportionate for the stated operation, but the omission from declared requirements is a privacy/visibility concern.
Persistence & Privilege
The skill is not forced-always, does not request elevated platform-wide privileges, and does not modify other skills or system-wide config. It runs as a user-level Node script and communicates with local gateway/extension endpoints only.
What to consider before installing
Key things to consider before installing/running this skill:

  • It will read your local OpenClaw configuration file (openclaw.json) to obtain gateway.auth.token and derive a relay token. This token is sensitive, and the skill metadata does not declare this file/credential requirement; treat that as a privacy gap.
  • The script connects to localhost (127.0.0.1) ports used by your OpenClaw gateway/relay and to the browser tab via CDP. According to the provided code it does not contact remote servers, but you should still review the source (scripts/qwen_chat.js) yourself before running.
  • If you trust the skill: run it on a machine/account that contains only the necessary credentials, inspect outgoing network traffic to ensure nothing is exfiltrating data, and install npm dependencies from official registries.
  • If you do not want the skill to access your gateway token: do not run it, or modify the script to accept the token via an explicit user-supplied environment variable or prompt, and update the skill metadata to declare that requirement.

If you want higher confidence, ask the publisher to: (1) declare required config paths/credentials in the registry metadata, (2) add an option to pass the gateway token explicitly (instead of reading from disk), and (3) document exactly what files are read and why.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest: vk97fvgmbp739as9n4f12tcahtd835g9x

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Web LLM Chat Skill

Interact with web-based LLMs through the Chrome Relay extension. This skill enables automated conversations with AI models, supporting both simple queries and multi-turn research workflows.

Currently supported: Qwen AI (chat.qwen.ai) — more models coming soon.

Why This Skill?

The Problem

  • Web search APIs are expensive: Services like Brave Search API and Tavily require API keys and paid subscriptions, creating ongoing costs.
  • Limited research capabilities: Traditional search APIs return raw results, lacking the reasoning and synthesis capabilities of modern LLMs.
  • Quality vs. cost tradeoff: Getting high-quality, well-reasoned research often requires expensive API calls or manual effort.

The Opportunity

Modern web-based LLMs (like Qwen) offer:

  • Powerful built-in search: Native web search with real-time information retrieval
  • RAG capabilities: Automatic retrieval-augmented generation for grounded responses
  • Deep research features: Multi-source synthesis and citation
  • Commercial-grade quality: As products backed by major companies, they're continuously improved

The Solution

This skill leverages OpenClaw's Chrome Relay to:

  • Access web LLMs for free: Use the web interface without API costs
  • Automate research workflows: Let agents conduct multi-turn investigations
  • Get higher quality results: Benefit from commercial LLM capabilities at lower cost
  • Enable comparison: Cross-reference with other AI responses

Bottom line: Use OpenClaw to orchestrate powerful web-based LLMs at a fraction of the API cost, with better research quality than raw search APIs.

Features

  • Send messages to web-based LLMs and receive responses
  • Multiple output formats: plain text, Markdown (preserves code blocks, tables, lists), or raw HTML
  • Send-ready detection: waits until the page is ready for the next question
  • Smart extraction: uses anchor-based extraction to get only the latest response
  • Research mode: agent-orchestrated multi-turn conversations

Supported Models

| Model | Status | Notes |
| --- | --- | --- |
| Qwen AI (chat.qwen.ai) | ✅ Supported | Full support for search, RAG, and multi-turn conversations |
| More models | 🚧 Coming soon | Open an issue to request support for other web-based LLMs |

Prerequisites

  • Chrome Relay extension attached to a Qwen Chat tab (chat.qwen.ai/*)
  • Gateway running on 127.0.0.1:18789 (default)
  • Node.js with ws package installed

Installation

Install the ws package using your preferred package manager:

# npm
npm install ws

# yarn
yarn add ws

# pnpm
pnpm add ws

Quick Start

Check Connection Status

node scripts/qwen_chat.js status

Send a Message

# Plain text (default)
node scripts/qwen_chat.js send "What is machine learning?"

# With custom wait time (for long responses)
node scripts/qwen_chat.js send "Explain RAG in detail" --wait 120

# Get response in Markdown format (preserves formatting)
node scripts/qwen_chat.js send "Write a Python function" --format markdown

# Get raw HTML
node scripts/qwen_chat.js send "Create a table" --format html

Read Current Page Content

node scripts/qwen_chat.js read

Command Reference

status

Check if Chrome Relay is connected and Qwen tab is active.

node scripts/qwen_chat.js status

Output:

Extension: ✅ Connected
Qwen tab: ✅ Qwen Chat
  URL: https://chat.qwen.ai/c/...

send

Send a message to Qwen and receive the response.

node scripts/qwen_chat.js send "your message" [options]

Options:

| Option | Description | Default |
| --- | --- | --- |
| --wait N | Maximum wait time in seconds | 45 |
| --format text\|markdown\|html | Output format | text |
| --debug-extract | Show extraction debugging info | off |

Output Formats:

  • text — Plain text output
  • markdown — Preserves code blocks, tables, lists, headers, and formatting
  • html — Raw HTML from the page

read

Read the current page content (useful for debugging or reviewing conversation history).

node scripts/qwen_chat.js read

research

Run multi-round research on a topic (fixed stages, consider using agent-orchestrated mode instead).

node scripts/qwen_chat.js research "AI safety" --rounds 10 --wait 120

How It Works

Response Extraction

The script uses a robust extraction strategy:

  1. Send-ready detection: Waits until the page is ready for the next question (input field editable, send button enabled)
  2. Anchor-based extraction: Uses the user's message as an anchor to find and extract only the latest response
  3. Content stabilization: Waits for content to stabilize before extraction
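Step 3 above (content stabilization) can be illustrated with a generic polling helper. This is a sketch, not the implementation in scripts/qwen_chat.js: `getContent` stands in for whatever CDP call reads the response area, and the idea is simply to wait until two consecutive reads match before extracting.

```javascript
// Illustrative sketch of content stabilization: poll a content getter
// until two consecutive reads are identical, or a deadline passes.
// `getContent` is a placeholder for the real page-reading call.
async function waitForStableContent(getContent, { intervalMs = 500, timeoutMs = 45000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  let previous = await getContent();
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    const current = await getContent();
    if (current === previous && current.length > 0) return current; // stable: two reads match
    previous = current;
  }
  throw new Error("content did not stabilize before timeout");
}
```

Combined with send-ready detection, this avoids extracting a half-streamed response.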

Why Not Use Thinking Indicators?

  • Thinking indicators can get stuck visually while the response is complete
  • Send-ready detection is more reliable: if you can send the next question, the previous response is done
  • Works regardless of UI changes to thinking indicators

Why Not Use Delta by Body Length?

  • Qwen page may reflow and change bodyLen unpredictably
  • Anchor-based extraction is more robust to page reflows
  • Only extracts the actual response content, not noise

Research Mode (Agent-Orchestrated)

For multi-turn research, use agent-orchestrated mode instead of the fixed research command. This allows the agent to dynamically control the conversation based on Qwen's responses.

Workflow

1. Determine research topic
2. Ask first question (open-ended, let Qwen expand)
3. Read Qwen's response
4. Analyze the response:
   - Which point deserves deeper exploration?
   - Which claim needs cross-validation?
   - Any contradictions or gaps?
5. Ask follow-up question based on analysis
6. Repeat steps 3-5 for 5-10 rounds
7. Final round: Ask Qwen to summarize, agent also compiles its own summary
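The workflow above can be sketched as a loop. This is illustrative only: `send` stands in for a call like `node scripts/qwen_chat.js send "<question>" --wait 120` (e.g. via child_process), and `chooseFollowUp` is a placeholder for the agent's own analysis of the response; both are injected so the control flow is clear.

```javascript
// Sketch of the agent-orchestrated research loop. `send` and
// `chooseFollowUp` are hypothetical callbacks: `send` would wrap the
// qwen_chat.js CLI, `chooseFollowUp` is the agent's analysis step.
async function researchLoop(topic, send, chooseFollowUp, rounds = 7) {
  const transcript = [];
  let question = `Give me a broad overview of ${topic}.`; // step 2: open-ended opener
  for (let round = 1; round <= rounds; round++) {
    const response = await send(question);                // step 3: read the response
    transcript.push({ round, question, response });
    question = round === rounds - 1
      ? "Please summarize the key findings of this conversation." // step 7: final round
      : chooseFollowUp(response);                         // steps 4-5: analyze, follow up
  }
  return transcript;
}
```

The agent keeps the transcript so it can compile its own summary alongside Qwen's in the final round.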

Example Per-Round Operation

# Agent sends question and waits for response
node scripts/qwen_chat.js send "What are the key challenges in RLHF?" --wait 120

# Agent can read full page if needed
node scripts/qwen_chat.js read

Follow-up Strategy

Good follow-ups come from Qwen's response:

| Response pattern | Follow-up direction |
| --- | --- |
| Mentions data/statistics | "What's the original source? Sample size?" |
| Gives opinion without evidence | "Any research supporting this claim?" |
| Mentions controversy | "What are the counter-arguments?" |
| Uses "possibly/maybe" | "Under what conditions does this hold?" |
| Lists multiple factors | "Which one is most critical? Why?" |
| Mentions case study | "Has this case been challenged by other researchers?" |
| Goes off-topic | "Back to the core question, specifically..." |
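As a toy illustration, the table above could be approximated with a keyword heuristic. The patterns and phrasings here are illustrative only; a real agent would reason over the full response rather than keyword-match.

```javascript
// Toy keyword heuristic mirroring the follow-up table above.
// Patterns are illustrative; a real agent analyzes the whole response.
function suggestFollowUp(response) {
  const r = response.toLowerCase();
  if (/\d+%|\bstatistic|\bdata\b/.test(r)) return "What's the original source? Sample size?";
  if (/controvers/.test(r)) return "What are the counter-arguments?";
  if (/\bpossibly\b|\bmaybe\b/.test(r)) return "Under what conditions does this hold?";
  if (/case study/.test(r)) return "Has this case been challenged by other researchers?";
  return "Which point here is most critical? Why?";
}
```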

Best Practices

  • Don't pre-plan all questions: Generate questions dynamically based on responses
  • Allow tangents: If Qwen mentions something unexpected but interesting, pursue it
  • Challenge occasionally: Don't always agree with Qwen; present counter-arguments
  • Maintain continuity: Briefly reference previous points when asking follow-ups
  • Control rounds: 5-10 rounds is optimal; too few lacks depth, too many has diminishing returns
  • Handle timeouts honestly: If the script times out, report it to the user rather than making up content
  • Adjust wait time: Use --wait 180 for search-heavy questions, --wait 60 for simple ones

Debugging

Enable Extraction Debugging

node scripts/qwen_chat.js send "test message" --wait 90 --debug-extract

This shows:

  • Baseline and latest body length
  • Number of leaf elements detected
  • Extraction path used
  • Raw and final content lengths

Common Issues

| Issue | Solution |
| --- | --- |
| Extension disconnected | Check Chrome extension badge shows ON |
| No Qwen tab found | Open chat.qwen.ai and attach extension |
| Response not captured | Increase --wait time, use --debug-extract to diagnose |
| Markdown formatting broken | Code blocks use Monaco Editor; extraction handles this automatically |

Configuration

Auth Token

The script auto-derives the relay token from the OpenClaw config. Config priority:

  1. E:\.openclaw\.openclaw\openclaw.json (Windows)
  2. ~/.openclaw/.openclaw/openclaw.json (Unix)
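The exact schema of openclaw.json is not documented here; based only on the key path named in the security scan (gateway.auth.token), a minimal file might look like the following hypothetical sketch:

```json
{
  "gateway": {
    "auth": {
      "token": "<your-gateway-token>"
    }
  }
}
```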

Gateway Ports

  • Gateway: 18789
  • Relay: 18792 (Gateway + 3)

Limitations

  • Requires a logged-in Qwen session in the browser
  • One tab at a time (controls the first attached Qwen tab)
  • No streaming — waits for full response before returning
  • research command uses fixed stages — use agent-orchestrated mode instead

File Structure

qwen-chat/
├── SKILL.md                    # This file
├── scripts/
│   ├── qwen_chat.js           # Main script
│   ├── _diagnose_selectors.js # Diagnostic tools
│   └── _analyze_format.js     # Format analysis
└── references/
    └── chrome-relay.md        # Chrome Relay setup guide

License

See LICENSE file for details.

Files

5 total
