WeryAI Chat

v0.1.0

Chat, ask, compare, and inspect WeryAI chat models through the official OpenAI-compatible chat completions API. Use when you need general assistant chat, mul...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for weryai-developer/weryai-chat.

Prompt preview: Install & Setup
Install the skill "WeryAI Chat" (weryai-developer/weryai-chat) from ClawHub.
Skill page: https://clawhub.ai/weryai-developer/weryai-chat
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: WERYAI_API_KEY
Required binaries: node
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install weryai-chat

ClawHub CLI


npx clawhub@latest install weryai-chat
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description, required binary (node), and required env (WERYAI_API_KEY) match the included scripts (models.js, write.js) that call the WeryAI chat and models endpoints. The package implements chat completions and model lookup as advertised; there are no unrelated credentials or binaries requested.
Instruction Scope
SKILL.md instructs the agent to run the included node scripts (models.js, write.js) and to prefer dry-run when validating payloads. The scripts only read environment variables and call the WeryAI endpoints; they do not attempt to read arbitrary user files or unrelated system state. A pre-scan detected a 'system-prompt-override' pattern, but in context this appears to be the normal inclusion of system messages and presets (systemPrompt strings) that are passed to the remote model — not instructions to override the agent host's system prompt or to exfiltrate data.
Install Mechanism
No install spec or remote download is used. This is an instruction/package bundle containing JS scripts that run with node; no external arbitrary code is fetched at install time.
Credentials
The only required credential is WERYAI_API_KEY (declared as primaryEnv), which is appropriate for a client that calls the WeryAI API. The code also reads optional environment variables (WERYAI_BASE_URL, WERYAI_REQUEST_TIMEOUT_MS, WERYAI_TEXT_MODEL) that are not listed in requires.env — this is typical for configuration, but you should verify WERYAI_BASE_URL points to the official API (default is https://api.weryai.com) before setting it.
Persistence & Privilege
The skill does not request always:true, does not modify other skills or system-wide configs, and has normal agent-invocation defaults. It does perform network calls to the WeryAI API (network_required: true), which is expected.
Scan Findings in Context
[system-prompt-override] expected: The pattern likely matches the many 'systemPrompt' strings and the skill-level instructions in SKILL.md. Those are used as model system messages passed to the remote chat API and are expected for a chat skill; they do not appear to be an attempt to overwrite the host agent's system prompt or to exfiltrate secrets.
Assessment
This package is coherent with its stated purpose: it needs only a WERYAI_API_KEY and node to run. Before installing, verify you trust the WeryAI API recipient: (1) ensure WERYAI_BASE_URL (if set) points to the official API (default: https://api.weryai.com), (2) prefer running with --dry-run to inspect request payloads without spending credits, and (3) do not supply the API key to untrusted skill sources. The SKILL.md and presets include system-message templates that will shape model responses—this is expected behavior for a chat client, not a hidden instruction to your local agent. If you need stronger assurance, review the scripts/models.js and scripts/write.js invocations locally or run them in an isolated environment first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

💬 Clawdis
Bins: node
Env: WERYAI_API_KEY
Primary env: WERYAI_API_KEY
Latest: vk97ft4f9sdczqq29bqmh7h95ts83dgkt
153 downloads
0 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

WeryAI Chat

Use the official WeryAI chat-completions API for general assistant chat and model lookup. This skill is intentionally broad but not specialized: it is for general conversation and prompt-response tasks, not blog writing, social copy, or email drafting.

Example Prompts

  • Ask a WeryAI chat model to explain retrieval augmented generation in plain English.
  • Send this messages array to WeryAI chat completions and return the assistant response.
  • List the currently available WeryAI chat models and their pricing.
  • Use GPT_5_4 for this one chat call instead of the default model.

Quick Summary

  • Main jobs: general assistant chat, multi-turn chat, chat model lookup, prompt-response
  • Default model: GEMINI_3_1_PRO
  • Main optional controls: model, messages, maxTokens, temperature, topP
  • Main trust signals: dry-run support, model lookup, OpenAI-compatible messages, explicit non-specialized scope
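As an illustration, a payload combining these optional controls might look like the following sketch (the field names come from the summary above; the values are made up for the example):

```javascript
// Sketch of a write.js --json payload using the optional controls listed above.
// Values are illustrative, not recommended defaults.
const payload = {
  model: "GEMINI_3_1_PRO", // the default model per the summary
  messages: [
    { role: "user", content: "What is artificial intelligence?" }
  ],
  maxTokens: 512,
  temperature: 0.7,
  topP: 0.9
};

console.log(JSON.stringify(payload, null, 2));
```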

Prerequisites

  • WERYAI_API_KEY must be set before calling the API.
  • Node.js >=18 is required.
  • Real runs use the WeryAI chat completion API and may consume credits.
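A quick way to confirm these prerequisites before a real run is a small check like this sketch (a hypothetical helper, not shipped with the skill):

```javascript
// Hypothetical pre-flight check for the prerequisites listed above.
// Returns a list of problems; an empty list means you are ready to run.
function checkPrereqs(env, nodeVersion) {
  const problems = [];
  const major = Number(nodeVersion.split(".")[0]);
  if (major < 18) {
    problems.push(`Node.js >=18 required, found ${nodeVersion}`);
  }
  if (!env.WERYAI_API_KEY) {
    problems.push("WERYAI_API_KEY is not set");
  }
  return problems;
}

// Check the current environment:
console.log(checkPrereqs(process.env, process.versions.node));
```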

When to use this skill

Use this skill when the user wants:

  • a normal assistant-style answer
  • a direct chat-completions call
  • a multi-turn conversation via messages
  • model lookup or model selection before a chat run

Do not use this skill when the user clearly wants:

  • blog writing
  • email drafting
  • ad copy
  • translation or summarization as the main task

Those belong to the existing specialized text/* skills.

OpenAI-compatible message shape

This skill accepts standard chat-completions messages:

[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What is artificial intelligence?" }
]

If you provide messages, they are passed through directly. If you provide only prompt, the runtime builds a simple messages array automatically.
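The fallback described above can be sketched roughly like this (assumed behavior, not the skill's actual source):

```javascript
// Sketch of the prompt-to-messages fallback: explicit messages pass
// through untouched; a bare prompt becomes a single user message.
function toMessages(input) {
  if (Array.isArray(input.messages) && input.messages.length > 0) {
    return input.messages;
  }
  return [{ role: "user", content: input.prompt }];
}
```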

Commands

# List available chat models
node {baseDir}/scripts/models.js

# Simple prompt-response chat
node {baseDir}/scripts/write.js --json '{
  "prompt":"Explain retrieval augmented generation in plain English",
  "temperature":0.7
}'

# Explicit messages array
node {baseDir}/scripts/write.js --json '{
  "model":"GPT_5_4",
  "messages":[
    {"role":"system","content":"You are concise and technical."},
    {"role":"user","content":"Compare RAG and long-context prompting."}
  ]
}'

# Dry-run preview
node {baseDir}/scripts/write.js --json '{
  "prompt":"What is the difference between latency and throughput?"
}' --dry-run
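For context, an OpenAI-compatible chat completions call of the kind write.js wraps typically looks like the sketch below. The /v1/chat/completions path and the choices-based response shape are assumptions based on the OpenAI convention, not verified against write.js; prefer the skill's scripts for real runs.

```javascript
// Minimal sketch of an OpenAI-compatible chat completions call.
// Endpoint path and response shape are assumptions; write.js handles
// this for you and is the supported entry point.
async function chat(messages, model = "GEMINI_3_1_PRO") {
  const base = process.env.WERYAI_BASE_URL || "https://api.weryai.com";
  const res = await fetch(`${base}/v1/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.WERYAI_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ model, messages })
  });
  if (!res.ok) throw new Error(`WeryAI API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```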

Workflow

  1. If the user wants model choice or pricing context first, run models.js.
  2. Use write.js for direct prompt-response or explicit messages chat.
  3. Prefer --dry-run when validating payload shape without spending credits.
  4. Return the assistant response directly when the call succeeds.

Definition of Done

  • models.js returns the available chat models and pricing metadata.
  • write.js returns at least one assistant completion choice with non-empty text, or surfaces a clear API failure.

Re-run Behavior

  • Re-running models.js is read-only and safe.
  • Re-running write.js --dry-run is safe and does not call the API.
  • Re-running write.js creates a fresh chat completion request and may consume additional credits.
