Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Vaikora

v1.0.1

Route OpenClaw LLM calls through Vaikora for real-time AI agent security monitoring. Every action your agent takes gets scored for risk, anomaly-flagged, and...

Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description say it routes LLM calls through a monitoring proxy; the declared environment variables (VAIKORA_API_KEY, VAIKORA_AGENT_ID, LLM_PROVIDER_API_KEY) and the SKILL.md instructions (change base_url, add x-api-key + Authorization header) match that purpose. No unrelated binaries, installs, or config paths are requested.
Instruction Scope
The SKILL.md explicitly instructs the agent to send full prompts, message history, and the upstream provider key through Vaikora. That is coherent for a proxy, but it means highly sensitive material (prompts, responses, and provider secrets) will transit a third party. The instructions do not attempt to read unrelated files or env vars beyond those declared.
Install Mechanism
This is an instruction-only skill with no install spec or code files, so there's nothing written to disk by the skill itself. That lowers install-time risk, but it also means there is no local code to audit.
Credentials
The environment variables requested are proportionate to a proxy gateway: Vaikora needs its own API key and agent id, and it needs the upstream LLM provider key to forward requests. However, providing your upstream provider key to a third party is a significant sensitivity escalation — the SKILL.md acknowledges this but the choice to forward that secret should be evaluated before using in production.
Persistence & Privilege
The skill is not marked always:true and does not request system-wide persistence or modification of other skills. It operates by changing the agent's LLM base_url and headers (as intended for a gateway). Autonomous invocation is allowed (platform default) but is not combined with other elevated privileges here.
Assessment
This skill is coherent with being a monitoring proxy, but it requires you to route your agent traffic — including full prompts/responses and your upstream provider API key — through Vaikora. Before installing or routing production traffic:

1. Use a dedicated, limited-scope upstream key with strict spend and rate limits for testing.
2. Verify Vaikora's data retention, deletion, and encryption policies and service-level agreement.
3. Confirm whether Vaikora stores or logs the upstream provider key long-term (SKILL.md claims it does not beyond the request lifetime, but you should validate).
4. Avoid routing PHI, PCI, or otherwise regulated data until you have legal/compliance approval.
5. If using security connectors, ensure they run on your infrastructure and that you understand what Vaikora will push to them.
6. Rotate keys after testing and monitor for unexpected usage.

Because the skill is instruction-only and there is no code to audit, evaluate the vendor (homepage, documentation, reviews) and try it in an isolated environment first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🛡️ Clawdis
Env: VAIKORA_API_KEY, VAIKORA_AGENT_ID, LLM_PROVIDER_API_KEY
Primary env: VAIKORA_API_KEY
Latest: vk9744b17e6ptsh5avbyw90cht985dphz
19 downloads
0 stars
2 versions
Updated 4h ago
v1.0.1
MIT-0

Vaikora Security

Vaikora is a reverse proxy for AI agents. It sits between OpenClaw and your LLM provider (OpenAI, Anthropic, Gemini, Bedrock, etc.) and inspects every request and response before it reaches the model.

What it does:

  • Scores each agent action for risk on a 0 to 100 scale
  • Detects anomalies with ML trained on adversarial prompt examples
  • Blocks prompt injection, jailbreaks, and PII exfiltration attempts
  • Scans LLM responses for toxicity and data leakage
  • Emits behavioral signals that downstream connectors (SentinelOne, CrowdStrike, AWS Security Hub) can ingest

Your agent code does not change. You change the base URL and add two headers.

What Vaikora receives

Because Vaikora sits in the request path, it sees:

  • The full prompt and message history your agent sends
  • The full response returned by the upstream LLM
  • Your upstream LLM provider key, which Vaikora forwards to the provider on your behalf

If any of that is a problem for your use case, read the data handling section below before you route production traffic through it. A test key and isolated agent account are the safest way to evaluate.

Setup

You need a Vaikora account and API key. Get one at vaikora.com. The free tier covers 20 req/min and 7-day audit retention, no card required.

Set three environment variables:

# Vaikora gateway credential (identifies you to Vaikora)
export VAIKORA_API_KEY=vk_live_...

# Vaikora agent identifier (scopes the audit trail)
export VAIKORA_AGENT_ID=your-agent-id

# Your upstream LLM provider key (Vaikora forwards this to the provider)
export LLM_PROVIDER_API_KEY=sk-...

LLM_PROVIDER_API_KEY is whatever key the provider issues you. OpenAI's sk-..., Anthropic's sk-ant-..., a Google API key, etc. Vaikora does not store it beyond the request lifetime, but it does see it in cleartext.

How routing works

Vaikora exposes a drop-in OpenAI-compatible endpoint at https://api.vaikora.com/v1. The completions path is /v1/chat/completions, same as OpenAI.

In your OpenClaw config, change the base URL and add two headers (the Authorization header carrying your provider key stays as-is):

# Before
llm:
  provider: openai
  base_url: https://api.openai.com/v1
  headers:
    Authorization: "Bearer ${LLM_PROVIDER_API_KEY}"

# After
llm:
  provider: openai
  base_url: https://api.vaikora.com/v1
  headers:
    x-api-key: "${VAIKORA_API_KEY}"
    Authorization: "Bearer ${LLM_PROVIDER_API_KEY}"
    x-vaikora-agent: "${VAIKORA_AGENT_ID}"

Header roles:

  • x-api-key authenticates your request to Vaikora
  • Authorization: Bearer carries your upstream provider key. Vaikora forwards this to OpenAI, Anthropic, or whichever provider your chosen model maps to.
  • x-vaikora-agent tags the action in Vaikora's audit log

This mirrors the dual-header pattern documented in the Data443 LLM Gateway QA handbook. It works with any provider OpenClaw supports: OpenAI, Anthropic, Google, Azure, Bedrock, Mistral, Groq, and Ollama.
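As a sketch of what the dual-header pattern looks like outside the YAML config, here is a minimal Python helper that assembles the three headers for the gateway endpoint. The header names come from the config above; the OpenAI-style payload shape and the model name are illustrative assumptions, and any HTTP client can send the request.

```python
import os

# OpenAI-compatible gateway endpoint from the routing section above
GATEWAY_URL = "https://api.vaikora.com/v1/chat/completions"

def gateway_headers(vaikora_key: str, provider_key: str, agent_id: str) -> dict:
    """Assemble the three headers the gateway expects."""
    return {
        "x-api-key": vaikora_key,                   # authenticates the request to Vaikora
        "Authorization": f"Bearer {provider_key}",  # upstream provider key, forwarded on
        "x-vaikora-agent": agent_id,                # tags the action in the audit log
    }

if __name__ == "__main__":
    headers = gateway_headers(
        os.environ["VAIKORA_API_KEY"],
        os.environ["LLM_PROVIDER_API_KEY"],
        os.environ["VAIKORA_AGENT_ID"],
    )
    # Send an OpenAI-style chat payload with any HTTP client, e.g.:
    # requests.post(GATEWAY_URL, headers=headers,
    #               json={"model": "...", "messages": [{"role": "user", "content": "ping"}]})
```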

Security connectors

Vaikora captures every action. To push high-risk signals into your SIEM or EDR, install a connector from AWS Marketplace. Each is free:

  • SentinelOne: Maps high-risk agent actions to IOCs via the Threat Intelligence API
  • CrowdStrike Falcon: Pushes risky actions as Custom IOCs (Critical = prevent mode, High = detect mode)
  • AWS Security Hub: Sends ASFF findings for high-severity and anomalous actions

Search "Vaikora" in AWS Marketplace. Connectors run on your infrastructure (Lambda or Logic Apps) and poll Vaikora's API on a schedule.
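A connector's polling loop can be sketched as a small pagination routine. The `per_page` parameter comes from the management API query shown later in this page; the page-number parameter and the empty-page stop condition are assumptions for this sketch, and `fetch_page` stands in for the HTTP GET a real connector would make.

```python
from typing import Callable, Iterator

def poll_actions(fetch_page: Callable[[int, int], list[dict]],
                 per_page: int = 5) -> Iterator[dict]:
    """Page through /api/v1/actions-style results until an empty page.

    fetch_page(page, per_page) is a placeholder for an HTTP GET against
    the management API; a scheduled connector would call this on a timer.
    """
    page = 1
    while True:
        batch = fetch_page(page, per_page)
        if not batch:          # assumed stop condition: empty page ends the scan
            return
        yield from batch
        page += 1
```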

What gets monitored

Every action is scored across four dimensions:

  • Risk Score: Composite 0 to 100 based on content, context, and intent
  • Anomaly: ML deviation from this agent's baseline behavior
  • Policy: Allow, block, or audit decision against configured rules
  • Threat: Confirmed malicious activity flag with 0 to 1 confidence score

Actions with a risk score of 75 or above, an anomaly flag, or a confirmed threat are forwarded to your security connector as findings.
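The forwarding rule above reduces to a simple predicate. This is a sketch only: the field names `risk_score`, `anomaly`, and `threat_confirmed` are assumptions about the action payload, not documented schema.

```python
RISK_THRESHOLD = 75  # documented forwarding cutoff

def should_forward(action: dict) -> bool:
    """True if an action meets any forwarding criterion:
    risk score at or above 75, an anomaly flag, or a confirmed threat."""
    return (
        action.get("risk_score", 0) >= RISK_THRESHOLD
        or action.get("anomaly", False)
        or action.get("threat_confirmed", False)
    )
```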

Verifying routing is live

After the config change, run a test prompt through your agent, then query Vaikora's management API to confirm the action was logged:

curl -H "x-api-key: ${VAIKORA_API_KEY}" \
  "https://api.vaikora.com/api/v1/actions?agent_id=${VAIKORA_AGENT_ID}&per_page=5"

Note the two paths:

  • /v1/... is the OpenAI-compatible gateway (where your agent sends traffic)
  • /api/v1/... is Vaikora's management API for reading audit data

You should see the action with a risk score and threat assessment.

Policy presets

Activate a preset in your Vaikora config:

  • standard: Default, balanced security
  • strict: High-sensitivity environments
  • permissive: Dev and test, minimal blocking
  • hipaa: PHI detection, medical data protection
  • pci-dss: Credit card and financial data protection
  • gdpr: EU PII categories, Right to Erasure support

# vaikora.yaml
policy: hipaa

Data handling notes

Because Vaikora is in the request path, treat it like any other vendor with access to your prompts and provider credentials:

  • Use a dedicated upstream provider key with spend limits while evaluating
  • Do not route PHI, PCI, or regulated data until you have reviewed Vaikora's retention and access controls
  • Rotate your provider key after testing
  • Use vk_test_... keys for local development

Vaikora's docs cover retention and access at vaikora.com/docs.

Performance

  • Gateway latency: P50 = 8ms, P95 = 22ms
  • Block decisions are early-exit, around 18ms
  • Published threat detection accuracy: 99.9%, false positive rate under 0.1%
