Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SealVera

v1.0.4

Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SO...

Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The name and description (tamper-evident audit trail) match what the code and docs actually do: intercept LLM SDKs, log decisions to app.sealvera.com, and provide helpers and a watcher. However, the registry metadata declares no required env vars or credentials, while the skill and reference docs clearly require SEALVERA_API_KEY and other environment config — this inconsistency is surprising and reduces trust.
Instruction Scope
Runtime instructions and scripts direct the agent to run setup.js, which writes files into the workspace (sealvera-log.js, .sealvera.json), patches AGENTS.md and optionally SOUL.md with mandatory logging rules, and suggests setting NODE_OPTIONS to auto-require an autoload script. The subagent-watcher reads ~/.openclaw/.../sessions.json and session transcripts, then synthesizes and POSTs logs for missing sessions. These actions read and transmit potentially sensitive data (transcripts, inputs/outputs, possibly PHI) to an external service and impose mandatory logging in agent prompts — the scope extends well beyond benign SDK wrapping.
Install Mechanism
There is no network install spec (instruction-only + included scripts), so nothing is downloaded during install. However the setup script will copy/generate files into the user's workspace and suggests runtime autoloading (NODE_OPTIONS). The code will attempt to require an external 'sealvera' SDK if present, but also generates a local sealvera-log.js helper that performs network calls.
Credentials
The skill metadata declared no required env vars, but the code and docs expect SEALVERA_API_KEY (and optionally SEALVERA_ENDPOINT, SEALVERA_AGENT). The subagent-watcher includes a hard-coded default SV_KEY value in its source (a baked-in API key), which is unexpected and dangerous. The skill also reads OPENCLAW_WORKSPACE and the user's home sessions/transcripts — access to these paths is not declared in metadata and may expose sensitive data.
Persistence & Privilege
Setup will write config and helper files into the workspace and patch AGENTS.md and SOUL.md to enforce a mandatory logging footer. The autoload script monkeypatches module loading to intercept OpenAI/Anthropic clients at require-time and mutates require cache entries — a global runtime modification. The subagent-watcher writes state and can be run as a cron-style backstop. These changes are persistent and affect agent behavior beyond a local helper.
Scan Findings in Context
[hardcoded-credential] unexpected: scripts/subagent-watcher.js contains a hard-coded default SV_KEY value (SV_KEY = 'sv_5e4735b2...'). A logging/tracing SDK might request an API key, but embedding a default secret in code is unexpected and unsafe; it could leak or be abused.
What to consider before installing
- Metadata mismatch: the registry lists no required env vars, but the skill requires SEALVERA_API_KEY/SEALVERA_ENDPOINT/SEALVERA_AGENT. Ask the publisher to correct the metadata.
- Data exfiltration risk: this skill will send agent inputs/outputs/reasoning to https://app.sealvera.com (or whichever endpoint you configure). If your agents handle sensitive data (PHI, PII, financial data), sending transcripts or factor-level values to an external service may violate policy or law (e.g., HIPAA, GDPR) unless you have an appropriate contract/BAA and configuration.
- Filesystem access: setup and the watcher will read/write workspace files (AGENTS.md, SOUL.md, .sealvera.json, sealvera-log.js), and the watcher reads sessions/transcripts from the user's home directory. Review those operations and ensure you are comfortable with the changes and with transcripts being processed and potentially transmitted.
- Global runtime changes: autoload and intercept scripts monkeypatch Module._resolveFilename/require cache to intercept OpenAI/Anthropic clients. That can change agent behavior across your environment and may be hard to audit or undo. Consider testing in an isolated sandbox first.
- Hard-coded API key: the watcher includes a baked-in default API key; this is unexpected and should be removed. Ask the author why it exists and require that it be deleted or explained prior to use.

Recommendations:

- Do not connect production systems (especially those handling PHI/financial PII) until you have verified the vendor, reviewed the server endpoint and DPA/BAA, and tested in a safe environment.
- Request the publisher/source-code origin and a verifiable homepage or vendor contact (none declared here). Prefer published/official SDKs from known vendors.
- If you want to trial: use an isolated workspace and a throwaway SealVera API key with minimal privileges; run setup interactively and inspect every file it writes; grep for hard-coded secrets; run the code in a sandbox and monitor network calls.
- Ask the author to fix the metadata (declare required env vars and config paths), remove hard-coded credentials, and provide an audit or third-party review of the interception/monkeypatch behavior.

If you want, I can produce a short checklist of exact files and lines to inspect, or a safe test plan to evaluate this skill in isolation.

Like a lobster shell, security has layers — review code before you run it.

ai-governance · audit · compliance · eu-ai-act · fintech · gdpr · hipaa · latest · llm · observability · responsible-ai
436 downloads
0 stars
5 versions
Updated 7h ago
v1.0.4
MIT-0

SealVera Skill for OpenClaw

Cryptographically-sealed, tamper-evident audit trails for every AI decision your agents make.

SealVera is AI Decision Audit Infrastructure. This skill wires any OpenClaw agent into SealVera so every decision it makes is logged, cryptographically signed, chained, and monitored — automatically. Built for teams shipping AI in finance, healthcare, legal, and insurance.

EU AI Act · SOC 2 · HIPAA · GDPR · FINRA · ISO 42001


Why SealVera?

  • Tamper-evident — every decision is cryptographically hashed (RSA-2048) and chained; any alteration is detectable
  • Works in 2 lines — init() + one patch call and every LLM decision is logged
  • Full explainability — captures inputs, outputs, reasoning steps, confidence scores, model used
  • Real-time dashboard — search, filter, export at app.sealvera.com
  • Drift detection — alerts when agent behaviour deviates from its baseline
  • Any LLM — OpenAI, Anthropic, Gemini, Ollama, LangChain, CrewAI, AutoGen
  • Zero friction — no new infrastructure, installs in seconds

First time? Run setup

Check if SealVera is configured:

const fs = require('fs');
const path = require('path');
const configured = fs.existsSync(path.join(process.env.OPENCLAW_WORKSPACE || process.cwd(), '.sealvera.json'));

If .sealvera.json does NOT exist, run setup:

node <skill_dir>/scripts/setup.js

Setup will:

  1. Ask for your API key (get one free at app.sealvera.com)
  2. Verify connectivity and show org/plan info
  3. Copy sealvera-log.js into the workspace
  4. Patch AGENTS.md with the mandatory sub-agent audit rule
  5. Write .sealvera.json config
  6. Fire a test log to confirm end-to-end

One-time setup. After this, every sub-agent you spawn is audited automatically.

If the user says "install SealVera", "set up SealVera", or "connect SealVera" — run setup, don't just describe it.


How logging works

After setup, two logging paths are active:

1. Sub-agent logging (automatic via AGENTS.md)

Every sessions_spawn task prompt gets a MANDATORY footer — the sub-agent logs itself on completion.

2. Main-session logging (call after every significant task)

The agent running the main session must log its own work. This is what shows your day-to-day activity in the dashboard.

const log = require('./sealvera-log');
await log({
  action: 'fix_login_bug',          // short description of what you did
  decision: 'COMPLETED',            // COMPLETED | FAILED | ESCALATED
  input:  { task: '...' },
  output: { result: '...' },
  reasoning: [{ factor: 'outcome', value: 'ok', signal: 'safe', explanation: '...' }]
});

Log after: code written, bug fixed, feature deployed, investigation done, judgment call made. Don't log: routine chat, heartbeat pings, trivial lookups.

Setup patches SOUL.md with this rule so the agent follows it automatically.

Each log entry includes: input/output, reasoning steps, RSA-2048 signature, chain link, timestamp, model, and agent name.


Decision vocabulary

Task / agent execution:

Decision    Meaning
COMPLETED   Task finished successfully
RESPONDED   Answered a question / returned output
FAILED      Task failed
ERROR       Unexpected exception
ESCALATED   Handed off to human or higher-level agent
SKIPPED     Task intentionally not run
PARTIAL     Completed with caveats

Approval / gating:

Decision    Meaning
APPROVED    Request approved
REJECTED    Request blocked
FLAGGED     Needs human review

Tests:

Decision    Meaning
PASSED      Test passed
FAILED      Test failed
SKIPPED     Test skipped
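A small validator for the vocabulary above can catch typos before entries are sent. This is a hypothetical helper, not part of the skill.

```javascript
// Allowed decision values per category, taken from the tables above.
const DECISIONS = {
  task: ['COMPLETED', 'RESPONDED', 'FAILED', 'ERROR', 'ESCALATED', 'SKIPPED', 'PARTIAL'],
  gate: ['APPROVED', 'REJECTED', 'FLAGGED'],
  test: ['PASSED', 'FAILED', 'SKIPPED'],
};

// Returns true only if the decision belongs to the given category.
function isValidDecision(category, decision) {
  return (DECISIONS[category] || []).includes(decision);
}

module.exports = { DECISIONS, isValidDecision };
```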

Sub-agent audit rule

Append this footer to every sessions_spawn task:

---
MANDATORY: Before you finish, log your result to SealVera:
  const log = require('<workspace>/sealvera-log');
  await log({
    action: '<what you did>',
    decision: 'COMPLETED',
    input: { task: '<summary>' },
    output: { result: '<summary>' },
    reasoning: [{ factor: 'outcome', value: '<result>', signal: 'safe', explanation: '<why>' }]
  });
Vocabulary: task=COMPLETED/FAILED | test=PASSED/FAILED | gate=APPROVED/REJECTED | handoff=ESCALATED

Check status

node <skill_dir>/scripts/status.js

Get your API key

Sign up at app.sealvera.com — free tier includes 10,000 decisions/month.


Reference

See references/api.md for all SDK methods and log field schema. See references/compliance.md for regulation mapping (EU AI Act, FINRA, HIPAA, GDPR, SOC 2).
