vryfik skill

v1.0.7

Use when searching the web, documentation, or current information where token efficiency matters. Triggers on queries about API docs, current events, pricing...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install briefness/vryfik-skill.

Prompt Preview: Install & Setup
Install the skill "vryfik skill" (briefness/vryfik-skill) from ClawHub.
Skill page: https://clawhub.ai/briefness/vryfik-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: node
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install vryfik-skill

ClawHub CLI


npx clawhub@latest install vryfik-skill
Security Scan

Capability signals: Crypto

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (token‑efficient web/document search) matches the artifacts: intent parsing, query rewriting, budget control, cache, credibility probes, and assembly. Required runtime (node) is appropriate and no unrelated credentials or binaries are requested.
Instruction Scope
Instructions limit network activity to host-provided web_search for content retrieval and the skill's parallel-probe script (HTTP HEAD) for availability checks. The skill reads/writes a local cache (~/.antigravity/search-cache/) and uses local data files (intent patterns, domain reputation). Cache may contain user query text (SKILL.md documents this). Review host agent behavior: host provides full GET/search results to the skill for post-processing, so the security posture depends on trusting the host tool to sanitize user-supplied URLs.
Install Mechanism
No install script; code files are included and executed via the shell tool using node. No remote downloads, package installs, or URL fetch/install steps are present.
Credentials
The skill requests no environment variables or credentials. Local file access is limited to a single cache directory and bundled data files; network access is restricted to outbound HEAD probes (parallel-probe) and relies on the host for GET/search calls—this is proportionate to the stated functionality.
Persistence & Privilege
The manifest sets always to false and disable-model-invocation to true (so the skill cannot invoke the model autonomously). The only persistent artifact is the local cache (~/.antigravity/search-cache/), which the SKILL.md documents and which the scripts write with restricted file modes (0o600/0o700). No configuration or credential changes to other skills are made.
Assessment
This skill appears internally coherent and implements a local pipeline (intent parsing, rewriting, caching, credibility probes, assembly) that delegates actual page retrieval to the host agent. Before installing:

  • Ensure you trust the host agent's web_search tool (it performs the live GETs and supplies fragments to the skill). The skill assumes URLs come from a trusted caller.
  • Accept that queries (cache keys/snippets) will be stored under ~/.antigravity/search-cache/ (SKILL.md states files are created with mode 0o600 and the directory with 0o700). If that concerns you, review or relocate the cache directory before use.
  • Note that the skill issues outbound HEAD requests (no bodies) for availability checks; this network egress must be acceptable.
  • No secrets/credentials are requested by this skill.

If you need higher assurance, review the bundled scripts (they are small, readable JS) and verify that the host's web_search implementation sanitizes user-supplied URLs and does not forward sensitive local/internal URLs to the skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: node
latest: vk976vdzvhzph9syrjqr5w2xc3185qcxm
35 downloads
0 stars
8 versions
Updated 7h ago
v1.0.7
MIT-0

Searching Precisely

Overview

Web search pipeline that minimizes token consumption via local intent classification, semantic caching, credibility validation, and streaming fragment assembly.

Architecture Split

  • The **Host Agent** performs the actual web GET / Search API calls and returns the raw fragments
  • This skill's scripts handle pre-processing (intent classification, query rewriting, budget control, cache lookup) and post-processing (credibility probing, stream assembly, cache writes)

Core Rule: Always check the semantic cache first. Only invoke web search on a cache miss.

Pipeline Architecture

Query → [Intent Parser] → [Query Rewriter] → [Budget Controller]
                                                     ↓
                                           [Semantic Cache] ──hit──→ Return
                                                     ↓ miss
                                           [Web Search]  (≤1500 tok)
                                                     ↓
                                           [Parallel Credibility Probe]
                                                     ↓
                                           [Stream Assembler] → [Write Cache]

Instructions

When this skill activates, execute the pipeline below in order. Exit early at any step that produces a final answer — do not run later steps unnecessarily.

Note: Replace <placeholders> with actual runtime values. All arguments must be valid JSON strings.


Step 1 — Classify Intent

Run via shell tool:

node scripts/intent-parser.js '<original_query>'

Extract intent and confidence from the JSON output.
If confidence < 0.5, default to intent = "web_search" and continue.
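The fallback rule above can be sketched as follows. `resolveIntent` is an illustrative helper for the agent-side decision, not part of intent-parser.js:

```javascript
// Hypothetical helper showing the Step 1 fallback rule: any parse with
// confidence below 0.5 (or a malformed confidence) collapses to the
// generic "web_search" intent so the pipeline can continue.
function resolveIntent(parserOutput) {
  const { intent, confidence } = parserOutput;
  if (typeof confidence !== "number" || confidence < 0.5) {
    return { intent: "web_search", confidence: confidence ?? 0 };
  }
  return { intent, confidence };
}
```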


Step 2 — Initialize Budget

node scripts/budget-controller.js init

Keep the returned state.remaining value. Abort any later step that would exceed it.
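A minimal sketch of the budget rule, using the totals from the "Token Budget Defaults" table below. The class name and API are illustrative, not the actual budget-controller.js interface:

```javascript
// Track remaining tokens and refuse any step whose estimated cost would
// exceed the balance — the "abort any later step" rule from Step 2.
class TokenBudget {
  constructor(total = 2370) {
    this.remaining = total; // default mirrors the documented total budget
  }
  // Deducts and returns true if the step fits; false means abort the step.
  trySpend(cost) {
    if (cost > this.remaining) return false;
    this.remaining -= cost;
    return true;
  }
}
```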


Step 3 — Check Semantic Cache

node scripts/semantic-cache.js check '{"query":"<original_query>","intent":"<intent>"}'
  • hit: true and similarity ≥ 0.85 → return the cached result to the user. Pipeline complete; skip all remaining steps.
  • hit: false → continue to Step 4.
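The hit/miss rule can be sketched as below. The real semantic-cache.js similarity metric is not documented here, so a simple word-overlap (Jaccard) score stands in for it; only the ≥ 0.85 threshold logic is taken from this document:

```javascript
// Stand-in similarity: Jaccard overlap of lowercase word sets.
function jaccard(a, b) {
  const A = new Set(a.toLowerCase().split(/\s+/));
  const B = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...A].filter((w) => B.has(w)).length;
  const union = new Set([...A, ...B]).size;
  return union === 0 ? 0 : inter / union;
}

// Step 3 decision: a cached entry counts as a hit only at similarity ≥ 0.85.
function cacheDecision(query, entry, threshold = 0.85) {
  const similarity = jaccard(query, entry.query);
  return similarity >= threshold
    ? { hit: true, similarity, result: entry.result }
    : { hit: false, similarity };
}
```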

Step 4 — Rewrite Query

node scripts/query-rewriter.js '{"intent":"<intent>","query":"<original_query>"}'

Use the returned subQueries array (max 3) for web search.


Step 5 — Web Search (host agent)

Using your native web_search tool, search each sub-query from Step 4.
Collect result URLs and content fragments.
Always perform live search on a cache miss — never fabricate results.


Step 6 — Validate Source Credibility

Extract up to 5 unique source URLs from Step 5. Run:

node scripts/parallel-probe.js '{"sources":[{"url":"<url1>"},{"url":"<url2>"}]}'
  • verdict: "trust" → use directly
  • verdict: "verify" → use with caution; flag in the answer
  • available: false → discard that source
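A sketch of an availability probe in the spirit of parallel-probe.js: an HTTP HEAD request (no response body) decides whether a source is reachable. The real script also consults its bundled reputation DB to choose between trust and verify; here everything reachable is conservatively marked "verify", and `fetchFn` is injectable so the logic can be exercised without network egress:

```javascript
// Probe one source with HEAD only; unreachable or error statuses are
// discarded (available: false), per the Step 6 rules.
async function probeSource(url, fetchFn = fetch) {
  try {
    const res = await fetchFn(url, { method: "HEAD", redirect: "follow" });
    if (!res.ok) return { url, available: false };
    return { url, available: true, verdict: "verify" };
  } catch {
    // Network failure (DNS, refused connection, timeout) → discard.
    return { url, available: false };
  }
}
```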

Step 7 — Score Credibility

node scripts/credibility-arbiter.js '{"results":[<probe_results_array>]}'

If all sources score < 0.4, discard everything and tell the user no reliable source was found. Do not assemble.
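The Step 7 decision rule alone can be sketched as follows; the scoring itself lives in credibility-arbiter.js and is not shown, and `arbitrate` is an illustrative name:

```javascript
// Keep sources scoring at or above the floor (0.4 per the instructions);
// if none qualify, report that no reliable source was found.
function arbitrate(results, floor = 0.4) {
  const trusted = results.filter((r) => r.score >= floor);
  return trusted.length > 0
    ? { ok: true, trusted }
    : { ok: false, message: "No reliable source was found." };
}
```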


Step 8 — Assemble Answer

node scripts/stream-assembler.js '{"fragments":[<trusted_fragments>],"query":"<original_query>"}'

Return the answer field to the user.
If coherenceScore < 0.5, add a note that the result may be incomplete.


Step 9 — Write Cache

node scripts/semantic-cache.js write '{"query":"<original_query>","intent":"<intent>","result":"<answer>"}'

This caches the result for future similar queries. Always run this step after a successful assembly.

Search Decision Rules

| Situation | Action |
| --- | --- |
| Query answered by cache (similarity ≥ 0.85) | Return cache, skip web search |
| Query about API docs / official specs | Web search with known domain hint |
| Query about current events / pricing / news | Web search, no cache assumed |
| Query with ambiguous intent | Rewrite into 2-3 sub-queries, search each |

Token Budget Defaults

| Component | Budget |
| --- | --- |
| Intent parse | 50 |
| Cache check | 20 |
| Web search | 1500 |
| Credibility | 300 |
| Assembly | 500 |
| **Total** | **2370** |

Common Mistakes

  • ❌ Calling web search before checking the semantic cache
  • ❌ Skipping credibility check for unknown domains
  • ❌ Assembling fragments from a single unverified source
  • ❌ Fetching full page content when a HEAD probe + summary suffices

ClawHub Security Notes

All permission declarations, dependencies, I/O scopes, and security properties are formally declared in the YAML frontmatter above (permissions, dependencies, io, security_notes). Static analyzers should use those fields as the authoritative source.

Summary per script:

| Script | I/O | Network | Shell |
| --- | --- | --- | --- |
| intent-parser.js | none | none | none |
| query-rewriter.js | none | none | none |
| stream-assembler.js | none | none | none |
| budget-controller.js | none | none | none |
| credibility-arbiter.js | none | none | none |
| semantic-cache.js | ~/.antigravity/search-cache/ R/W | none | none |
| parallel-probe.js | reputation DB R (bundled) | HEAD only, no upload | none |
