Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Openclaw Research Viz

v1.5.0

Generate interactive HTML research reports from AI research context. After completing a multi-step research task (web search, API calls, analysis), use this...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for frrrrrrrrank/research-visualizer.

Prompt Preview: Install & Setup
Install the skill "Openclaw Research Viz" (frrrrrrrrank/research-visualizer) from ClawHub.
Skill page: https://clawhub.ai/frrrrrrrrank/research-visualizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: node
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install research-visualizer

ClawHub CLI


npx clawhub@latest install research-visualizer
Security Scan
VirusTotal: Benign
View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (create interactive HTML research reports) align with the included Node scripts and demo HTML. Requiring 'node' is appropriate. However, the skill uploads reports to a2ui.me / Cloudflare R2 while declaring no required environment variables or credentials — that is unexpected unless the upload endpoint accepts anonymous uploads or the code contains embedded credentials. The presence of a worker directory suggests a server-side component is bundled; this is plausible for a report host but should be explained.
Instruction Scope
SKILL.md instructs the agent to extract conversation context into a JSON file, write it to /tmp, and run included Node scripts that encrypt and upload the report. The instructions do not tell the agent to read unrelated system files or extra environment variables, which is good. The concern: the instructions ask the agent to execute shipped JavaScript without spelling out exactly what upload-report.js and the worker code send to the external endpoint (e.g., any metadata, request headers, or unencrypted payloads). The guidance 'key never touches the server' is a strong claim but must be confirmed by inspecting the upload/encryption code.
Install Mechanism
There is no external install URL or archive; the skill is instruction+bundled code (scripts and worker). No network download/install step in the manifest reduces supply-chain risk. Node is required to run the bundled scripts. This is a relatively low install risk, but executing included scripts is still a runtime risk to review.
Credentials
The skill declares no required environment variables or credentials, yet its workflow uploads encrypted reports to a2ui.me / R2. Uploading to R2 typically requires credentials or an intermediate service; the lack of declared credentials suggests one of: (a) the host accepts anonymous uploads, (b) credentials are hard-coded in the included code, or (c) the upload is proxied through a bundled worker. Any of these cases requires inspection. Also, the demo encrypted HTML and test files include large Base64 blobs (expected for encrypted content), but these increase the chance that hidden data or keys are embedded. The skill's claim that the decryption key 'never touches the server' is plausible but unverified without reading upload-report.js, encrypt-report.js, and the worker code.
Persistence & Privilege
The skill's 'always' flag is false and it is user-invocable; it does not request elevated platform privileges in its metadata. There is no indication it modifies other skills or global agent settings. Autonomous invocation is allowed by default; combine this with the concerns above (external upload) when deciding whether to enable autonomous runs.
Scan Findings in Context
[base64-block] expected: Large Base64 blocks appear in demo/test-encrypted.html and other bundled files (these are expected for sample ciphertext or embedded assets). This pattern is expected for an encryption+upload workflow, but also worth inspecting to ensure no secret keys or plaintext are embedded as base64 inside the bundle or uploaded metadata.
What to consider before installing
This skill is plausibly what it claims (generate, AES-encrypt, upload reports), but you should not install it blindly if you care about confidentiality. Before installing or running it with real data:

  1. Inspect upload-report.js and worker/src/index.ts to confirm they do NOT transmit the plaintext or the AES key, and to see exactly which endpoint (a2ui.me URL) and headers are used.
  2. Verify where the R2 storage is hosted and who controls a2ui.me; confirm retention, access controls, and deletion policy.
  3. Look for hard-coded secrets or API keys in the repo (hard-coded credentials are a red flag).
  4. If you cannot review the code, run the skill in an isolated, sandboxed environment with non-sensitive test data and observe network requests (does it only upload ciphertext? does it leak metadata?).
  5. If you plan to enable autonomous invocation, consider the increased blast radius (the agent could upload many reports automatically).

If any of these checks are unclear, or the upload code contains embedded credentials or sends unencrypted content or keys, consider not using the skill.
scripts/upload-report.js:119
Shell command execution detected (child_process).
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: node
latest: vk97a57rpp4pjp1rk5rx8y8e0f983ygyw
130 downloads
0 stars
6 versions
Updated 3w ago
v1.5.0
MIT-0

Research Visualizer

After completing a research task that involved multiple steps (searches, API calls, browsing, analysis), generate an interactive visual report and return it as a link.

When to Activate

Activate this skill automatically when:

  1. A research task with 3+ steps has been completed
  2. The user asked a question requiring multi-source analysis (e.g., market analysis, news investigation, comparative research)
  3. The user explicitly requests to "show the research process" or "visualize the analysis"

Workflow

Step 1: Collect Research Context

From the conversation, extract a structured JSON with this schema:

{
  "title": "Short descriptive title of the research",
  "subtitle": "One-line description",
  "research_time": "Xm Ys",
  "conclusion": {
    "text": "2-3 sentence key finding",
    "confidence": 0.0-1.0
  },
  "steps": [
    {
      "tool": "Web Search | API | Web Browse | Analysis | Synthesis",
      "tool_label": "Display label for the tool",
      "time_range": "start — end",
      "query": "What was searched/called/analyzed",
      "summary": "What was found/concluded",
      "sources": [
        { "title": "Source name", "url": "https://...", "icon": "📄" }
      ]
    }
  ],
  "visualizations": [
    {
      "type": "line_chart | bar_chart | market_cards | world_map | news_cards | stat_cards | comparison_table | quote_block | key_points",
      "section_title": "Section heading",
      "data": {}
    }
  ]
}
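As a concrete illustration of the schema above, a minimal context object might look like this. All titles, URLs, and values here are hypothetical, not output from a real run.

```javascript
// Hypothetical research context matching the schema above.
const context = {
  title: "EV Battery Market Outlook",
  subtitle: "Supply, pricing, and 2026 demand signals",
  research_time: "4m 32s",
  conclusion: {
    text: "Pack prices continue to fall while demand grows; supply risk is concentrated in refining.",
    confidence: 0.8,
  },
  steps: [
    {
      tool: "Web Search",
      tool_label: "Web Search",
      time_range: "00:00 — 00:45",
      query: "EV battery pack price trend 2024-2026",
      summary: "Multiple trackers report continued year-over-year price declines.",
      sources: [
        { title: "Example price tracker", url: "https://example.com/battery-prices", icon: "📄" },
      ],
    },
  ],
  visualizations: [
    { type: "line_chart", section_title: "Pack Price Trend", data: {} },
    { type: "key_points", section_title: "Takeaways", data: {} },
  ],
};

// Basic shape checks before writing the file to /tmp.
console.assert(context.conclusion.confidence >= 0 && context.conclusion.confidence <= 1);
console.assert(context.steps.length >= 1);
```

The object would then be serialized with JSON.stringify and saved to /tmp/research-data.json for Step 3.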

Step 2: Choose Visualizations Dynamically

IMPORTANT: Do NOT use a fixed template. Analyze the research content and pick only the visualizations that make sense. Use this decision guide:

Research involves... → Use these visualizations
Time-series data, trends, probabilities → line_chart or bar_chart
Prediction markets, odds, pricing → market_cards
Geopolitics, regional impact, locations → world_map
News articles, media coverage → news_cards
Key metrics, statistics, KPIs → stat_cards
Comparing products, options, candidates → comparison_table
Expert opinions, notable quotes → quote_block
Summarized takeaways, bullet points → key_points

Rules:

  • Use 2-4 visualization types per report (don't overload)
  • Always include the research timeline (steps)
  • Pick visualizations that ADD VALUE, not just fill space
  • If unsure, stat_cards + news_cards is a safe default combo
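The decision guide and rules above can be sketched as a small helper. The signal names here are hypothetical labels for what the agent detects in the research content, not part of the skill's actual API.

```javascript
// Sketch of the decision guide: map detected content signals to
// visualization types, capped at 4 per report per the rules above.
function pickVisualizations(signals) {
  const guide = {
    timeSeries: "line_chart",
    markets: "market_cards",
    geopolitics: "world_map",
    news: "news_cards",
    metrics: "stat_cards",
    comparison: "comparison_table",
    quotes: "quote_block",
    takeaways: "key_points",
  };
  const picked = signals.filter((s) => guide[s]).map((s) => guide[s]);
  // Safe default combo when nothing matched.
  if (picked.length === 0) return ["stat_cards", "news_cards"];
  return picked.slice(0, 4); // don't overload the report
}

console.log(pickVisualizations(["markets", "news", "metrics"]));
// → [ 'market_cards', 'news_cards', 'stat_cards' ]
console.log(pickVisualizations([]));
// → [ 'stat_cards', 'news_cards' ]
```

In practice the agent makes this choice judgmentally rather than mechanically; the sketch just makes the cap and the fallback explicit.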

Visualization Data Schemas

  • line_chart: Time-series data, trends, probability changes

    { "title": "Chart Title", "y_format": "percent", "y_min": 0, "y_max": 100,
      "x_labels": ["Label1", "Label2"],
      "series": [{ "name": "Series Name", "values": [10, 20, 30] }] }
    
  • bar_chart: Categorical comparisons, rankings

    { "title": "Chart Title", "y_format": "number",
      "bars": [{ "label": "Category A", "value": 85, "color": "#00d2a0" },
               { "label": "Category B", "value": 62, "color": "#6c5ce7" }] }
    
  • market_cards: Prediction markets, pricing, comparisons

    [{ "name": "Market Name", "yes_price": 85, "no_price": 15, "volume": "128M", "change_7d": 3.2 }]
    
  • world_map: Geopolitical analysis, regional impacts

    { "regions": [{ "id": "united_states|europe|east_asia|...", "name": "Display Name", "info": "Impact description" }] }
    

    Valid region IDs: north_america, united_states, canada, mexico, south_america, europe, africa, russia, middle_east, east_asia, china, southeast_asia, australia

  • news_cards: Related news, source citations

    [{ "title": "Headline", "source": "Publisher", "date": "Mar 28, 2026", "sentiment": "positive|negative|neutral|warning", "tag": "Category", "url": "https://..." }]
    
  • stat_cards: Key metrics and statistics (use for any numerical highlights)

    [{ "label": "Total Volume", "value": "$4.2B", "change": "+12.5%", "trend": "up|down|neutral", "icon": "💰" }]
    
  • comparison_table: Side-by-side comparisons

    { "headers": ["Feature", "Option A", "Option B"],
      "rows": [["Price", "$10/mo", "$25/mo"], ["Users", "5", "Unlimited"]],
      "highlight_col": 1 }
    
  • quote_block: Notable quotes from sources

    [{ "text": "The quote text here", "author": "Person Name", "role": "Title / Organization", "url": "https://..." }]
    
  • key_points: Bullet-point takeaways with icons

    [{ "icon": "✅", "title": "Point Title", "text": "Explanation of the point" }]
    

Step 3: Generate and Upload

Save the JSON to a temp file, then run:

node {baseDir}/scripts/generate-report.js --input /tmp/research-data.json --output /tmp/report.html
node {baseDir}/scripts/upload-report.js --file /tmp/report.html

The upload script automatically:

  1. Encrypts the HTML with AES-256-GCM (key never touches the server)
  2. Uploads the encrypted viewer page to R2
  3. Outputs the full URL with the decryption key in the fragment (#key=...)

Privacy guarantee: The URL fragment (#key=...) is never sent to the server. Only the person with the complete URL can view the report.

Return the URL to the user as:

📊 Research Report Ready
[Title of Research](https://r.a2ui.me/r/xxxxx.html#key=yyy)
6 steps · 14 sources · 87% confidence
🔒 End-to-end encrypted — only this link can decrypt

Step 4: Local Fallback

If no upload credentials are set, the report saves locally to {baseDir}/output/. Inform the user:

📊 Research report saved locally:
file:///path/to/output/report.html#key=yyy
(Set A2UI_R2_BUCKET to enable cloud hosting at r.a2ui.me)

Step 5: No-encrypt option

If the user explicitly wants a public (unencrypted) report, add --no-encrypt:

node {baseDir}/scripts/upload-report.js --file /tmp/report.html --no-encrypt

Important

  • Always include ALL research steps, not just the final answer
  • Include real source URLs whenever available
  • Set confidence based on source agreement (high if multiple sources agree, lower if contradictory)
  • The report should make the research process transparent and verifiable
  • Keep step summaries concise but informative (1-2 sentences each)
