Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Echo - OpenClaw Perplexity Ultimate Async Deep Researcher

v1.0.0

Perform deep, concurrent web research using the Perplexity Search API.

by Chris Lee (@holygrass)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for holygrass/echo-openclaw-perplexity-ultimate-async-deep-researcher.

Prompt Preview: Install & Setup
Install the skill "Echo - OpenClaw Perplexity Ultimate Async Deep Researcher" (holygrass/echo-openclaw-perplexity-ultimate-async-deep-researcher) from ClawHub.
Skill page: https://clawhub.ai/holygrass/echo-openclaw-perplexity-ultimate-async-deep-researcher
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: PERPLEXITY_API_KEY
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install echo-openclaw-perplexity-ultimate-async-deep-researcher

ClawHub CLI


npx clawhub@latest install echo-openclaw-perplexity-ultimate-async-deep-researcher
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name, description, required binary (python3), and the single required env var (PERPLEXITY_API_KEY as primary credential) are consistent with a skill that queries the Perplexity Search API for research data.
Instruction Scope
SKILL.md confines behavior to formulating 3–5 queries, running the provided Python script, parsing the JSON output, and synthesizing cited results. It does not ask to read unrelated files or environment variables. However, it explicitly mandates running the provided script, including a pip install via subprocess, which lets the agent install and execute additional code at runtime and expands the attack surface beyond simple API calls.
Install Mechanism
There is no install spec in the registry metadata, but the runtime script auto-installs the 'perplexityai' package by invoking 'pip install' via subprocess. This performs network fetches and writes packages to disk at runtime (moderate supply-chain risk). Installing from PyPI is common, but runtime auto-install is a hidden install behavior that may be unexpected in sandboxed or audited environments and is susceptible to typosquatting or malicious package replacement.
Credentials
The skill requires only PERPLEXITY_API_KEY and accesses only that environment variable in the script. No unrelated secrets, config paths, or additional credentials are requested.
Persistence & Privilege
The skill is not marked 'always' and does not request persistent system-wide changes. Still, its runtime behavior (pip installing a package and executing Python code via subprocess) requires write/network privileges in the execution environment; users running in shared/sensitive environments should be cautious.
What to consider before installing
This skill appears to do what it claims (run Perplexity searches), but it requires the agent to install a PyPI package at runtime and execute Python subprocesses. Before installing or enabling it: (1) decide whether you trust runtime pip installs in your environment, or pre-install the 'perplexityai' SDK in a controlled image; (2) run the skill in a network-restricted sandbox or an environment with limited filesystem impact where possible; (3) ensure your PERPLEXITY_API_KEY has appropriate scope, and rotate it if you later stop using the skill; (4) for stronger assurance, ask the skill author for an explicit install spec (package source, checksums), or provide the dependency yourself and remove the auto-install step.
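If you take option (4) and remove the auto-install step, a minimal fail-fast check could replace it. This is a sketch, not part of the skill: the helper name `require` is hypothetical, and the pairing of the 'perplexityai' package with the 'perplexity' import module follows the skill's own import statement.

```python
import importlib.util
import sys

def require(package: str, module: str) -> None:
    # Fail fast instead of pip-installing at runtime: safer in
    # audited or network-restricted environments.
    if importlib.util.find_spec(module) is None:
        sys.exit(f"Missing dependency '{package}'. "
                 f"Pre-install it with: pip install {package}")

# In the skill's script this would be:
#   require("perplexityai", "perplexity")
# Demonstrated here with a stdlib module so the check passes:
require("json", "json")
```

If the module is absent, `require` exits with a clear message instead of silently fetching code from PyPI.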

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3
Env: PERPLEXITY_API_KEY
Primary env: PERPLEXITY_API_KEY
Latest: vk97dppbd69c76df3ywmfr6ph0x8206yt
423 downloads
0 stars
1 version
Updated 11h ago
v1.0.0
MIT-0

Echo - OpenClaw Perplexity Ultimate Async Deep Researcher

You are an expert autonomous researcher. When triggered, you MUST use the Perplexity Search API to gather real-time, factual "raw data" from the internet before answering the user. Do not rely solely on your internal training data.

Execution Workflow

You must strictly follow these 3 stages:

Stage 1: Query Formulation

Analyze the user's research request.

Break the core topic down into 3 to 5 highly specific search queries; for example, instead of "AI news", use "AI medical diagnosis accuracy 2026".
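As an illustration, Stage 1 for a topic like "AI in medicine" might produce a list such as the following; the topic and queries here are hypothetical, not taken from the skill:

```python
topic = "AI in medicine"

# Specific, time-bounded queries retrieve better raw data than
# broad ones like "AI news".
queries = [
    "AI medical diagnosis accuracy 2026",
    "FDA approved AI diagnostic tools 2025",
    "AI radiology false positive rate clinical study",
]

# Stage 1 requires between 3 and 5 queries.
assert 3 <= len(queries) <= 5
```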

Stage 2: Execute Async Search

You must use your code execution tool (Python) to run the exact script below.

Instructions for Agent:

  1. Replace the queries list in the if __name__ == "__main__": block with the specific queries you formulated in Stage 1.
  2. Run the code and read the JSON output from stdout.
import asyncio
import json
import sys
import subprocess
import os

# Auto-install dependency to ensure zero-setup for the user
try:
    from perplexity import AsyncPerplexity
except ImportError:
    print("Installing perplexityai...", file=sys.stderr)  # keep stdout JSON-only for the agent
    subprocess.check_call([sys.executable, "-m", "pip", "install", "perplexityai", "-q"])
    from perplexity import AsyncPerplexity

async def fetch_results(queries):
    # Ensure API Key exists
    if not os.environ.get("PERPLEXITY_API_KEY"):
        print(json.dumps({"error": "PERPLEXITY_API_KEY environment variable is not set."}, ensure_ascii=False))
        return

    client = AsyncPerplexity(
        api_key=os.environ.get("PERPLEXITY_API_KEY"),
    )

    # Create async tasks for concurrent execution
    tasks = [
        client.search.create(query=q, max_results=5, max_tokens_per_page=2048)
        for q in queries
    ]

    responses = await asyncio.gather(*tasks, return_exceptions=True)

    output = {}
    for q, res in zip(queries, responses):
        if isinstance(res, Exception):
            output[q] = {"error": str(res)}
        else:
            # Extract only necessary raw data to save context window limits
            output[q] = [
                {"title": r.title, "url": r.url, "snippet": r.snippet}
                for r in res.results
            ]

    # Output strictly as JSON for the LLM to parse
    print(json.dumps(output, ensure_ascii=False, indent=2))

if __name__ == "__main__":
    # AGENT: Replace this list with your formulated queries
    queries = ["QUERY_1", "QUERY_2", "QUERY_3", "QUERY_4", "QUERY_5"]
    asyncio.run(fetch_results(queries))

Stage 3: Synthesis and Citation

Read the JSON output generated by the python script.

Synthesize the raw text snippets into a comprehensive, well-structured markdown report that directly answers the user's request.

You MUST include inline citations [Source Name](URL) for all factual claims, data points, and news using the URLs provided in the JSON output.

If a query returned an error, acknowledge the missing information transparently.
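The Stage 3 mapping from the script's JSON output to cited markdown can be sketched as follows. The sample JSON is fabricated for illustration, and example.com is a placeholder URL:

```python
import json

# Shape matches the script's output: query -> list of results,
# or an error object when a search failed.
raw = '''{
  "AI medical diagnosis accuracy 2026": [
    {"title": "Example Study", "url": "https://example.com/study",
     "snippet": "Diagnostic accuracy reached 94% in trials."}
  ],
  "FDA approvals 2025": {"error": "timeout"}
}'''

lines = []
for query, results in json.loads(raw).items():
    if isinstance(results, dict) and "error" in results:
        # Acknowledge missing information transparently, per Stage 3.
        lines.append(f"No results for '{query}': {results['error']}")
        continue
    for r in results:
        # Inline citation in [Source Name](URL) form.
        lines.append(f"{r['snippet']} [{r['title']}]({r['url']})")

report = "\n".join(lines)
print(report)
```

Each factual snippet carries its citation, and failed queries surface as explicit gaps rather than being silently dropped.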
