Perplexity Research

v2.0.0

Conduct deep research using Perplexity Agent API with web search, reasoning, and multi-model analysis. Use when the user needs current information, market re...

by Joe Hu (@hushenglang)
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's code and SKILL.md implement a Perplexity API client with web search, streaming, multi-model comparison and cost tracking — all coherent with the name/description. However, the registry metadata at the top of the report lists 'Required env vars: none' while the SKILL.md and manifest explicitly require PERPLEXITY_API_KEY; this mismatch is an internal inconsistency that should be resolved.
Instruction Scope
The runtime instructions and the client code instruct the agent to perform web searches (web_search tool) and even include text telling the model to 'NEVER ask permission to search - just search when appropriate.' That guidance effectively encourages autonomous searching without explicit user confirmation and widens the skill's autonomy. Apart from that, the SKILL.md and code otherwise constrain activity to the Perplexity API and to reading a .env in the skill's scripts/ directory (expected for API keys).
Install Mechanism
There is no install spec (instruction-only skill), and included manifest/README recommend standard pip dependencies (perplexity, python-dotenv). No arbitrary downloads or extraction from untrusted URLs are present in the package contents provided.
Credentials
The skill asks for a single API credential (PERPLEXITY_API_KEY), which is proportionate to the declared functionality. However, registry metadata incorrectly lists 'none' for required env vars while manifest and SKILL.md require the key — this discrepancy can lead to surprises at install/runtime if the platform doesn't prompt for the secret. The code reads .env from the skill's scripts/ directory; ensure .env is not accidentally included in backups or shared.
Persistence & Privilege
The skill does not request 'always: true' and does not modify system-wide configurations. It only reads a local .env and otherwise uses the network to call the Perplexity API. Example workflows write reports to user-specified output paths (user-initiated).
What to consider before installing
This skill appears to be a legitimate Perplexity API client for web research, but take these precautions before installing or enabling it:

  1. Confirm the platform will request or store PERPLEXITY_API_KEY securely (the manifest and SKILL.md require it; the registry header did not).
  2. Review, and consider removing or changing, the hard-coded guidance in the client that tells the model to 'NEVER ask permission to search'; it encourages autonomous web searches that may query external sites without user consent.
  3. Keep any .env files out of source control and backups; the manifest excludes scripts/.env, but double-check your packaging.
  4. Inspect the included 'perplexity' dependency (pip package) if you want higher assurance about upstream behavior.

If you are uncomfortable with the agent performing searches autonomously, do not enable the skill for autonomous invocation, or ensure your agent's policy prevents unprompted web searches.


latest: vk9764v86g0rs7gyvwda31pf7yh82eca7
796 downloads · 0 stars · 2 versions · Updated 1mo ago
v2.0.0 · MIT-0

Perplexity Research

Research assistant powered by Perplexity Agent API with web search and reasoning capabilities.

Quick Start

The Perplexity client is available at scripts/perplexity_client.py in this skill folder.

Default model: openai/gpt-5.2 (latest GPT model)

Key capabilities:

  • Web search for current information
  • High reasoning effort for deep analysis
  • Multi-model comparison
  • Streaming responses
  • Cost tracking

Common Research Patterns

1. Deep Research Query

Use for comprehensive analysis requiring web search and reasoning:

# Import from skill scripts folder
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent / "scripts"))
from perplexity_client import PerplexityClient

client = PerplexityClient()
result = client.research_query(
    query="Your research question here",
    model="openai/gpt-5.2",
    reasoning_effort="high",
    max_tokens=2000
)

if "error" not in result:
    print(result["answer"])
    print(f"Tokens: {result['tokens']}, Cost: ${result['cost']}")

2. Quick Web Search

Use for time-sensitive or current information:

result = client.search_query(
    query="Your question about current events",
    model="openai/gpt-5.2",
    max_tokens=1000
)

3. Model Comparison

Use when output quality is critical:

results = client.compare_models(
    query="Your question",
    models=["openai/gpt-5.2", "anthropic/claude-3-5-sonnet", "google/gemini-2.0-flash"],
    max_tokens=300
)

for result in results:
    if "error" not in result:
        print(f"\n{result['model']}: {result['answer']}")

4. Streaming for Long Responses

Use for better UX with lengthy analysis:

client.stream_query(
    query="Your question",
    model="openai/gpt-5.2",
    use_search=True,
    max_tokens=2000
)

Research Workflow

When conducting research:

  1. Initial exploration: Use research_query() with web search enabled
  2. Validate findings: Compare key insights across models with compare_models()
  3. Deep dive: Use streaming for detailed analysis on specific aspects
  4. Cost-aware: Monitor token usage and costs in results
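The steps above can be sketched as a single helper. This is a hypothetical wrapper, not part of the skill, assuming the client methods and result shapes shown elsewhere in this document; step 3 (streaming) is interactive and left out:

```python
def run_research_workflow(client, question, models, max_tokens=2000):
    """Chain the workflow: explore, validate across models, and track total cost."""
    # 1. Initial exploration with web search and high reasoning
    initial = client.research_query(
        query=question, reasoning_effort="high", max_tokens=max_tokens
    )
    if "error" in initial:
        return initial

    # 2. Validate key findings across models
    comparisons = client.compare_models(query=question, models=models, max_tokens=300)

    # 4. Aggregate cost across every call made
    total_cost = initial.get("cost", 0.0) + sum(
        r.get("cost", 0.0) for r in comparisons if "error" not in r
    )
    return {"answer": initial["answer"], "comparisons": comparisons, "total_cost": total_cost}
```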

Model Selection

Default: openai/gpt-5.2 (Latest GPT model)

Alternative models:

  • anthropic/claude-3-5-sonnet - Strong reasoning, balanced performance
  • google/gemini-2.0-flash - Fast, cost-effective
  • meta/llama-3.3-70b - Open source alternative

Switch models based on:

  • Quality needs (GPT-5.2 for best results)
  • Speed requirements (Gemini Flash for quick answers)
  • Cost constraints (compare costs in results)
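One way to encode those trade-offs is a small lookup table. The priority names below are illustrative, not part of the skill's API; the model IDs are the ones listed above:

```python
# Illustrative mapping from priority to the models listed above
MODEL_BY_PRIORITY = {
    "quality": "openai/gpt-5.2",            # best results
    "speed": "google/gemini-2.0-flash",     # quick answers
    "cost": "google/gemini-2.0-flash",      # cost-effective
    "balanced": "anthropic/claude-3-5-sonnet",
    "open": "meta/llama-3.3-70b",
}

def pick_model(priority="quality"):
    """Fall back to the skill's default model for unknown priorities."""
    return MODEL_BY_PRIORITY.get(priority, "openai/gpt-5.2")
```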

Reasoning Effort Levels

Control analysis depth with reasoning_effort:

  • "low" - Quick answers, minimal reasoning
  • "medium" - Balanced reasoning (default for most queries)
  • "high" - Deep analysis, comprehensive research (recommended for research)
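As a sketch, you might map task types to effort levels before calling the client. The task names here are hypothetical, not part of the skill:

```python
EFFORT_BY_TASK = {
    "lookup": "low",      # quick factual answers
    "summary": "medium",  # balanced reasoning
    "research": "high",   # deep, comprehensive analysis
}

def effort_for(task):
    """Default to medium, matching the recommendation above for most queries."""
    return EFFORT_BY_TASK.get(task, "medium")
```

A call would then look like client.research_query(query=q, reasoning_effort=effort_for("research")).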

Environment Setup

Ensure PERPLEXITY_API_KEY is set:

export PERPLEXITY_API_KEY='your_api_key_here'

Or create a .env file in the skill's scripts/ directory:

PERPLEXITY_API_KEY=your_api_key_here
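A minimal sketch of the lookup order implied above (environment variable first, then a .env file). The actual client may load the key differently, for example via python-dotenv:

```python
import os
from pathlib import Path

def load_api_key(env_file="scripts/.env"):
    """Return PERPLEXITY_API_KEY from the environment, falling back to a .env file."""
    key = os.environ.get("PERPLEXITY_API_KEY")
    if key:
        return key
    path = Path(env_file)
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line.startswith("PERPLEXITY_API_KEY="):
                # Strip optional surrounding quotes
                return line.split("=", 1)[1].strip().strip("'\"")
    return None
```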

Error Handling

All methods return error information:

result = client.research_query("Your question")

if "error" in result:
    print(f"Error: {result['error']}")
    # Handle error appropriately
else:
    # Process successful result
    print(result["answer"])

Cost Optimization

  • Use max_tokens to limit response length
  • Start with lower reasoning effort, increase if needed
  • Use search_query() instead of research_query() for simpler questions
  • Monitor costs via result["cost"] field
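The "start low, escalate if needed" advice can be sketched as a wrapper. This is a hypothetical helper (the answer-length threshold is an arbitrary heuristic), assuming the result shape shown earlier:

```python
def research_with_escalation(client, question, min_chars=400):
    """Try medium effort first; re-run at high effort only if the answer looks thin."""
    result = client.research_query(
        query=question, reasoning_effort="medium", max_tokens=1000
    )
    if "error" in result:
        return result
    if len(result.get("answer", "")) < min_chars:
        # Escalate: pay for deep reasoning only when the cheap pass falls short
        result = client.research_query(
            query=question, reasoning_effort="high", max_tokens=2000
        )
    return result
```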

Integration Examples

Investment Research

client = PerplexityClient()

# Market analysis
result = client.research_query(
    query="Analyze recent developments in AI chip market and key competitors",
    reasoning_effort="high"
)

# Company deep dive
result = client.search_query(
    query="Latest earnings report for NVIDIA Q4 2025"
)

# Multi-model validation
results = client.compare_models(
    query="What are the biggest risks in the semiconductor industry?",
    models=["openai/gpt-5.2", "anthropic/claude-3-5-sonnet"]
)

Trend Analysis

# Current trends with web search
result = client.research_query(
    query="Emerging trends in sustainable investing and ESG adoption rates",
    reasoning_effort="high",
    max_tokens=2000
)

# Stream for real-time updates
client.stream_query(
    query="Latest developments in quantum computing commercialization",
    use_search=True
)

Multi-Turn Research

# Build context across multiple queries
messages = [
    {"role": "user", "content": "What is the current state of fusion energy?"},
    {"role": "assistant", "content": "...previous response..."},
    {"role": "user", "content": "Which companies are leading in this space?"}
]

result = client.conversation(
    messages=messages,
    use_search=True
)
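Building the next turn from a previous result can be factored into a small helper. This is a sketch, assuming the message and result shapes shown above:

```python
def next_turn(messages, result, follow_up):
    """Return a new message list with the previous answer and the next user question.

    The original list is left untouched so earlier turns can be reused.
    """
    return messages + [
        {"role": "assistant", "content": result["answer"]},
        {"role": "user", "content": follow_up},
    ]
```

The returned list can then be passed back into client.conversation(messages=..., use_search=True).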

Best Practices

  1. Default to research_query() for most research tasks - it combines web search with high reasoning
  2. Use streaming for user-facing applications to show progress
  3. Compare models for critical decisions or when quality is paramount
  4. Set reasonable max_tokens - 1000 for summaries, 2000+ for deep analysis
  5. Track costs - access via result["cost"] and result["tokens"]
  6. Handle errors gracefully - always check for "error" key in results
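Practices 5 and 6 (track costs, check for errors) can be combined into a small accumulator. This is a hypothetical helper, assuming result dicts with "cost" and "tokens" fields as shown above:

```python
class CostTracker:
    """Accumulate cost and token totals across result dicts; count errors separately."""

    def __init__(self):
        self.total_cost = 0.0
        self.total_tokens = 0
        self.errors = 0

    def record(self, result):
        if "error" in result:
            self.errors += 1
        else:
            self.total_cost += result.get("cost", 0.0)
            self.total_tokens += result.get("tokens", 0)
        return result  # pass through so calls can be wrapped inline
```

Usage: result = tracker.record(client.research_query("...")).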

API Reference

See reference.md for complete API documentation, or scripts/perplexity_client.py for:

  • Full method signatures
  • Additional parameters
  • CLI usage examples
  • Implementation details

Command Line Usage

Run from the skill directory:

# Research mode
python scripts/perplexity_client.py research "Your question"

# Web search
python scripts/perplexity_client.py search "Your question"

# Streaming
python scripts/perplexity_client.py stream "Your question"

# Compare models
python scripts/perplexity_client.py compare "Your question"
