prismfy-search

Search the web across 10 engines — Google, Reddit, GitHub, arXiv, Hacker News, and more — using Prismfy. Use when the user asks to search the web, look somet...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description claim live web search via Prismfy. The declared requirements are a single PRISMFY_API_KEY plus the curl and jq binaries, which the bundled search.sh uses to call https://api.prismfy.io endpoints. No unrelated credentials, services, or unusual binaries are requested.
Instruction Scope
SKILL.md and search.sh only instruct running the bundled script and storing an API key in environment files (.bashrc, .env, or ~/.claude/.env). This is expected for an API-based search helper. Note: the docs recommend adding the API key to a global Claude env (~/.claude/.env), which will expose the key to all Claude Code sessions — users should be aware of that persistence and scope.
Install Mechanism
No install spec or remote downloads; the skill is instruction-only with a local helper script (search.sh). Nothing is written to disk by an installer and there are no URLs or archive extracts in the install process.
Credentials
Only PRISMFY_API_KEY is required (declared as primaryEnv). The script uses that key to authenticate to Prismfy API and does not access other environment variables or config paths. The single API key is proportional to the stated functionality.
Persistence & Privilege
Skill is not marked always:true and is user-invocable (normal defaults). It does not modify other skills or request system-wide config changes. Autonomous invocation is permitted by platform default and is not, by itself, a red flag here.
Assessment
This skill appears to do exactly what it says: call Prismfy's API and return results. Before installing, consider:

  • Privacy: search queries and returned snippets are sent to Prismfy (https://prismfy.io). If you will search sensitive or internal content, do not send it through a third-party search proxy.
  • API key handling: follow least-privilege storage. Avoid committing keys into repos or putting them in globally-applied files unless you intend every local session to have access. Prefer a project .env or an ephemeral session secret if you worry about broad exposure.
  • Quotas and paid engines: some engines (Google, Reddit, GitHub, arXiv) may require a paid Prismfy plan per the docs. Use /search --quota to check usage.
  • Provider trust: review Prismfy's docs and privacy policy if you plan to route many queries or sensitive data through their service.

If any of the above concerns are unacceptable, do not install, or avoid adding the PRISMFY_API_KEY to global environment files. Otherwise the skill is coherent with its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔍 Clawdis
Bins: curl, jq
Env: PRISMFY_API_KEY
Primary env: PRISMFY_API_KEY

SKILL.md

🔍 Prismfy Web Search

Real-time web search across 10 engines — Google, Reddit, GitHub, arXiv, Hacker News, Ask Ubuntu, and more — powered by Prismfy. No proxy hassle, no CAPTCHA, no blocked requests. Just results.

Get your free API key at prismfy.io and you're ready to go.


Setup

1. Get an API key

Head to prismfy.io, create an account, and grab your API key from the dashboard. There's a free tier — no credit card needed to get started.


2. Add the key to your environment

Pick the method that fits your setup:

Option A — Shell profile (works everywhere, permanent)

# Add to ~/.zshrc or ~/.bashrc:
export PRISMFY_API_KEY="ss_live_your_key_here"

# Then reload:
source ~/.zshrc   # or: source ~/.bashrc

Option B — Project .env file (per-project)

# In your project root, create or edit .env:
echo 'PRISMFY_API_KEY=ss_live_your_key_here' >> .env

Claude Code automatically loads .env files from the project root.
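
If you go with Option B, it is worth making sure the key never lands in version control. A minimal sketch (assumes the .env file sits in the project root, as above):

```shell
# Keep the per-project key out of git; creates .gitignore if it does not exist yet.
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Confirm the key is present without echoing the secret itself.
if grep -q '^PRISMFY_API_KEY=' .env 2>/dev/null; then
  echo "key found in .env"
else
  echo "no key in .env yet"
fi
```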

Option C — Claude Code global env (recommended)

# Add to ~/.claude/.env (applies to all Claude Code sessions):
echo 'PRISMFY_API_KEY=ss_live_your_key_here' >> ~/.claude/.env

3. Verify it works

bash search.sh --quota

You should see your plan, searches used, and how many you have left. If you see PRISMFY_API_KEY is not set — check that you reloaded your shell or that the .env file is in the right place.
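
If the verification step fails, a small preflight like this (a sketch, not part of the skill) narrows down whether the problem is a missing binary or a missing key:

```shell
# Check that the named binaries exist, then that the API key is set.
preflight() {
  for bin in "$@"; do
    command -v "$bin" >/dev/null 2>&1 || { echo "missing binary: $bin"; return 1; }
  done
  [ -n "${PRISMFY_API_KEY-}" ] || { echo "PRISMFY_API_KEY is not set"; return 1; }
  echo "preflight ok"
}

preflight curl jq || echo "fix the issue above, then re-run: bash search.sh --quota"
```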


That's it. No credit card, no waitlist. 3,000 free searches every month.


How to use

Just ask naturally — the skill handles the rest:

/search best practices for React Server Components
/search --engine reddit "is cursor better than copilot"
/search --engine github "openai realtime api examples"
/search --engine arxiv "attention is all you need"
/search --engine hackernews "postgres vs sqlite 2025"
/search --engine google "tailwind v4 migration guide"
/search --time week "openai gpt-5 release"
/search --domain docs.python.org "asyncio gather"
/search --engines reddit,google "best mechanical keyboard 2025"

Or just talk normally:

  • "Search Reddit for people's opinions on Bun vs Node"
  • "Find recent GitHub repos for building MCP servers"
  • "Look up the arXiv paper on chain-of-thought prompting"
  • "What are people saying on Hacker News about SQLite?"

Available engines

Engine        What it's good for
brave         General web search, privacy-first
startpage     Google results without tracking
yahoo         General web, news
google        Most comprehensive web search
reddit        Real user opinions, discussions
github        Code, repos, issues, READMEs
arxiv         Academic papers, research
hackernews    Tech community, startups
askubuntu     Linux, Ubuntu, shell questions
yahoonews     Latest news headlines

Default (no --engine): uses brave + yahoo in parallel.


Options

Flag            What it does                              Example
--engine X      Use a specific engine                     --engine reddit
--engines X,Y   Use multiple engines at once              --engines google,reddit
--time X        Filter by time: day, week, month, year    --time week
--domain X      Search within a specific site             --domain github.com
--page N        Go to results page N                      --page 2
--quota         Check your remaining free quota           --quota

What Claude does with results

This skill doesn't just paste a list of links. Claude:

  • Answers your question using the search results as live context
  • Cites sources with URLs so you can dig deeper
  • Extracts code from results when you're looking for examples
  • Summarizes discussions when searching Reddit or Hacker News
  • Suggests follow-up searches if the first results aren't quite right

How the skill works

The skill uses search.sh — a bundled helper script that handles the API call, error messages, and result formatting for you:

# Simple search
bash search.sh "typescript best practices 2025"

# With engine
bash search.sh --engine reddit "is bun worth switching from node"

# Multiple engines
bash search.sh --engines google,reddit "nextjs vs remix"

# With time filter
bash search.sh --time week "openai new model"

# Raw JSON output
bash search.sh --raw "rust async runtime"

Results come back with title, URL, snippet, and which engine found it. Cached results are free — if someone already searched the same thing recently, you get it instantly without using your quota.
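
For reference, this is roughly how jq can turn a JSON response into the one-line-per-hit shape described above. The field names (results, engine, title, url) are illustrative; the actual Prismfy schema may differ:

```shell
# Illustrative response shape; the real Prismfy schema may differ.
sample='{"results":[{"engine":"brave","title":"SQLite docs","url":"https://sqlite.org/docs.html","snippet":"Official documentation"}]}'

# One line per hit: engine, title, URL.
echo "$sample" | jq -r '.results[] | "[\(.engine)] \(.title) -> \(.url)"'
# prints: [brave] SQLite docs -> https://sqlite.org/docs.html
```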


Check your quota

/search --quota

This calls /v1/user/me and shows your current plan, searches used, searches remaining, and when your quota resets.
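
Under the hood that is a single authenticated request. A rough curl equivalent, with the caveat that the Bearer-header auth format is an assumption here and should be confirmed against Prismfy's docs:

```shell
# Build the request without running it, so you can inspect it first.
# Bearer-token auth is an assumption; confirm against Prismfy's docs.
quota_cmd=(curl -sS -H "Authorization: Bearer $PRISMFY_API_KEY" \
  https://api.prismfy.io/v1/user/me)

# To actually run it and pretty-print the JSON:
#   "${quota_cmd[@]}" | jq .
printf '%s\n' "${quota_cmd[@]}"
```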


Troubleshooting

PRISMFY_API_KEY is not set → Add export PRISMFY_API_KEY="ss_live_..." to your shell profile and restart the terminal.

401 Unauthorized → Double-check your key starts with ss_live_. Keys are shown only once — if lost, create a new one in the Dashboard → API Keys.

Engine not available on your plan → Google, Reddit, GitHub, and other premium engines require a paid plan. The free tier supports brave, startpage, and yahoo. Use one of those or upgrade at prismfy.io.

No results / empty results → Try a different engine or rephrase your query. The skill will suggest alternatives automatically.
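
The first two errors above can be caught locally before any request is sent. A quick sanity check, assuming the ss_live_ key prefix mentioned in the 401 note:

```shell
# Sanity-check the key format locally (assumes the ss_live_ prefix noted above).
check_prismfy_key() {
  case "${1-}" in
    "")        echo "PRISMFY_API_KEY is not set" ;;
    ss_live_*) echo "key format looks right" ;;
    *)         echo "key should start with ss_live_" ;;
  esac
}

check_prismfy_key "ss_live_example"
# prints: key format looks right
```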


Implementation

When this skill is invoked, follow these steps:

  1. Parse the user's request — extract the query, engine preference, time filter, domain, and page number.

  2. Run search.sh with the parsed arguments:

bash search.sh [--engine X] [--engines X,Y] [--time X] [--domain X] [--page N] <query>
  3. Handle the output:

    • ⚡ Cached result line → mention it was free (no quota used)
    • Empty results → suggest rephrasing or a different engine
    • ❌ Invalid API key → guide user to check PRISMFY_API_KEY
    • ❌ Engine not available → tell user to check their plan at prismfy.io
  4. Present results in a clear, useful format:

    • Answer the user's underlying question using the content
    • List sources with titles and URLs
    • For Reddit/HN results: summarize the discussion sentiment
    • For GitHub results: highlight repo name and what it does
    • For arXiv results: summarize the abstract
  5. For the --quota flag:

bash search.sh --quota
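
The flag handling in step 2 can be sketched as a small parser. This is hypothetical; the bundled search.sh may implement it differently:

```shell
# Hypothetical flag parser matching the options table; search.sh itself may differ.
parse_search_args() {
  engine="" engines="" time_filter="" domain="" page="" quota=0 query=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --engine)  engine="$2";      shift 2 ;;
      --engines) engines="$2";     shift 2 ;;
      --time)    time_filter="$2"; shift 2 ;;
      --domain)  domain="$2";      shift 2 ;;
      --page)    page="$2";        shift 2 ;;
      --quota)   quota=1;          shift ;;
      *)         query="$query${query:+ }$1"; shift ;;  # bare words join into the query
    esac
  done
}

parse_search_args --engine reddit --time week "is bun worth it"
echo "$engine / $time_filter / $query"
# prints: reddit / week / is bun worth it
```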

Powered by Prismfy — web search infrastructure for developers.

Files

2 total
