Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Scry

v0.1.0

Research any topic across 26+ sources: Reddit, X, YouTube, GitHub, HN, Bluesky, ArXiv, Dev.to, Polymarket, and more. The most comprehensive research skill av...

0 · 261 · 0 current · 0 all-time
by Vihang D (@vihangd)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vihangd/scry.

Prompt preview — Install & Setup:
Install the skill "Scry" (vihangd/scry) from ClawHub.
Skill page: https://clawhub.ai/vihangd/scry
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install scry

ClawHub CLI

npx clawhub@latest install scry
Security Scan

VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (multi-source research) align with the included modules and SKILL.md. The repository contains source modules for the many sites listed (GitHub, Reddit, X, YouTube, ArXiv, SEC EDGAR, etc.), scoring/deduplication pipelines, and a CLI orchestrator. The requested binaries/tokens are optional and match optional sources (yt-dlp for YouTube, X/Twitter cookies/tokens for X, SCRAPECREATORS_API_KEY for TikTok/Instagram, HF_TOKEN for HuggingFace, etc.).
Instruction Scope
The runtime instructions explicitly tell the agent to locate and execute scripts/scry.py (foreground, 5-minute timeout) and to read the entire output. The SKILL.md also instructs the agent to "discover available API keys and binaries." The code supports that: env.py reads ~/.config/scry/.env and environment variables and probes for binaries. This is coherent with enabling optional source access, but it does mean the script will enumerate local config and env keys when run.
Install Mechanism
No install spec is provided (no external downloads or installers). The skill is shipped with Python code (and vendored JavaScript for the X client). Running it executes the included scripts; nothing in the package attempts to fetch arbitrary install artifacts at runtime. This is the lower-risk pattern for skill distribution, but note that executing the bundled code runs network calls.
Credentials
The skill does not declare required env vars in the registry metadata, but its code reads a broad set of environment variables and config files (OPENAI_API_KEY, XAI_API_KEY, AUTH_TOKEN, CT0, THREADS_ACCESS_TOKEN, SCRAPECREATORS_API_KEY, HF_TOKEN, PRODUCTHUNT_TOKEN, SO_API_KEY, etc.) and will use them if present. This is proportionate to offering optional access to additional sources, but it means any tokens present in your environment or in ~/.config/scry/.env (or the compatibility path ~/.config/last30days/.env) could be accessed and used by the script. If you have sensitive keys in your environment, consider running the skill in a controlled environment or removing/isolating those keys.
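The "removing/isolating those keys" suggestion can be applied at invocation time by launching the process with a stripped environment. A minimal sketch, assuming a POSIX shell; `SECRET_TOKEN` is an illustrative stand-in for any real key, and in practice you would wrap the actual `python3 scripts/scry.py` call the same way:

```shell
# A token exported in the parent shell...
export SECRET_TOKEN="do-not-leak"
# ...is invisible to a child started via `env -i`, which clears the
# environment except for variables passed explicitly (here, only PATH).
env -i PATH="$PATH" sh -c 'echo "SECRET_TOKEN is: ${SECRET_TOKEN:-unset}"'
```

The child prints `SECRET_TOKEN is: unset`, confirming the token never crossed the process boundary. Note this does not protect keys stored in `~/.config/scry/.env`, which the code reads from disk regardless of the environment.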
Persistence & Privilege
The skill does not request always:true and does not alter other skills. It writes a cache to ~/.cache/scry and can read/write ~/.config/scry/.env per README guidance; those are reasonable for a local research tool. It does not request system-wide privileges beyond normal file I/O in user directories.
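Before running, you can see exactly which keys the skill could pick up from its config file by listing key names only (never values). A small sketch, assuming the standard `KEY=value` dotenv layout:

```shell
# Print only the key names from the skill's config file, if present;
# values stay on disk and are never echoed.
if [ -f "$HOME/.config/scry/.env" ]; then
  cut -d= -f1 "$HOME/.config/scry/.env"
else
  echo "no config file"
fi
```

The same check can be repeated for the compatibility path `~/.config/last30days/.env`.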
Assessment
This skill is a local research aggregator and appears coherent with its description, but it will (1) run the included Python script (scripts/scry.py), which performs many network requests, (2) probe your environment and config files for optional API keys and binaries, and (3) write a cache to ~/.cache/scry and may read ~/.config/scry/.env or ~/.config/last30days/.env. Before installing or running: (A) inspect the full contents of scripts/scry.py to confirm it does not output secrets unexpectedly; (B) avoid storing sensitive credentials in your shell environment or ~/.config/scry/.env if you do not want them used; (C) run the skill in an isolated environment (container or throwaway account) if you need to be conservative. In particular, scripts/scry.py (the orchestrator) is the first place to check for code that prints or transmits environment values.
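A quick triage before a full line-by-line review is to grep the bundled code for the kinds of patterns the scanner flagged: environment reads near network machinery. The pattern list below is illustrative, not exhaustive, and the demo file stands in for `scripts/scry.py` (in a real review, point grep at the installed skill's `scripts/` and `vendor/` directories):

```shell
# Create a tiny stand-in file for the demo.
cat > /tmp/scry_triage_demo.py <<'EOF'
import os, urllib.request
key = os.environ.get("OPENAI_API_KEY")
EOF
# Flag lines that read the environment or reference HTTP machinery.
grep -nE 'os\.environ|getenv|urllib|requests\.' /tmp/scry_triage_demo.py
```

Any hit is a starting point for manual inspection, not proof of exfiltration; the context around each match decides.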
Patterns worth reviewing

vendor/bird-search/lib/runtime-query-ids.js:50 — Environment variable access combined with network send.
vendor/bird-search/lib/twitter-client-base.js:38 — Environment variable access combined with network send.
vendor/bird-search/lib/runtime-query-ids.js:1 — File read combined with network send (possible exfiltration).

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f8d5bp6448cwa6k6pag0tjh82shcj
261 downloads · 0 stars · 1 version
Updated 3h ago · v0.1.0 · MIT-0

SCRY v1.0 — Research Any Topic Across 26+ Sources

Search Reddit, X, YouTube, GitHub, Hacker News, Bluesky, Dev.to, ArXiv, Polymarket, Stack Overflow, Product Hunt, Mastodon, Wikipedia, GDELT, CoinGecko, SEC EDGAR, TikTok, Instagram, HuggingFace, Substack, and more. Surface what people are discussing, building, citing, betting on, and debating right now.

CRITICAL: Parse User Intent

Before doing anything, parse the user's input for:

  1. TOPIC: What they want to learn about
  2. TARGET TOOL (if specified): Where they'll use the prompts
  3. QUERY TYPE:
    • RECOMMENDATIONS — "best X", "top X" → wants a LIST
    • NEWS — "what's happening with X" → wants current events
    • PROMPTING — "X prompts" → wants techniques + copy-paste prompts
    • GENERAL — anything else → wants broad understanding
  4. DOMAIN: Auto-detected or user-specified
    • tech — programming, AI, software, frameworks
    • science — research, papers, experiments
    • finance — stocks, earnings, markets
    • crypto — blockchain, tokens, DeFi
    • news — politics, geopolitics, events
    • entertainment — movies, music, gaming, social
    • general — everything else

Store these variables:

  • TOPIC = [extracted topic]
  • TARGET_TOOL = [extracted tool, or "unknown"]
  • QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | GENERAL]
  • DOMAIN = [auto-detected or user-specified]

DISPLAY your parsing:

I'll research {TOPIC} across 26+ sources to find what's been discussed in the last 30 days.

Parsed intent:
- TOPIC = {TOPIC}
- DOMAIN = {DOMAIN}
- QUERY_TYPE = {QUERY_TYPE}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}

Research typically takes 1-3 minutes. Starting now.

Research Execution

Step 1: Run the SCRY script (FOREGROUND — do NOT background this)

CRITICAL: Run in FOREGROUND with 5-minute timeout. Read the ENTIRE output.

# Locate scripts/scry.py among the known install locations.
for dir in \
  "." \
  "${CLAUDE_PLUGIN_ROOT:-}" \
  "$HOME/.claude/skills/scry" \
  "$HOME/.agents/skills/scry"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/scry.py" ] && SKILL_ROOT="$dir" && break
done

if [ -z "${SKILL_ROOT:-}" ]; then
  echo "ERROR: Could not find scripts/scry.py" >&2
  exit 1
fi

# Run the orchestrator in the foreground and read its full output.
python3 "${SKILL_ROOT}/scripts/scry.py" "$ARGUMENTS" --emit=compact

Use a timeout of 300000 (5 minutes) on the Bash call.
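When running the script by hand rather than through the Bash tool, the same cap can be applied with coreutils `timeout` (assumption: it is available on the host). Here `sleep 1` stands in for the real scry.py invocation:

```shell
# Kill the command if it runs longer than 300 seconds (5 minutes).
# The 1-second placeholder finishes well inside the cap.
timeout 300 sleep 1 && echo "finished within the cap"
```

If the wrapped command exceeds the cap, `timeout` terminates it and exits with status 124, so the `&&` branch is skipped.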

The script will automatically:

  • Detect your domain (tech/science/finance/crypto/news/entertainment/general)
  • Discover available API keys and binaries
  • Search all available sources in parallel
  • Score results with domain-aware weights
  • Deduplicate and cross-link across sources
  • Detect conflicts between sources
  • Output a comprehensive research report

Read the ENTIRE output. It contains sections for every source that returned results.

Add --domain=DOMAIN if you detected the domain in intent parsing. Add --deep if the user asked for comprehensive results.


Step 2: WebSearch Supplement

After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.

Choose queries based on QUERY_TYPE:

  • RECOMMENDATIONS: best {TOPIC} recommendations, {TOPIC} list examples
  • NEWS: {TOPIC} news 2026, {TOPIC} announcement update
  • PROMPTING: {TOPIC} prompts examples 2026, {TOPIC} techniques tips
  • GENERAL: {TOPIC} 2026, {TOPIC} discussion

Exclude reddit.com, x.com, twitter.com (covered by script).


Judge Agent: Synthesize All Sources

Ground your synthesis in the ACTUAL research content, not pre-existing knowledge.

  1. Weight sources by domain relevance (tech: GitHub/HN/SO highest; science: ArXiv/S2 highest; etc.)
  2. Cross-platform signals are strongest — items with [also on: ...] tags are most important
  3. Note conflicts between sources (flagged in the output)
  4. Extract top 3-5 actionable insights

CITATION RULES

  • Cite sparingly: 1-2 per topic, 1 per pattern
  • Priority: @handles > r/subreddits > YouTube channels > GitHub repos > HN > ArXiv > web
  • Never paste raw URLs — use source names
  • BAD: "per https://arxiv.org/abs/..." → GOOD: "per ArXiv"
  • BAD: "per @x, @y, @z" → GOOD: "per @x" (pick strongest)

If QUERY_TYPE = RECOMMENDATIONS

Extract SPECIFIC NAMES — products, tools, repos, people. Count mentions. List by popularity.


Display Format

FIRST — What I learned:

What I learned:

**{Topic 1}** — [1-2 sentences, per @handle or r/sub]

**{Topic 2}** — [1-2 sentences, per @handle or r/sub]

KEY PATTERNS from the research:
1. [Pattern] — per @handle
2. [Pattern] — per r/sub
3. [Pattern] — per GitHub repo

THEN — Stats (copy EXACTLY, replacing placeholders):

The script outputs a stats block — display it as-is. If it doesn't appear, build one:

---
✅ All agents reported back!
├─ 🟡 HN: {N} stories │ {N} points │ {N} comments
├─ 🦞 Lobsters: {N} items │ {N} points
├─ 📝 Dev.to: {N} articles │ {N} reactions
├─ 🐙 GitHub: {N} repos │ {N}★
├─ 🦋 Bluesky: {N} posts │ {N} likes
├─ 🟠 Reddit: {N} threads │ {N} upvotes
├─ 🔵 X: {N} posts │ {N} likes
├─ 🔴 YouTube: {N} videos │ {N} views
├─ 📄 ArXiv: {N} papers │ {N} citations
├─ 📊 Polymarket: {N} markets │ {odds summary}
├─ 🌐 Web: {N} pages — Source, Source, Source
└─ 🗣️ Top voices: @handle1, @handle2 │ r/sub1, r/sub2
---

Omit any source line that returned 0 results.

LAST — Invitation (adapt to QUERY_TYPE):

Include 2-3 SPECIFIC suggestions based on research findings.


WAIT FOR USER'S RESPONSE

After showing results, STOP and wait.

WHEN USER RESPONDS

  • Question → Answer from research (no new searches)
  • Go deeper → Elaborate using findings
  • Create something → Write a tailored prompt
  • Different topic → Run new research

Agent Mode (--agent flag)

If --agent in ARGUMENTS:

  1. Skip intro display
  2. Skip AskUserQuestion calls
  3. Run research + output report
  4. Stop (no follow-up invitation)

Security & Permissions

What this skill does:

  • Searches 26+ public APIs and RSS feeds for research data
  • Runs gh CLI for GitHub search (uses your existing auth)
  • Runs yt-dlp for YouTube search (public data)
  • Optionally uses ScrapeCreators API for TikTok/Instagram
  • Stores cached results in ~/.cache/scry/ (24h TTL)

What this skill does NOT do:

  • Does not post, like, or modify content on any platform
  • Does not access private accounts or data
  • Does not share API keys between providers
  • Does not write to any external service

Bundled scripts: scripts/scry.py (orchestrator), scripts/lib/ (shared utilities + source modules)
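Since cached results live in ~/.cache/scry/ with a 24h TTL, forcing a fresh run is a matter of clearing that directory. A sketch, assuming no other tool shares the path:

```shell
# Ensure the cache directory exists for the demo, then clear it;
# the next research run will re-fetch everything.
mkdir -p "$HOME/.cache/scry"
rm -rf "$HOME/.cache/scry"
[ -d "$HOME/.cache/scry" ] || echo "cache cleared"
```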
