Search Intelligence Skill
PassAudited by ClawScan on May 10, 2026.
Overview
The skill appears to be a coherent SearXNG-based search tool, but users should notice that it supports broad OSINT/security dorking and sends sensitive search queries/results through search infrastructure.
This skill looks safe to install if you want a SearXNG-backed search/dorking helper, but use it carefully: verify the package/source and Docker image, run searches only on authorized targets, treat web results as untrusted, and do not assume “full privacy” unless you control and trust the SearXNG instance.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The tool could help an agent find sensitive public exposures or reconnaissance information if used on targets without permission.
The skill deliberately generates and executes security dorks that can identify exposed files or administrative interfaces. This is disclosed and aligned with the search/OSINT purpose, but it is dual-use and should be limited to authorized targets.
Security scanning — exposed files and panels ... "find exposed .env files, admin panels, and directory listings on example.com"
Use it only for lawful, authorized research, and review generated queries before running deep or exhaustive searches.
A user could install different code or container contents over time if they do not pin a commit, package version, or image digest.
The documented setup uses a live GitHub source install and a Docker image tagged latest. This is common for development setup, but it means installed code/images can change unless the user pins and verifies versions.
git clone https://github.com/mouaad-ops/search-intelligence-skill.git ... pip install -e . ... searxng/searxng:latest
Install from a trusted source, pin a commit/version or Docker digest where practical, and inspect the repository before editable installation.
A malicious or manipulated search result could try to influence the agent if the agent treats returned snippets as instructions instead of evidence.
Search result titles/snippets from the open web are formatted directly for an AI agent's context. Retrieved web text is untrusted and could contain prompt-injection style instructions.
LLM-Ready Output | `.to_context()` formats results for AI agent consumption
Treat search results as untrusted data, not commands; keep system/developer instructions higher priority than retrieved web content.
Users may overestimate privacy, especially when searching for personal identifiers, domains, vulnerabilities, or sensitive keywords.
The privacy claim is broad. The artifacts also show that queries are sent to a SearXNG instance and routed to search engines, so privacy depends on the chosen SearXNG deployment and downstream engines.
**Zero API keys. Full privacy. 90+ engines. Intelligent dork generation.**
Use a SearXNG instance you trust, avoid sending unnecessary sensitive identifiers, and understand what your SearXNG instance logs or forwards.
