Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Travel Information and News
v1.0.0 · Search and aggregate travel news, information, and reviews from multiple sources. Designed for travel planning professionals, travel agents, tour operators,...
by Allen Niu @nhzallen
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence
Purpose & Capability
The skill's code and instructions align with a travel search/aggregation tool: it queries Tavily (primary), can fall back to Brave, and can scrape blocked sites via Puppeteer. However, the registry metadata claims no required env vars, while both SKILL.md and search.py clearly require a TAVILY_API_KEY (and optionally a BRAVE_API_KEY). That registry omission is an inconsistency users should notice.
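Before trusting the registry metadata, a pre-flight check of the kind sketched below (hypothetical code, not part of the skill) confirms which keys the script will actually need at runtime:

```python
import os

# Hypothetical pre-flight check: per the scan, the skill's code requires
# TAVILY_API_KEY, while BRAVE_API_KEY is only needed for the fallback path.
def check_required_keys(env=os.environ):
    """Return (missing required keys, absent optional keys)."""
    required = ["TAVILY_API_KEY"]
    optional = ["BRAVE_API_KEY"]
    missing = [k for k in required if not env.get(k)]
    absent_optional = [k for k in optional if not env.get(k)]
    return missing, absent_optional

if __name__ == "__main__":
    missing, absent = check_required_keys()
    if missing:
        print(f"Refusing to run: missing required keys {missing}")
    if absent:
        print(f"Fallback disabled: optional keys not set {absent}")
```

Running this before supplying real credentials makes the registry's "no required env vars" claim easy to falsify.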
Instruction Scope
SKILL.md mandates language detection and translation of all results into the query language. The script includes a simple detect_language(), but the visible code does not show a translation step (the file was truncated, so translation may exist later). SKILL.md also tells the agent to use ~/.openclaw/.env or env vars for keys, but search.py only loads ../.env and ./.env relative to the skill directory; these paths differ. The skill instructs spawning Xvfb and running Chromium/Puppeteer, and will start Xvfb if absent. These steps are legitimate for scraping, but they escalate the agent's ability to run system processes and interact with remote sites.
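The path mismatch is easy to reproduce. The sketch below (assuming search.py hand-rolls its .env parsing; the real implementation may use a library such as python-dotenv) loads only the skill-local files, which is why keys placed in ~/.openclaw/.env would never be seen:

```python
from pathlib import Path

def load_env_file(path: Path, env: dict) -> None:
    """Parse simple KEY=VALUE lines from a .env file, skipping comments."""
    if not path.is_file():
        return
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env.setdefault(key.strip(), value.strip())

def load_skill_env(skill_dir: Path) -> dict:
    """Mirror the paths the scan describes: ../.env and ./.env only."""
    env: dict = {}
    load_env_file(skill_dir.parent / ".env", env)
    load_env_file(skill_dir / ".env", env)
    # Note: ~/.openclaw/.env, the location SKILL.md mentions, is never consulted.
    return env
```

Either place keys where this loader looks, or export them in the process environment before invoking the skill.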
Install Mechanism
No formal install spec (instruction-only), so lower risk overall. The instructions recommend installing system packages (xvfb and chromium via apt-get) and puppeteer via npm; the scripts will also download a CJK font from a GitHub raw URL at runtime if it is missing. These steps are expected for browser-based scraping and PDF generation, but they cause the skill to write files (to /tmp) and spawn processes (Xvfb, node). The GitHub raw URL is a known host (reasonable), but runtime downloads mean the code fetches remote content.
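The font fetch is conditional, so the network call only happens on a cold cache. A hedged sketch of that download-if-missing pattern follows; the URL and font name are placeholders, and the skill's actual values may differ:

```python
import urllib.request
from pathlib import Path

# Hypothetical values: the real skill pins a specific CJK font and GitHub raw URL.
FONT_URL = "https://raw.githubusercontent.com/example/fonts/main/NotoSansCJK.otf"

def ensure_font(font_path: Path, url: str = FONT_URL) -> bool:
    """Download the font only if it is not already cached.

    Returns True if a download was performed, False on a cache hit.
    The download branch is the step that matters under strict network policies.
    """
    if font_path.is_file():
        return False
    font_path.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, font_path)  # runtime fetch from a remote host
    return True
```

Pre-seeding the font file is one way to keep the skill fully offline after install.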
Credentials
The code requires TAVILY_API_KEY (and optionally BRAVE_API_KEY), which is proportionate to its functionality. However, the registry's 'Required env vars: none' is inaccurate and omissive. The skill also reads .env files from the skill directory (not the ~/.openclaw path referenced in the docs). The skill propagates its environment when launching subprocesses (e.g., node with DISPLAY), so API keys present in the env could be available to spawned processes; this is expected but worth noting.
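If key leakage to child processes is a concern, the spawn can pass a scrubbed environment instead of inheriting everything. The following is a defensive sketch, not what the skill currently does per the scan:

```python
import os
import subprocess
import sys

SECRET_KEYS = {"TAVILY_API_KEY", "BRAVE_API_KEY"}

def scrubbed_env(extra=None):
    """Copy the current environment minus API secrets, then add explicit extras."""
    env = {k: v for k, v in os.environ.items() if k not in SECRET_KEYS}
    env.update(extra or {})
    return env

# Example: launch a child with DISPLAY set but without the API keys.
result = subprocess.run(
    [sys.executable, "-c",
     "import os; print('TAVILY_API_KEY' in os.environ)"],
    env=scrubbed_env({"DISPLAY": ":99"}),
    capture_output=True, text=True,
)
print(result.stdout.strip())  # → False
```

The same approach works when spawning node: pass only the variables the child genuinely needs.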
Persistence & Privilege
The skill does not request 'always: true' and uses the normal autonomous invocation default. It does, however, start background processes when performing browser scraping (it will launch Xvfb if not running and invoke node). The SKILL.md also recommends installing an external 'desktop-control' skill for simulated clicks — that external skill would grant stronger automation privileges. None of this implies permanent persistence, but it does increase runtime privileges (process spawning, desktop automation) when enabled.
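On Linux, a running X server on display :N owns a lock file at /tmp/.X<N>-lock, so "is Xvfb up?" can be answered without spawning anything. A hedged sketch of the launch-if-absent step the scan describes (the skill's actual logic was not fully visible):

```python
import os
import subprocess
from pathlib import Path

def xvfb_running(display: int = 99) -> bool:
    """X servers create /tmp/.X<display>-lock; its presence means one is up."""
    return Path(f"/tmp/.X{display}-lock").exists()

def ensure_xvfb(display: int = 99) -> None:
    """Start a background Xvfb only when no server owns the display."""
    if xvfb_running(display):
        return
    # Spawning a long-lived system process: this is the runtime-privilege
    # escalation the scan highlights. Review before enabling.
    subprocess.Popen(
        ["Xvfb", f":{display}", "-screen", "0", "1280x1024x24"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    os.environ["DISPLAY"] = f":{display}"
```

A background process started this way outlives the invocation unless the skill cleans it up, which is worth checking in the actual code.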
What to consider before installing
This skill appears to do what it says (multi-source travel search + optional scraping), but check a few things before installing or running it with real credentials:
- The skill requires a TAVILY_API_KEY (and optionally BRAVE_API_KEY) even though the registry metadata lists none — do not provide API keys unless you trust the Tavily endpoint and the skill owner.
- SKILL.md mentions ~/.openclaw/.env, but search.py actually loads .env from the skill directory (../.env and ./.env). Ensure you place API keys where the script will read them, or pass them explicitly via the process environment.
- SKILL.md mandates translating all results to the query language. The visible code shows only simple language detection; verify the translation implementation exists and review which translation service is used (that may require additional credentials or cause unexpected network calls).
- The browser scraping path will install/run system components: Xvfb, Chromium, and Node/Puppeteer; the script can start Xvfb and spawn a node process. Run scraping in a sandbox/container and review the browser_search.js and search.py code if you plan to enable --use_browser.
- The README suggests installing a 'desktop-control' skill for simulated clicking. That skill grants broader automation (mouse/keyboard) — only install it if you understand and trust it.
- The PDF path downloads a font at runtime from GitHub raw; if you have strict network policies, be aware of this runtime fetch.
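One bullet above notes that the visible code shows only simple language detection. Helpers like that are often just Unicode-range heuristics; the sketch below is hypothetical, and the skill's own detect_language() may differ:

```python
def detect_language(text: str) -> str:
    """Crude script-based guess: kana -> 'ja', hangul -> 'ko',
    CJK ideographs -> 'zh', otherwise default to 'en'.
    Real detectors (and translation services) do far more than this."""
    has_cjk = False
    for ch in text:
        code = ord(ch)
        if 0x3040 <= code <= 0x30FF:   # hiragana / katakana is unambiguously Japanese
            return "ja"
        if 0xAC00 <= code <= 0xD7A3:   # hangul syllables
            return "ko"
        if 0x4E00 <= code <= 0x9FFF:   # CJK ideographs: zh or ja, decide after kana scan
            has_cjk = True
    return "zh" if has_cjk else "en"

print(detect_language("東京のホテル"))        # → ja
print(detect_language("best hotels in Tokyo"))  # → en
```

A heuristic like this only picks the target language; the translation step itself would still require a separate service call, which is the part to verify in the full source.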
Recommendations:
- Review the full (untruncated) search.py for any hidden network endpoints or translation calls.
- Run initially without enabling browser scraping.
- Use least-privilege API keys (scoped, rate-limited) and rotate them after testing.
- Run in an isolated environment if you must enable the browser automation features.
Like a lobster shell, security has layers: review code before you run it.
Tags: exhibition · information · latest · news · review · search · tavily · tourism · travel
