Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Tonight Hotel

v3.2.0

Need a room right now? Find available hotels tonight at the lowest prices. Optimized for immediate check-in with real-time availability. Also supports: fligh...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The manifest/description mentions Fliggy (Alibaba) and many broad travel services (flights, visas, car rental), but the runtime SKILL.md describes only hotel searches via a CLI named `flyai`. That branding and scope mismatch is unexplained and suspicious: either the description is inaccurate or the skill is hiding additional behavior.
Instruction Scope
The instructions force the agent to install and call an external CLI (`@fly-ai/flyai-cli`) for every answer and explicitly forbid using any training data. The runbook also instructs the agent to log full user queries and CLI calls (including a snippet that appends logs to `.flyai-execution-log.json`), which means the skill may persist raw user inputs (potentially PII) to disk without declaring that behavior.
Install Mechanism
There is no formal install spec in the registry, but SKILL.md mandates running `npm i -g @fly-ai/flyai-cli`. Installing a global npm package from an unverified publisher is a moderate risk (network download and arbitrary code execution). The skill does not provide publisher/homepage links to verify the package origin.
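The install risk above can be reduced by inspecting the package before any global install. A minimal pre-check sketch follows; the report filename `flyai-precheck.txt` is hypothetical, and only the package name comes from SKILL.md:

```shell
# Pre-install check for the CLI the skill mandates. Nothing is installed here;
# we only query registry metadata. "flyai-precheck.txt" is a hypothetical
# local report file; the package name comes from the skill's SKILL.md.
PKG="@fly-ai/flyai-cli"
REPORT="flyai-precheck.txt"
{
  echo "package: $PKG"
  # Publisher, repo, and homepage metadata -- missing or mismatched fields
  # are a red flag for an unverified publisher.
  npm view "$PKG" version maintainers repository.url homepage 2>&1 || true
} > "$REPORT"
cat "$REPORT"
```

If the metadata looks legitimate, `npm pack` can then fetch the tarball for manual code review without running an install.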
Credentials
The skill declares no required environment variables or credentials, which seems fine for a CLI-based flow; however, the external CLI will likely require some form of authentication (API keys, account login) that the skill does not document. That missing explanation reduces transparency about what secrets might be needed or stored.
Persistence & Privilege
The runbook explicitly suggests appending an execution log to `.flyai-execution-log.json` if filesystem writes are available. The skill can therefore create persistent logs containing raw user queries and CLI outputs. This persistent local storage of user data is not declared in the skill metadata and is a privacy risk.
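For illustration, the undeclared logging behavior described above amounts to something like the following sketch. Only the file name `.flyai-execution-log.json` appears in SKILL.md; the entry shape and example query are assumptions:

```shell
# Hypothetical reconstruction of the runbook's logging pattern: the raw user
# query is appended verbatim to a local file, so any PII in the query is
# persisted to disk. Only the file name comes from SKILL.md; the entry
# shape and example query are assumptions.
LOG=".flyai-execution-log.json"
QUERY="hotel near JFK tonight for J. Doe"   # raw user input, may contain PII
printf '{"timestamp":"%s","query":"%s"}\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$QUERY" >> "$LOG"
cat "$LOG"
```

Because the file is appended to rather than rotated or redacted, queries accumulate indefinitely; that is the core of the privacy concern.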
What to consider before installing
Before installing or enabling this skill:

1) Verify the publisher of the CLI package `@fly-ai/flyai-cli` and inspect its code or its npm/GitHub page; don't install a global npm package from an unverified author.
2) Ask the skill author to explain the Fliggy/Alibaba mention versus the use of 'flyai', and to provide a homepage/publisher.
3) Confirm what authentication the CLI requires, where any credentials are stored, and whether the CLI phones home; the skill declares no required env vars, but the CLI may still need secrets.
4) Be aware that the skill's runbook suggests writing execution logs (including raw user queries) to `.flyai-execution-log.json`; if that is unacceptable, decline to enable the skill or run it in a sandboxed environment.
5) If you proceed, prefer running the CLI manually first (or in a disposable environment) to confirm its behavior and inspect network/filesystem activity.

If the author provides reputable package links and a clear privacy policy, re-evaluate; otherwise treat this skill as untrusted.
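A disposable trial run, as suggested above, can be sketched as follows. The CLI invocation (`flyai --help`) and the scratch-directory name are hypothetical; the inspection step runs regardless of whether the CLI is present:

```shell
# Trial-run sketch: confine the CLI to a scratch directory with a throwaway
# HOME, then list what it wrote. "flyai-sandbox" is a hypothetical directory
# name; the flyai invocation is a guess at a conventional --help flag.
SCRATCH="flyai-sandbox"
mkdir -p "$SCRATCH"
( cd "$SCRATCH" && HOME="$PWD" flyai --help 2>/dev/null \
    || echo "flyai not installed; dry run only" )
# Any files the CLI created (e.g. .flyai-execution-log.json) show up here:
ls -la "$SCRATCH" > flyai-sandbox-files.txt
cat flyai-sandbox-files.txt
```

This does not replace a real sandbox (container or VM with no network), but it makes undeclared file writes like the execution log immediately visible.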

Like a lobster shell, security has layers — review code before you run it.


