Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Lingxi
v0.1.5 · Lingxi v3.3.6 - Intelligent orchestration system. "When hearts are in tune, a hint is enough." Understands user intent, automatically dispatches models/skills/tools, orchestrates multi-step tasks, and summarizes results as feedback. Supports a multi-agent collaboration architecture.
⭐ 0 · 295 · 0 current · 0 all-time
by Scarlett_AI (@ai-scarlett)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)

Purpose & Capability
The human-readable description claims a full multi-agent orchestration system (models, dashboard, credentials, external integrations). However, the registry metadata lists no required env vars, no config paths, and no install spec. The SKILL.md and README instruct the user to clone a repo, run pip/Node installs, and place tokens in specific filesystem paths (~/.openclaw/workspace/.lingxi/dashboard_token.txt, ~/.github_token). That mismatch (a complex service but metadata saying 'no requirements') is incoherent.
Instruction Scope
Runtime instructions (SKILL.md / README) direct actions beyond a simple instruction-only skill: cloning repo, pip3 install --break-system-packages, creating token files, starting a web server that listens on a port, editing dashboard server code, and writing/reading local database files. The SKILL.md also references many scripts (orchestrator.py, dashboard server, executors) that are not present in the package manifest. The instructions therefore ask the agent/user to perform system-level operations and persistent configuration that are not represented in registry metadata.
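One way to check this mismatch concretely is to cross-reference script names mentioned in the docs against files actually shipped in the package. A minimal sketch (the script names echo those cited in this report; the package directory and regex are illustrative assumptions, not part of the skill):

```python
# Cross-check script names referenced in SKILL.md against the files
# actually present in the unpacked package directory.
import re
import tempfile
from pathlib import Path

def missing_scripts(skill_md_text: str, package_dir: Path) -> list[str]:
    """Script-like names referenced in the docs but absent from the package."""
    referenced = set(re.findall(r"\b[\w./-]+\.(?:py|js|sh)\b", skill_md_text))
    shipped = {p.name for p in package_dir.rglob("*") if p.is_file()}
    return sorted(n for n in referenced if Path(n).name not in shipped)

docs = "Start orchestrator.py, then run dashboard_server.py and executors.py."
with tempfile.TemporaryDirectory() as empty_pkg:  # stand-in for a docs-only package
    print(missing_scripts(docs, Path(empty_pkg)))
    # → ['dashboard_server.py', 'executors.py', 'orchestrator.py']
```

A non-empty result, as with this package, means the docs instruct the agent to run code that the package does not actually contain.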
Install Mechanism
No formal install spec is supplied, yet the README tells users to run package installs (pip3 install -r requirements.txt), launch servers, and modify files. Because no install mechanism is declared, the real install steps are informal and require running unverified commands that write files and install packages on the host. This increases risk and is inconsistent with an instruction-only metadata entry.
Credentials
The registry claims no required env vars or credentials, yet the README/SKILL.md state that optional features require several: a dashboard token saved to ~/.openclaw/workspace/.lingxi/dashboard_token.txt, a GitHub token stored at ~/.github_token, cloud/model API keys, and various platform bot tokens. Needing multiple unrelated secrets (GitHub, cloud model keys, chat platform tokens) is plausible for the full feature set, but the metadata declares none of them, and the docs instruct writing tokens to disk without least-privilege guidance. That combination is disproportionate and opaque.
Persistence & Privilege
The skill does not request always:true and does not declare autonomous-disable flags; however, its docs instruct creating persistent tokens and starting a dashboard server that persists data and listens on a network port. Persisting tokens and running a long-lived service increases blast radius if the code is untrusted, but no metadata claims elevated platform privileges. This is noteworthy but not sufficient alone to mark malicious.
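If token files like those the docs describe do end up on disk, their exposure can at least be audited before anything reads them. A hedged sketch (the paths come from the skill's README; this is an audit helper written for this review, not part of the skill):

```python
# Audit the token files the skill's docs instruct users to create:
# report whether each exists and whether its permissions leak it
# to group/other users.
import stat
from pathlib import Path

TOKEN_PATHS = [
    Path.home() / ".openclaw/workspace/.lingxi/dashboard_token.txt",
    Path.home() / ".github_token",
]

def audit_token_file(path: Path) -> str:
    if not path.exists():
        return f"{path}: absent (nothing persisted yet)"
    mode = stat.S_IMODE(path.stat().st_mode)
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        return f"{path}: readable by group/others (mode {oct(mode)}) - tighten to 0600"
    return f"{path}: present, owner-only (mode {oct(mode)})"

for p in TOKEN_PATHS:
    print(audit_token_file(p))
```

Owner-only permissions limit local exposure, but they do nothing against the larger risk here: any process the skill starts with your privileges can still read the tokens.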
What to consider before installing
This package's docs describe a powerful orchestration system that needs secrets, installs packages, writes token files, and starts a web dashboard, but the registry metadata doesn't list any of those requirements. Before installing or running it:
- Treat the code as untrusted until you can verify the referenced scripts actually exist and match the docs (the manifest here contains only docs, none of the code files mentioned in SKILL.md).
- Do not place real credentials (GitHub token, cloud API keys, platform bot tokens) into the paths the README suggests until you confirm the code that will read them and you trust its origin.
- Ask the publisher to provide accurate metadata: a clear install spec, exact required env vars/config paths, and a signed repository or official release URL.
- Prefer testing in an isolated sandbox/VM with no access to sensitive networks/secrets; if you must run on a host, use a throwaway account and no production credentials.
- Verify the GitHub repository (https://github.com/AI-Scarlett/lingxi-ai) and the presence and integrity of the scripts (orchestrator, executors, dashboard) before giving the skill filesystem/network access.
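The integrity check in the last point can be as simple as hashing the downloaded archive against a checksum the publisher would provide. A minimal sketch, assuming the publisher eventually supplies a checksum (none is published today, which is part of the problem this report describes; the filename below is a placeholder):

```python
# Verify a downloaded release archive against a publisher-provided
# SHA-256 checksum before executing anything inside it.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_archive(path: Path, expected_hex: str) -> bool:
    return sha256_of(path) == expected_hex.lower()

# Usage (hypothetical values):
# ok = verify_archive(Path("lingxi-ai.zip"), "<checksum from a signed release>")
```

A matching hash only proves you got the bytes the publisher intended; it says nothing about whether those bytes are safe, so sandboxed review is still required.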
Given the mismatches and the instructions that would write persistent tokens and open network services, proceed cautiously; the package is suspicious until these inconsistencies are clarified.
