AIRS Embodied Intelligence Industry Research Skills
Status: Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: airs-embodied-intelligence-research-skills
Version: 1.0.1

The skill bundle is a comprehensive research tool designed for the robotics industry to collect and analyze bidding data from the Tianyancha platform. It uses Puppeteer for browser automation, standard libraries for CSV/Excel processing (xlsx, csv-writer), and LLMs for structured data extraction. The code logic across files such as `src/crawl_bidding.js`, `src/extract_cases.js`, and `src/utils/llm.js` is transparent, well commented, and strictly follows the documented research workflow. There are no indicators of data exfiltration, unauthorized persistence, or malicious prompt injection; sensitive configuration such as API keys is handled via a local `settings.json` file that is excluded from the repository by design.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill can use the logged-in Tianyancha session and may consume LLM API quota while processing research data.
The skill explicitly requires an authenticated Tianyancha browser session and an LLM provider API key. This is disclosed and purpose-aligned, but it grants access under the user's accounts.
- A Tianyancha account and a logged-in browser session
- LLM provider: an OpenAI-compatible provider configured in the local `config/settings.json`, e.g. Moonshot
Use a dedicated browser profile or account where possible, keep `config/settings.json` private, and close the remote-debugging browser session after use.
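The scan does not publish the schema of `config/settings.json`; a plausible minimal sketch, assuming an OpenAI-compatible provider block (all field names here are hypothetical, not taken from the skill), might look like:

```json
{
  "llm": {
    "baseUrl": "https://api.moonshot.cn/v1",
    "apiKey": "sk-REPLACE_ME",
    "model": "moonshot-v1-8k"
  }
}
```

Whatever the real schema is, the point of keeping it out of the repository holds: the API key lives only on the local machine.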
While the debug port is open, the skill can automate the logged-in browser; other local processes may also be able to access that port.
The code attaches to Chrome through the local DevTools remote-debugging port. This is expected for the Tianyancha crawling workflow, but it is a powerful browser automation channel.
```js
puppeteer.connect({
  browserURL: 'http://127.0.0.1:9222',
  defaultViewport: null,
});
```

Run Chrome remote debugging only on a trusted local machine, avoid leaving the port open, and close that Chrome session when crawling is complete.
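One way to honor the "close the session after use" advice in code is to wrap every automation run so the DevTools connection is always released, even when the run fails. This wrapper is a sketch, not part of the skill; only the `disconnect()` call mirrors puppeteer-core's real browser API.

```javascript
// Run `fn` against a connected browser and always release the DevTools
// connection afterwards, even if `fn` throws. `connect` is any function
// returning a browser-like object with a disconnect() method
// (puppeteer.connect fits this shape).
async function withBrowser(connect, fn) {
  const browser = await connect();
  try {
    return await fn(browser);
  } finally {
    // Detach from Chrome; this does not close the user's windows.
    browser.disconnect();
  }
}

// Stub demonstration (no Chrome needed): record the call order.
const calls = [];
const stubConnect = async () => ({ disconnect: () => calls.push('disconnect') });
withBrowser(stubConnect, async () => calls.push('work'))
  .then(() => console.log(calls.join(','))); // prints "work,disconnect"
```

In the real workflow, `connect` would be `() => puppeteer.connect({ browserURL: 'http://127.0.0.1:9222' })`; `disconnect()` detaches the automation client without killing the logged-in session, so the user can then close Chrome themselves.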
Raw announcement text or user-provided research data may be processed by the configured LLM provider.
The extraction workflow sends local crawl/raw-content data to a configured LLM provider for structured extraction. This is central to the stated purpose, but it is an external data flow if the provider is remote.
Reads `data/bidding_records.csv` and `data/raw_content/*.md`, then calls the LLM to extract approval results, scenario classification, the demand-side organization, robot model, contract amount, deployment time, and case details.
Do not include confidential Excel files or private raw content unless your chosen LLM provider is approved for that data.
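Structured extraction typically asks the model to answer in JSON and then parses that answer defensively. The scan does not show the actual parsing in `src/utils/llm.js`, so the helper below is a hypothetical sketch of the usual post-processing step, not the skill's code.

````javascript
// Parse a JSON object out of an LLM reply that may wrap it in a
// markdown code fence or surrounding prose. Hypothetical helper.
function parseLlmJson(reply) {
  // Prefer the content of a ```json fence if one is present.
  const fenced = reply.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1] : reply;
  // Fall back to the first {...} span in the remaining text.
  const start = candidate.indexOf('{');
  const end = candidate.lastIndexOf('}');
  if (start === -1 || end <= start) {
    throw new Error('no JSON object found in LLM reply');
  }
  return JSON.parse(candidate.slice(start, end + 1));
}

const reply = 'Here is the result:\n```json\n{"robot_model": "X1", "amount": 120000}\n```';
console.log(parseLlmJson(reply).robot_model); // prints "X1"
````

Note that everything passed to such a function has already left the machine: the raw announcement text goes to the provider before any reply comes back, which is why the data-flow caveat above matters.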
Installing the skill fetches third-party Node packages, which carries ordinary dependency supply-chain risk.
The setup relies on npm packages for browser automation, spreadsheet parsing, CSV handling, and logging. This is normal for the skill, and a package-lock is included.
"dependencies": {
"puppeteer-core": "^24.0.0",
"xlsx": "^0.18.5",
"csv-writer": "^1.6.0",
"csv-parse": "^5.6.0",
"winston": "^3.17.0"
}Install from a trusted registry, keep the lockfile, and update dependencies through normal review.
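To make the dependency list concrete: csv-writer's core job in this pipeline is RFC 4180-style field quoting when bidding records are written out. The helper below is a hypothetical illustration of that behavior, not csv-writer's code (the real library also handles headers, encodings, and streams).

```javascript
// Quote a CSV field per RFC 4180: wrap in double quotes when the value
// contains a comma, double quote, or newline, doubling inner quotes.
function csvField(value) {
  const s = String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function csvRow(values) {
  return values.map(csvField).join(',');
}

console.log(csvRow(['优必选', 'Walker S1, humanoid', '1,200,000']));
// prints: 优必选,"Walker S1, humanoid","1,200,000"
```

Pulling this logic from a package rather than inlining it is the ordinary trade-off the finding describes: less code to review in the skill itself, in exchange for trusting the registry and the lockfile.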
