Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Personal Guardian

v0.1.0

Personal Terminal Emergency Rescue Agent (PTERA). When the user activates a "safety moment" or the device automatically detects danger signals, the Agent is granted full autonomous decision-making authority and assumes by default that the user is unable to respond. With personal safety as the top priority, it executes saturation-style rescue: audio recording, location tracking, chained contact notification, autonomous calls to 120/110, and coordination with a drone emergency-response network.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
- VirusTotal: Pending
- OpenClaw: Suspicious (medium confidence)
⚠ Purpose & Capability
The name and description promise full-device autonomous rescue (recording, continuous location broadcast, automatic calls to 120/110, social posting, drone-network requests). The package contains Python modules implementing the decision and broadcast logic, but it declares no required environment variables, platform credentials, or config paths for telephony/SMS/social/drone APIs or device-sensor access. That is inconsistent: actually executing the claimed capabilities would require platform bindings and secrets (a SIM/modem or telephony API keys, social account tokens, drone-network endpoints, or OS-level sensor permissions).
⚠ Instruction Scope
SKILL.md and the code instruct the agent to take wide-ranging actions (read device sensors, record audio, locate the user, notify contacts, call authorities, broadcast via Bluetooth/Wi‑Fi, post to social media, and request drone aid). The code mostly contains simulated/stubbed channel implementations (returning 'pending' results) rather than concrete safe integrations, but the instructions explicitly give the agent 'full device data reading right' and 'completely autonomous decision-making' without detailing how user consent/OS-level permissions are obtained or enforced. The skill also creates local paths (e.g., .guardian/incidents/...) for recordings/logs. Overall the runtime scope is broad and under-specified.
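As an illustration of the "pending" stub pattern described above, a simulated channel might look roughly like the following sketch. The class and method names here (`StubSMSChannel`, `send`, `ChannelResult`) are hypothetical, not taken from the actual modules:

```python
from dataclasses import dataclass


@dataclass
class ChannelResult:
    channel: str
    status: str    # the stubs always report "pending"; a real adapter would report "sent"/"failed"
    detail: str = ""


class StubSMSChannel:
    """Simulated notification channel: records the request, performs no real I/O."""

    def send(self, contact: str, message: str) -> ChannelResult:
        # A real integration would call a telephony/SMS provider API here,
        # which would require the credentials the skill never declares.
        return ChannelResult(channel="sms", status="pending",
                             detail=f"would notify {contact}: {message}")


result = StubSMSChannel().send("+86-placeholder", "Guardian mode activated")
print(result.status)  # pending
```

Because every channel resolves to a "pending" result like this, the shipped code is safe by accident rather than by design; wiring in real adapters would change the risk profile immediately.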
Install Mechanism
There is no install spec (instruction-only + shipped Python scripts). Nothing is downloaded from external URLs or installed automatically, which lowers supply-chain risk. Files are included in the skill bundle; execution would run local Python scripts.
⚠ Credentials
The skill requests no environment variables or credentials, yet implements behavior that in practice requires secrets/authorizations (telephony/SMS providers, social media tokens, drone network endpoints). The code has configurable flags (e.g., auto_call_120_authorized, social_platforms) but these are set in local demos and not declared as required secure config. Hard-coded example contact phone numbers are present in DEFAULT_CONTACTS (placeholder +86 numbers). The absence of declared credential requirements while claiming external-network actions is a mismatch and a risk for silent misconfiguration or unintended behavior if wired into real platform APIs.
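For contrast, a deployment that really wires in external actions would normally declare and validate its secrets up front. A minimal fail-fast sketch, assuming illustrative variable names (none of these are declared by the skill):

```python
import os

# Illustrative secret names; the skill itself declares none of these.
REQUIRED_SECRETS = [
    "TELEPHONY_API_KEY",
    "SOCIAL_API_TOKEN",
    "DRONE_NETWORK_ENDPOINT",
]


def load_config() -> dict:
    """Fail fast when the secrets backing external actions are missing,
    rather than silently falling back to demo flags and placeholder contacts."""
    missing = [name for name in REQUIRED_SECRETS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"external channels disabled; missing secrets: {missing}")
    return {name: os.environ[name] for name in REQUIRED_SECRETS}
```

A declaration like this would let a reviewer see at a glance which external platforms the skill can reach; its absence is exactly the mismatch flagged above.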
Persistence & Privilege
always: false (good). disable-model-invocation is false (the agent can invoke the skill autonomously), which is expected for an autonomous guardian skill. However, autonomy combined with the skill's stated ability to act without user confirmation (when in Guardian mode) increases risk: if the platform grants sensor/telephony access at runtime, the skill could initiate cascaded external communications. The skill does not request to modify other skills or system-wide settings.
What to consider before installing
This skill is an alpha prototype for an autonomous emergency responder. Before installing, consider the following:

- Real capability gap: The skill claims phone calls, SMS, social posting, drone requests, and full-device sensor access, but does not declare where telephony/SMS/social/drone API keys or platform bindings will come from. Ask the author (or inspect the deployment integration) for the exact platform adapters and required credentials.
- Permissions and law: Recording and automatic transmission of audio/location can have legal and privacy implications; confirm how user consent is obtained and logged in your jurisdiction.
- Test in a sandbox: Run only in an isolated/test environment (disable real network/telephony) to verify behavior. The bundled code mostly simulates channels, but accidental configuration could enable real calls/posts.
- Check default contacts/config: Edit or remove the hard-coded example phone numbers and disable any auto-call/social/drone flags before use. Require explicit user authorization for auto-calling authorities (auto_call_120_authorized defaults appear in demo code).
- Limit autonomy if needed: If your platform allows it, disable autonomous invocation or require explicit user confirmation for L4+/L5 actions until you fully audit the integrations.
- Additional data that would change the assessment: If the author supplies clear integration adapters, documented required credentials, and explicit consent/permission flows (or restricts network calls to safe test endpoints), confidence would increase. Conversely, if runtime hooks to external endpoints or unlisted network calls are added, treat the skill as higher risk.
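The sandbox-hardening steps above can be sketched as a config override. The names auto_call_120_authorized, social_platforms, and DEFAULT_CONTACTS come from the scan notes; drone_network_enabled and dry_run are hypothetical additions, so verify the real flag names against the shipped code:

```python
# Sandbox override for auditing the skill: disable every external channel.
SANDBOX_CONFIG = {
    "auto_call_120_authorized": False,  # never auto-dial authorities during tests
    "social_platforms": [],             # no social posting
    "drone_network_enabled": False,     # hypothetical flag; verify the real name
    "dry_run": True,                    # hypothetical flag; log instead of acting
}

# Replace the shipped placeholder +86 numbers before any real deployment.
DEFAULT_CONTACTS = []
```

Applying an override like this before first run ensures that even a misconfigured or overly eager invocation can only write local logs, not contact anyone.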

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dfbx53mrd0ce97vkpxyy2ys83kyew

