Skill v0.1.2

ClawScan security

Aixin · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 20, 2026, 10:01 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill largely does what it claims (AI-agent social chat), but there are several inconsistencies and privacy risks — notably plaintext password storage, a permission that allows reading the model/system prompt (which could be sent to the remote API), and conflicting backend addresses — so proceed with caution.
Guidance
Things to consider before installing:

- Code behavior: main.py stores your password and token in plaintext at ~/.aixin/profile.json to support auto-login. If you register or log in, that file will contain your credentials; ensure you are comfortable with this and protect the file.
- System prompt access: the skill requests permission to read the model/system prompt and may extract that text as the 'bio' sent to the remote API. Do not grant this permission if your system prompt contains secrets or sensitive policies.
- Conflicting endpoints: SKILL.md and the code default to https://aixin.chat, but the README mentions http://43.135.138.144/api. Ask the author which host is authoritative; do not override the server URL unless you trust the destination.
- Network trust: the skill makes real network calls for most actions. If you cannot verify the remote service or its operator, avoid entering real credentials or sensitive data.
- Mitigations: inspect or modify main.py before use (for example, remove password persistence or encrypt it, and restrict what is sent as 'bio'), set strict file permissions on ~/.aixin/profile.json, run the skill in a sandboxed environment or with network restrictions, and verify the service's TLS certificate and domain ownership if you plan to use it with real accounts.

If you want, I can highlight the exact lines in main.py that implement plaintext storage and auto-login and suggest safer code changes.
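The mitigation steps above can be sketched as a small hardening script. This is a hypothetical example, not code from the skill: the path ~/.aixin/profile.json comes from the report, but the field names "password" and "token" are assumptions about the file's layout and should be checked against main.py before use.

```python
import json
import stat
from pathlib import Path

# Hypothetical hardening helper for the profile file described in the report.
# Field names ("password", "token") are assumed; verify against main.py.
PROFILE = Path.home() / ".aixin" / "profile.json"

def harden_profile(profile_path: Path = PROFILE) -> None:
    """Strip the plaintext password and lock down file permissions."""
    if not profile_path.exists():
        return
    data = json.loads(profile_path.read_text())
    # Drop the stored password; the session token alone supports auto-login
    # until it expires, at which point the user re-enters the password.
    data.pop("password", None)
    profile_path.write_text(json.dumps(data))
    # Restrict the file to the owning user (mode 0600).
    profile_path.chmod(stat.S_IRUSR | stat.S_IWUSR)
```

Dropping the password (rather than encrypting it in place) is the simpler fix: a locally stored encryption key would be readable by the same attacker who can read the profile file.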

Review Dimensions

Purpose & Capability
note
The skill's name/description (AI-agent social/chat) align with the code and declared permissions (network, storage, send/receive messages). However, the skill requests the 'system_prompt_read' permission (declared in skill.json), which is more sensitive than typical chat skills and is only justifiable if the skill truly needs to extract a bio from the conversation/system prompt.
Instruction Scope
concern
SKILL.md instructs the agent to always perform real network calls to a single API host (https://aixin.chat/api) and to display real JSON responses. It also describes extracting a 'bio' from the conversation/system prompt. Because the skill may read the system prompt and then include that text in API calls (as the registration bio), its scope expands to potentially exfiltrating sensitive system or prompt contents. SKILL.md and the README also disagree on the 'correct' API base (SKILL.md and code: https://aixin.chat; README: http://43.135.138.144/api), an inconsistency that could route data to a different endpoint.
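A user who decides to run the skill anyway could guard against the endpoint mismatch with a simple allowlist check before any data is sent. This is a hypothetical sketch: "aixin.chat" is taken from SKILL.md and the code's default, and should be swapped out if the author confirms a different authoritative host.

```python
from urllib.parse import urlparse

# Host documented in SKILL.md and used as the code's default; an assumption
# until the author confirms which endpoint is authoritative.
TRUSTED_HOST = "aixin.chat"

def is_trusted_endpoint(base_url: str) -> bool:
    """Accept only HTTPS URLs whose hostname exactly matches the trusted host."""
    parsed = urlparse(base_url)
    return parsed.scheme == "https" and parsed.hostname == TRUSTED_HOST
```

Requiring HTTPS also rules out the README's http://43.135.138.144/api address, which would send credentials unencrypted even if the operator turned out to be legitimate.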
Install Mechanism
ok
No install spec or external archive downloads are present; the only dependency is the standard Python requests library (requirements.txt). The skill ships its source (main.py), and the README instructs pip install -r requirements.txt. No high-risk install URLs or extracted archives were found.
Credentials
concern
No required environment variables or external credentials are declared, which is proportionate. But the code persists sensitive data: it stores the password and token in plaintext under ~/.aixin/profile.json (LOCAL_STORE) to support auto-login. Combined with the ability to read the system prompt and to send arbitrary JSON to the remote API, this raises a real risk of credential or context exfiltration. The README's hardcoded IP (43.135.138.144) also conflicts with the documented domain and the code's default domain, which is suspicious.
Persistence & Privilege
note
The skill writes persistent state to the user's home directory (~/.aixin/profile.json) and requests storage permission in skill.json, which is consistent with its auto-login feature. 'always' is false (not force-included). Persisting the password in plaintext is concerning for privacy but is an expected implementation choice for auto-login; it is a design risk rather than outright malicious behavior.