Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun AI Guardrail

Install and configure the Alibaba Cloud AI Guardrail openclaw hook, which intercepts malicious content in LLM requests using the Alibaba Cloud AI Guardrail service.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code implements an LLM request interceptor that calls Alibaba Cloud's Guardrail API, which matches the description, but the skill's registry metadata does not declare the required environment variables (ALIBABA_CLOUD_ACCESS_KEY_ID / ALIBABA_CLOUD_ACCESS_KEY_SECRET) even though SKILL.md instructs you to provide them. That mismatch between declared requirements and runtime needs is an inconsistency.
Instruction Scope
SKILL.md instructs copying the bundled hook, running npm install, and adding AK/SK to openclaw.json. The hook patches global fetch to inspect and potentially replace user messages in any outgoing request body containing a messages array — this is consistent with an LLM-guardrail purpose but broad in scope (it affects all fetch calls globally) and may impact other integrations or tools that also use fetch.
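The fetch-patching pattern described above can be sketched as follows. This is an illustrative reconstruction, not the actual handler.ts: all names (rewriteBody, the identity moderation step) are hypothetical, and the real hook calls the Guardrail API where the placeholder sits.

```typescript
// Rewrite a JSON request body if it contains a `messages` array,
// running each user message through a moderation step.
// Illustrative sketch only; names are not from handler.ts.
async function rewriteBody(
  body: string,
  moderate: (text: string) => Promise<string>,
): Promise<string> {
  let payload: any;
  try {
    payload = JSON.parse(body);
  } catch {
    return body; // non-JSON bodies pass through untouched
  }
  if (!Array.isArray(payload?.messages)) return body;
  for (const msg of payload.messages) {
    if (msg.role === "user" && typeof msg.content === "string") {
      msg.content = await moderate(msg.content);
    }
  }
  return JSON.stringify(payload);
}

// Patch global fetch so every outgoing request body is inspected.
// (The identity function stands in for the real Guardrail call.)
const originalFetch = globalThis.fetch;
(globalThis as any).fetch = async (input: any, init?: any) => {
  if (typeof init?.body === "string") {
    init = { ...init, body: await rewriteBody(init.body, async (t: string) => t) };
  }
  return originalFetch(input, init);
};
```

This is why the scope concern matters: once the assignment to globalThis.fetch runs, every caller in the process goes through the interceptor, not just LLM traffic.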
Install Mechanism
There is no formal install spec; installation is instruction-driven and runs npm install on the bundled hook package, which will fetch dependencies from npm (notably @alicloud/openapi-client). This is expected for a JS hook but carries typical supply-chain risk because npm packages are pulled at install time.
Credentials
The skill needs Alibaba Cloud AK/SK to call the Guardrail API — that is proportional to its function — but those secrets are not declared in the skill's registry metadata. The runtime instructions tell the user to write keys into openclaw.json (persisting secrets on disk), which may be acceptable but increases risk; the skill reads process.env directly, so stored keys will be used without prompting at runtime.
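Because the hook reads process.env directly, a missing or empty key would otherwise only surface at request time. A fail-fast guard at startup makes that explicit; the sketch below is ours, not part of the bundled hook:

```typescript
// Illustrative guard, not part of the shipped hook: fail fast if a
// credential the Guardrail call depends on is missing or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage at hook startup:
// const accessKeyId = requireEnv("ALIBABA_CLOUD_ACCESS_KEY_ID");
// const accessKeySecret = requireEnv("ALIBABA_CLOUD_ACCESS_KEY_SECRET");
```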
Persistence & Privilege
The hook's HOOK.md metadata sets the hook to run on agent:bootstrap with "always": true (i.e., always enabled). That gives it broad, persistent presence across agent sessions. Combined with global fetch patching and access to AK/SK, this elevates its impact if misconfigured or malicious.
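For reference, hook metadata with these properties would presumably look along the following lines. Only the agent:bootstrap event and the "always": true flag are stated above; every other field name here is an assumption, not the actual HOOK.md schema:

```json
{
  "name": "aliyun-ai-guardrail",
  "events": ["agent:bootstrap"],
  "always": true
}
```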
What to consider before installing
This skill appears to implement the described Alibaba Cloud guardrail, but there are a few things to consider before installing:

- Metadata mismatch: SKILL.md asks you to provide an Alibaba Cloud AccessKey ID/Secret, but the registry metadata does not declare those required environment variables. Verify you are comfortable adding long-lived AK/SK to openclaw.json.
- Secrets storage: The installation writes the AK/SK into openclaw.json on disk. Prefer short-lived credentials (STS) or scoped keys with minimal permissions if possible, and secure the file's permissions.
- Always-on hook: The hook is configured to be always enabled at agent bootstrap and patches global fetch, so it will inspect and modify outgoing requests across the agent runtime. Confirm you want a global interceptor enabled for all sessions.
- Supply-chain risk: npm install will fetch dependencies. Inspect package.json and the dependency (@alicloud/openapi-client), and consider installing in a sandbox or reviewing node_modules before enabling.
- Code review: If you intend to proceed, review the bundled handler.ts to confirm there are no hidden endpoints or unexpected exfiltration paths. The code appears to send content only to Alibaba's green-cip.cn-beijing.aliyuncs.com endpoint, but verify that behavior and confirm any logging is acceptable.

If unsure, test the hook in an isolated environment, use least-privilege or temporary credentials, and confirm the openclaw hook behavior (especially fetch patching and always-on semantics) meets your security requirements.
assets/aliyun-ai-guardrail/hooks/aliyun-ai-guardrail/handler.ts:8
Environment variable access combined with network send.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk972x9kkzd4jkxpr0jxqq5ca9h8354fe

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Aliyun AI Guardrail

An openclaw hook based on Alibaba Cloud AI Guardrail that intercepts LLM requests and detects malicious content.

Installation

Step 1: Install the hook

Copy the bundled hook directory to a temporary location and install:

TMPDIR=$(mktemp -d)
cp -r <skill_assets_dir>/aliyun-ai-guardrail "$TMPDIR/aliyun-ai-guardrail"
cd "$TMPDIR/aliyun-ai-guardrail" && npm install
openclaw hooks install "$TMPDIR/aliyun-ai-guardrail"

Replace <skill_assets_dir> with the absolute path to this skill's assets/ directory.

Step 2: Ask the user for the AK/SK

Ask the user for their Alibaba Cloud AccessKey ID and AccessKey Secret. These are required to call the Alibaba Cloud AI Guardrail API.

Step 3: Configure environment variables

After obtaining the AK/SK, edit the user's openclaw.json (typically at ~/.openclaw.json or the project root) to add the environment variables:

{
  "env": {
    "ALIBABA_CLOUD_ACCESS_KEY_ID": "<user-provided AK>",
    "ALIBABA_CLOUD_ACCESS_KEY_SECRET": "<user-provided SK>"
  }
}

If openclaw.json already has other configuration, merge the new entries without overwriting existing content.
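The merge can be scripted instead of hand-edited. A minimal Node sketch, assuming openclaw.json is plain JSON with an optional top-level env object as shown above; the helper name is ours:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Merge new environment entries into openclaw.json without
// overwriting any existing configuration. Hypothetical helper;
// path and key names follow the steps above.
function addEnvToConfig(
  configPath: string,
  env: Record<string, string>,
): void {
  const config: any = existsSync(configPath)
    ? JSON.parse(readFileSync(configPath, "utf8"))
    : {};
  config.env = { ...config.env, ...env };
  writeFileSync(configPath, JSON.stringify(config, null, 2) + "\n");
}

// Example (values are placeholders supplied by the user):
// addEnvToConfig(`${process.env.HOME}/.openclaw.json`, {
//   ALIBABA_CLOUD_ACCESS_KEY_ID: "<user-provided AK>",
//   ALIBABA_CLOUD_ACCESS_KEY_SECRET: "<user-provided SK>",
// });
```

Spreading the existing config.env first means previously configured variables survive the merge, matching the "merge without overwriting" instruction.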

Step 4: Done

Inform the user that the security guardrail is configured. Remind them to restart the Gateway. The hook will automatically load on openclaw agent startup and intercept LLM requests containing malicious content.

Files

6 total
