Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Platform Docs Benchmark

v1.0.0

Use when benchmarking similar product documentation and API documentation across Alibaba Cloud, AWS, Azure, GCP, Tencent Cloud, Volcano Engine, and Huawei Cl...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cinience/aliyun-platform-docs-benchmark.

Prompt preview: Install & Setup
Install the skill "Aliyun Platform Docs Benchmark" (cinience/aliyun-platform-docs-benchmark) from ClawHub.
Skill page: https://clawhub.ai/cinience/aliyun-platform-docs-benchmark
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aliyun-platform-docs-benchmark

ClawHub CLI


npx clawhub@latest install aliyun-platform-docs-benchmark
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (multi-cloud docs/API benchmark) align with the included code and resources (discovery, scoring, presets). Requesting Alibaba Cloud metadata as an optional enrichment is plausible for an 'Aliyun'-branded benchmark, but the skill metadata declares no required env vars while the runtime instructions explicitly request ALICLOUD_* credentials: an inconsistency.
Instruction Scope
SKILL.md instructs the agent to run a local Python script that performs web discovery and API calls and to 'configure least-privilege Alibaba Cloud credentials' and include region/resource id/time range in evidence. It also warns to ask the user before running mutating operations, which implies possible non-read-only interactions. The instructions therefore reach beyond pure passive scraping and could touch cloud APIs or collect sensitive parameters; those behaviors are not fully declared or scoped in the manifest.
Install Mechanism
No install spec; skill is instruction + Python script relying on standard library (urllib, json, re). No third-party packages or external binary downloads were declared, which is proportional and lower-risk.
Credentials
The manifest lists no required environment variables, but SKILL.md asks for ALICLOUD_ACCESS_KEY_ID / ALICLOUD_ACCESS_KEY_SECRET (and optional region). Requesting cloud credentials (even 'least-privilege') is sensitive and should be declared in requires.env; currently the credential request is undocumented in the metadata and therefore disproportionate/untracked.
Persistence & Privilege
always:false and no install hooks were declared. The skill does write output artifacts to a local output/ directory per its instructions, which is expected for a benchmarking script. It does not request permanent platform-level privileges in the manifest.
Scan Findings in Context
[pre-scan-none] unexpected: Static pre-scan found no flagged patterns, but the SKILL.md requests Alibaba Cloud credentials and mentions potential mutating operations; the lack of pre-scan findings does not mitigate the manifest/instruction mismatch.
What to consider before installing
This skill appears to implement a reasonable multi-cloud docs benchmarking tool, but check two things before running it:

  1. SKILL.md asks you to provide Alibaba Cloud credentials (ALICLOUD_ACCESS_KEY_ID / ALICLOUD_ACCESS_KEY_SECRET) and an optional region, yet the skill manifest declares no required environment variables. Ask the author to declare these env vars explicitly in the skill metadata and to explain exactly which API calls require them.
  2. Inspect the Python script for any operations that modify cloud resources or send data to external endpoints. It performs web/API discovery and writes an output directory; confirm it does not call any mutating APIs or post sensitive evidence elsewhere.

If you must run it, use isolated, least-privilege test credentials rather than production secrets; prefer providing pinned official links instead of credentials when possible; and run the script in an isolated environment so you can review the generated output before sharing it.

Like a lobster shell, security has layers — review code before you run it.

  • latest: vk971j9d87mna9wkbt11rmzeprn842z6b
  • 77 downloads
  • 0 stars
  • 1 version
  • Updated 3w ago
  • v1.0.0
  • License: MIT-0

Multi-Cloud Product Docs/API Benchmark

Use this skill when the user wants cross-cloud documentation/API comparison for similar products.

Supported clouds

  • Alibaba Cloud
  • AWS
  • Azure
  • GCP
  • Tencent Cloud
  • Volcano Engine
  • Huawei Cloud

Data source policy

  • L0 (highest): user-pinned official links via --<provider>-links
  • L1: machine-readable official metadata/source
    • GCP: Discovery API
    • AWS: API Models repository
    • Azure: REST API Specs repository
  • L2: official-domain constrained web discovery fallback
  • L3: insufficient discovery (low confidence)
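The L0–L3 tier policy above can be sketched as a simple priority check per provider. This is an illustrative sketch only: the function and field names are assumptions, not the skill's actual API.

```python
# Hypothetical sketch of the L0-L3 source-tier policy. Providers with
# machine-readable official metadata (GCP Discovery API, AWS API Models,
# Azure REST API Specs) qualify for L1.
MACHINE_READABLE = {"gcp", "aws", "azure"}

def select_tier(provider: str, pinned_links: list[str], discovered: list[str]) -> str:
    """Pick the highest-confidence data source tier for one provider."""
    if pinned_links:                  # L0: user-pinned official links always win
        return "L0"
    if provider in MACHINE_READABLE:  # L1: machine-readable official metadata
        return "L1"
    if discovered:                    # L2: official-domain web discovery fallback
        return "L2"
    return "L3"                       # L3: insufficient discovery, low confidence
```

The ordering matters: pinned links override everything, so a user can always force the benchmark onto known-authoritative pages.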

Workflow

Run the benchmark script:

python skills/platform/docs/aliyun-platform-docs-benchmark/scripts/benchmark_multicloud_docs_api.py --product "<product keyword>"

Example:

python skills/platform/docs/aliyun-platform-docs-benchmark/scripts/benchmark_multicloud_docs_api.py --product "serverless"

LLM platform benchmark example (Bailian/Bedrock/Azure OpenAI/Vertex AI/Hunyuan/Ark/Pangu):

python skills/platform/docs/aliyun-platform-docs-benchmark/scripts/benchmark_multicloud_docs_api.py --product "Bailian" --preset "llm-platform"

If --preset is omitted, the script attempts to auto-match a preset based on the product keyword.

Scoring weights can be switched by profile (see references/scoring.json):

python skills/platform/docs/aliyun-platform-docs-benchmark/scripts/benchmark_multicloud_docs_api.py --product "Bailian" --preset "llm-platform" --scoring-profile "llm-platform"

Optional: pin authoritative links

Auto-discovery may miss pages. For stricter comparison, pass official links manually:

python skills/platform/docs/aliyun-platform-docs-benchmark/scripts/benchmark_multicloud_docs_api.py \
  --product "object storage" \
  --aws-links "https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html" \
  --azure-links "https://learn.microsoft.com/azure/storage/blobs/"

Available manual flags:

  • --alicloud-links
  • --aws-links
  • --azure-links
  • --gcp-links
  • --tencent-links
  • --volcengine-links
  • --huawei-links

Each flag accepts comma-separated URLs.
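The per-provider link flags could be wired up with argparse as below. The flag names match the list above; the parser structure itself is an illustrative sketch, not the script's actual code.

```python
# Sketch of parsing the --<provider>-links flags, each accepting a
# comma-separated URL list that is split into a Python list.
import argparse

PROVIDERS = ["alicloud", "aws", "azure", "gcp", "tencent", "volcengine", "huawei"]

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Multi-cloud docs/API benchmark")
    parser.add_argument("--product", required=True)
    for p in PROVIDERS:
        parser.add_argument(
            f"--{p}-links",
            type=lambda s: [u.strip() for u in s.split(",") if u.strip()],
            default=[],
        )
    return parser
```

argparse converts the dashed flag to an underscored attribute, so `--aws-links` becomes `args.aws_links`.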

Output policy

All artifacts must be written under:

output/aliyun-platform-docs-benchmark/

Per run:

  • benchmark_evidence.json
  • benchmark_report.md
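The output contract above can be sketched as a small helper that creates the directory and writes both artifacts. The evidence schema shown is assumed; only the output path and filenames come from the policy above.

```python
# Minimal sketch of the output policy: both per-run artifacts land under
# output/aliyun-platform-docs-benchmark/.
import json
from pathlib import Path

OUTPUT_DIR = Path("output/aliyun-platform-docs-benchmark")

def write_artifacts(evidence: dict, report_md: str) -> None:
    """Write benchmark_evidence.json and benchmark_report.md for one run."""
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    (OUTPUT_DIR / "benchmark_evidence.json").write_text(
        json.dumps(evidence, indent=2, ensure_ascii=False)
    )
    (OUTPUT_DIR / "benchmark_report.md").write_text(report_md)
```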

Reporting guidance

When answering the user:

  1. Show score ranking across all providers.
  2. Highlight top gaps (P0/P1/P2) and concrete fix actions.
  3. If discovery confidence is low, ask user to provide pinned links and rerun.
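Step 1 of the guidance above amounts to sorting providers by total score. A minimal sketch, assuming scores are already computed as numbers per provider:

```python
# Rank providers best-first for the report. Score values are examples;
# the real scoring comes from references/scoring.json profiles.
def rank_providers(scores: dict[str, float]) -> list[tuple[str, float]]:
    """Return (provider, score) pairs sorted by score, descending."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Python's sort is stable, so providers with tied scores keep their original order.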

Validation

mkdir -p output/aliyun-platform-docs-benchmark
for f in skills/platform/docs/aliyun-platform-docs-benchmark/scripts/*.py; do
  python3 -m py_compile "$f"
done
echo "py_compile_ok" > output/aliyun-platform-docs-benchmark/validate.txt

Pass criteria: command exits 0 and output/aliyun-platform-docs-benchmark/validate.txt is generated.

Output And Evidence

  • Save artifacts, command outputs, and API response summaries under output/aliyun-platform-docs-benchmark/.
  • Include key parameters (region/resource id/time range) in evidence files for reproducibility.

Prerequisites

  • Configure least-privilege Alibaba Cloud credentials before execution.
  • Prefer environment variables: ALICLOUD_ACCESS_KEY_ID, ALICLOUD_ACCESS_KEY_SECRET, optional ALICLOUD_REGION_ID.
  • If region is unclear, ask the user before running mutating operations.

References

  • Rubric: references/review-rubric.md
