Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Aliyun Sls Openclaw Integration

v1.0.0

Use when the user needs to integrate OpenClaw with Alibaba Cloud SLS/Observability, including collector setup, machine groups, indexes, dashboards, collectio...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cinience/aliyun-sls-openclaw-integration.

Prompt Preview: Install & Setup
Install the skill "Aliyun Sls Openclaw Integration" (cinience/aliyun-sls-openclaw-integration) from ClawHub.
Skill page: https://clawhub.ai/cinience/aliyun-sls-openclaw-integration
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: aliyun
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aliyun-sls-openclaw-integration

ClawHub CLI

Package manager switcher

npx clawhub@latest install aliyun-sls-openclaw-integration
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill's name and description (OpenClaw <> Alibaba SLS integration) align with its actions: installing the aliyun CLI, installing LoongCollector, and creating machine groups, indexes, dashboards, configs, and bindings. However, the registry metadata declares no required environment variables, while SKILL.md explicitly requires ALIBABA_CLOUD_ACCESS_KEY_ID, ALIBABA_CLOUD_ACCESS_KEY_SECRET, and ALIYUN_UID: a mismatch between the declared metadata and the runtime instructions.
Instruction Scope
Runtime instructions perform system-level changes (mkdir/touch under /etc/ilogtail, start services via /etc/init.d), download and run a remote installer, and create a collector config whose FilePaths target user home OpenClaw session files (/home/*/.openclaw/agents/main/sessions/*jsonl). That means local conversation/session logs would be ingested and sent to Alibaba Cloud SLS; this is a high-sensitivity data flow and should be explicitly consented to and validated before execution.
Install Mechanism
Although the download target is region-specific Alibaba OSS (aliyuncs.com) — an official host — the skill instructs downloading a remote shell script (loongcollector.sh) and executing it. There is no packaged install spec in the registry; executing remote install scripts poses a significant risk unless the script is inspected/verified first.
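That inspection step can be partly automated. The sketch below greps a downloaded installer for constructs that commonly warrant manual review; the pattern list is illustrative, not exhaustive, and `scan_installer` is a hypothetical helper name, not part of the skill.

```shell
# Sketch: flag constructs in a downloaded installer that deserve a close
# manual read before execution (nested downloads piped to a shell,
# base64 decoding, eval). A clean result is NOT a guarantee of safety.
scan_installer() {
  local script="$1"
  if grep -nE 'curl .*\| *(ba)?sh|base64 -d|eval ' "$script"; then
    echo "REVIEW: potentially risky constructs found in $script" >&2
    return 1
  fi
  echo "No obvious red flags in $script (manual review still recommended)"
}
```

Run it against loongcollector.sh after downloading and before executing; a nonzero exit means the script printed the matching lines for you to review.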
Credentials
The skill requires cloud credentials (ALIBABA_CLOUD_ACCESS_KEY_ID/SECRET) and sudo. Those are reasonable for creating cloud resources and installing system collectors, but these SKILL.md requirements are not reflected in the declared requires.env. More importantly, the collector configuration will read per-user OpenClaw session files from home directories and ship them to SLS, giving the supplied AK/SK access to potentially sensitive conversation data. Use least-privilege credentials and confirm that you accept uploading those files to Alibaba Cloud.
Persistence & Privilege
The skill writes system files under /etc/ilogtail, creates UID marker files, installs and starts system services, and creates persistent cloud resources (machine groups, dashboards, configs). While these behaviors are coherent with installing a log collector, they are privileged operations requiring sudo and permanent changes to the host and cloud account; proceed only on hosts and accounts where this is acceptable.
What to consider before installing
This skill will:

  1. Require and use your Alibaba AK/SK and sudo to install a collector
  2. Download and execute a remote loongcollector installer from Alibaba OSS
  3. Create system files under /etc/ilogtail and start services
  4. Configure the collector to read OpenClaw session files from users' home directories and send them to SLS dashboards and indexes

Before installing:

  • Verify that the registry metadata and SKILL.md match (the manifest omits required env vars)
  • Inspect loongcollector.sh from the referenced URL before running it
  • Review references/collector-config.json and references/index.json to confirm exactly which file paths and fields will be collected
  • Use a least-privilege AK/SK (preferably a test/project-scoped key) and never put high-privilege keys on long-lived hosts
  • Test in an isolated environment or non-production host first
  • If you do not want your OpenClaw session or other local files uploaded to Alibaba Cloud, do not run this skill, or modify the collector config to exclude sensitive paths

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📊 Clawdis
Bins: aliyun
latest: vk97b8x3axk631h15hpwq94fcc5842x13
82 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

OpenClaw SLS Integration

This skill provisions Alibaba Cloud SLS observability for OpenClaw on Linux and keeps reruns safe.

At a high level, execute this flow:

  1. Check and install aliyun CLI (install latest when missing)
  2. Install LoongCollector by project region (skip if already running)
  3. Create an identifier-based machine group (local identifier + cloud machine group)
  4. Create logstore index and dashboards
  5. Create logstore collection config
  6. Bind the collection config to the machine group

Capture Intent Before Execution

Before running commands, make sure the user intent is complete:

  1. Confirm the target PROJECT and LOGSTORE.
  2. Confirm Linux host access with sudo available.
  3. Confirm AK/SK are already exported in environment variables.
  4. If any required input is missing, ask for it first and do not run partial setup.
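The checks above can be sketched as a fail-fast preflight function. This is a minimal bash sketch, not part of the skill: `preflight` is a hypothetical helper, and the variable names follow the prerequisites below.

```shell
# Sketch: verify every required input is present before running anything,
# so a partial setup never starts. Uses bash indirect expansion ${!var}.
preflight() {
  local missing=0
  for var in PROJECT LOGSTORE ALIBABA_CLOUD_ACCESS_KEY_ID \
             ALIBABA_CLOUD_ACCESS_KEY_SECRET ALIYUN_UID; do
    if [ -z "${!var:-}" ]; then
      echo "Missing required input: $var" >&2
      missing=1
    fi
  done
  # sudo availability is only warned about here; the skill itself needs it.
  command -v sudo >/dev/null 2>&1 || echo "warning: sudo not found" >&2
  return "$missing"
}
```

Calling `preflight` before the main flow turns "ask for missing input first" into a single yes/no gate.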

Prerequisites

Required:

  • PROJECT: SLS project name
  • LOGSTORE: SLS logstore name

Read from environment variables:

  • ALIBABA_CLOUD_ACCESS_KEY_ID
  • ALIBABA_CLOUD_ACCESS_KEY_SECRET
  • ALIYUN_UID (used for the local UID file under /etc/ilogtail/users)

Recommended optional:

  • ALIBABA_CLOUD_REGION_ID (auto-resolved from PROJECT when not set)

If you use different AK/SK variable names, export them to these standard names first.
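For example, a minimal sketch of that mapping, where MY_AK_ID and MY_AK_SECRET are hypothetical stand-ins for whatever names you actually use; values already set under the standard names are kept.

```shell
# Sketch: map custom credential variable names to the standard names the
# skill reads. MY_AK_ID / MY_AK_SECRET are placeholder names; an existing
# standard-name value always wins over the fallback.
export ALIBABA_CLOUD_ACCESS_KEY_ID="${ALIBABA_CLOUD_ACCESS_KEY_ID:-${MY_AK_ID:-}}"
export ALIBABA_CLOUD_ACCESS_KEY_SECRET="${ALIBABA_CLOUD_ACCESS_KEY_SECRET:-${MY_AK_SECRET:-}}"
```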


Expected Result

After successful execution, the environment should contain:

  • Running LoongCollector (or ilogtaild) on the host
  • Machine group openclaw-sls-collector
  • Logstore index created on the target LOGSTORE
  • Dashboards openclaw-audit and openclaw-gateway
  • Collection config openclaw-audit_${LOGSTORE}
  • Config binding between openclaw-audit_${LOGSTORE} and openclaw-sls-collector

One-Time Execution Flow (Idempotent)

The commands below are designed as "exists -> skip" and are safe to rerun. Strict template mode: for index/config/dashboard payloads, always read from files in references/. Do not handcraft or simplify JSON bodies beyond required placeholder replacement.
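The strict-template rule can be enforced mechanically: render the placeholder and reject anything that no longer parses as JSON before sending it to the API. A minimal sketch, assuming python3 is on the host; `render_and_check` is a hypothetical helper, not part of the skill.

```shell
# Sketch: substitute only the allowed placeholder, then validate that the
# rendered payload is still well-formed JSON before any API call.
render_and_check() {
  local template="$1" out="$2"
  # Only the placeholder replacement the skill permits; no other edits.
  sed -e "s/\${logstoreName}/${LOGSTORE}/g" "$template" > "$out"
  # json.tool is in the Python standard library; swap in jq if preferred.
  python3 -m json.tool "$out" >/dev/null || {
    echo "Rendered template is not valid JSON: $out" >&2
    return 1
  }
}
```

A broken sed expression or a malformed template then fails loudly at render time instead of surfacing as a cryptic API error.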

set -euo pipefail

# ===== User inputs =====
: "${PROJECT:?Please export PROJECT}"
: "${LOGSTORE:?Please export LOGSTORE}"
: "${ALIBABA_CLOUD_ACCESS_KEY_ID:?Please export ALIBABA_CLOUD_ACCESS_KEY_ID}"
: "${ALIBABA_CLOUD_ACCESS_KEY_SECRET:?Please export ALIBABA_CLOUD_ACCESS_KEY_SECRET}"
: "${ALIYUN_UID:?Please export ALIYUN_UID}"

MACHINE_GROUP="openclaw-sls-collector"
CONFIG_NAME="openclaw-audit_${LOGSTORE}"

# 1) Install aliyun CLI if missing (Linux)
if ! command -v aliyun >/dev/null 2>&1; then
  if command -v apt-get >/dev/null 2>&1; then
    sudo apt-get update
    sudo apt-get install -y aliyun-cli
  elif command -v dnf >/dev/null 2>&1; then
    sudo dnf install -y aliyun-cli
  elif command -v yum >/dev/null 2>&1; then
    sudo yum install -y aliyun-cli
  elif command -v zypper >/dev/null 2>&1; then
    sudo zypper -n install aliyun-cli
  else
    echo "aliyun CLI not found. Install aliyun-cli manually for your Linux distribution." >&2
    exit 1
  fi
fi

# Export auth variables for aliyun CLI
export ALIBABA_CLOUD_ACCESS_KEY_ID
export ALIBABA_CLOUD_ACCESS_KEY_SECRET

is_loong_running() {
  if sudo /etc/init.d/loongcollectord status 2>/dev/null | grep -qi "running"; then
    return 0
  fi
  if sudo /etc/init.d/ilogtaild status 2>/dev/null | grep -qi "running"; then
    return 0
  fi
  return 1
}

# 2) Resolve region and install LoongCollector (skip when already running)
REGION_ID="${ALIBABA_CLOUD_REGION_ID:-}"
if [ -z "$REGION_ID" ]; then
  REGION_ID="$(aliyun sls GetProject --project "$PROJECT" --cli-query 'region' --quiet 2>/dev/null | tr -d '"' || true)"
fi
if [ -z "$REGION_ID" ]; then
  echo "Cannot resolve region from project: $PROJECT. Please set ALIBABA_CLOUD_REGION_ID." >&2
  exit 1
fi

if ! is_loong_running; then
  wget "https://aliyun-observability-release-${REGION_ID}.oss-${REGION_ID}.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh" -O loongcollector.sh
  chmod +x loongcollector.sh
  # The installer writes under /etc and manages services, so run it with sudo.
  sudo ./loongcollector.sh install "${REGION_ID}"
fi

# Post-install verification: one of loongcollectord/ilogtaild must be running.
if ! is_loong_running; then
  sudo /etc/init.d/loongcollectord start >/dev/null 2>&1 || true
  sudo /etc/init.d/ilogtaild start >/dev/null 2>&1 || true
fi
if ! is_loong_running; then
  echo "LoongCollector installation check failed: neither loongcollectord nor ilogtaild is running." >&2
  exit 1
fi

# 3) Local user-defined identifier + create machine group
sudo mkdir -p /etc/ilogtail/users  # -p creates /etc/ilogtail as well
if [ ! -f /etc/ilogtail/user_defined_id ]; then
  sudo touch /etc/ilogtail/user_defined_id
fi
# Bounded read: an unbounded `tr </dev/urandom | head` pipeline dies with
# SIGPIPE under `set -o pipefail`, aborting the script via `set -e`.
RAND8="$(head -c 512 /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | head -c 8)"
USER_DEFINED_ID_PREFIX="${PROJECT}_openclaw_sls_collector_"
EXISTING_USER_DEFINED_ID="$(sudo awk -v p="${USER_DEFINED_ID_PREFIX}" 'index($0,p)==1 {print; exit}' /etc/ilogtail/user_defined_id 2>/dev/null || true)"
if [ -n "${EXISTING_USER_DEFINED_ID}" ]; then
  USER_DEFINED_ID="${EXISTING_USER_DEFINED_ID}"
else
  USER_DEFINED_ID="${USER_DEFINED_ID_PREFIX}${RAND8}"
  echo "${USER_DEFINED_ID}" | sudo tee -a /etc/ilogtail/user_defined_id >/dev/null
fi
if ! sudo grep -Fxq "${USER_DEFINED_ID}" /etc/ilogtail/user_defined_id 2>/dev/null; then
  echo "Failed to persist USER_DEFINED_ID to /etc/ilogtail/user_defined_id" >&2
  exit 1
fi
if [ ! -f "/etc/ilogtail/users/${ALIYUN_UID}" ]; then
  sudo touch "/etc/ilogtail/users/${ALIYUN_UID}"
fi
if [ ! -f "/etc/ilogtail/users/${ALIYUN_UID}" ]; then
  echo "Failed to create UID marker file: /etc/ilogtail/users/${ALIYUN_UID}" >&2
  exit 1
fi

if ! aliyun sls GetMachineGroup --project "$PROJECT" --machineGroup "$MACHINE_GROUP" >/dev/null 2>&1; then
  cat > /tmp/openclaw-machine-group.json <<EOF
{
  "groupName": "${MACHINE_GROUP}",
  "groupType": "",
  "machineIdentifyType": "userdefined",
  "machineList": ["${USER_DEFINED_ID}"]
}
EOF
  aliyun sls CreateMachineGroup \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-machine-group.json)"
fi
if ! aliyun sls GetMachineGroup --project "$PROJECT" --machineGroup "$MACHINE_GROUP" >/dev/null 2>&1; then
  echo "Machine group was not created successfully: ${MACHINE_GROUP}" >&2
  exit 1
fi

# 4) Create logstore (if missing) + index + multiple dashboards
if ! aliyun sls GetLogStore --project "$PROJECT" --logstore "$LOGSTORE" >/dev/null 2>&1; then
  aliyun sls CreateLogStore --project "$PROJECT" \
    --body "{\"logstoreName\":\"${LOGSTORE}\",\"ttl\":30,\"shardCount\":2}"
fi

if ! aliyun sls GetIndex --project "$PROJECT" --logstore "$LOGSTORE" >/dev/null 2>&1; then
  # Use the index template as-is from references/index.json
  aliyun sls CreateIndex \
    --project "$PROJECT" \
    --logstore "$LOGSTORE" \
    --body "$(cat references/index.json)"
fi

sed "s/\${logstoreName}/${LOGSTORE}/g" references/dashboard-audit.json > /tmp/openclaw-audit-dashboard.json
sed "s/\${logstoreName}/${LOGSTORE}/g" references/dashboard-gateway.json > /tmp/openclaw-gateway-dashboard.json

# Create dashboard uses project + body(detail). Update uses path + project + body.
if aliyun sls GET "/dashboards/openclaw-audit" --project "$PROJECT" >/dev/null 2>&1; then
  aliyun sls PUT "/dashboards/openclaw-audit" \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-audit-dashboard.json)"
else
  aliyun sls POST "/dashboards" \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-audit-dashboard.json)"
fi

if aliyun sls GET "/dashboards/openclaw-gateway" --project "$PROJECT" >/dev/null 2>&1; then
  aliyun sls PUT "/dashboards/openclaw-gateway" \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-gateway-dashboard.json)"
else
  aliyun sls POST "/dashboards" \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-gateway-dashboard.json)"
fi

# 5) Create collection config (update when already exists)
# Render collector config strictly from references/collector-config.json
sed \
  -e "s/\${configName}/${CONFIG_NAME}/g" \
  -e "s/\${logstoreName}/${LOGSTORE}/g" \
  -e "s/\${region_id}/${REGION_ID}/g" \
  references/collector-config.json > /tmp/openclaw-collector-config.json

if aliyun sls GetConfig --project "$PROJECT" --configName "$CONFIG_NAME" >/dev/null 2>&1; then
  aliyun sls UpdateConfig \
    --project "$PROJECT" \
    --configName "$CONFIG_NAME" \
    --body "$(cat /tmp/openclaw-collector-config.json)"
else
  aliyun sls CreateConfig \
    --project "$PROJECT" \
    --body "$(cat /tmp/openclaw-collector-config.json)"
fi

# 6) Bind collection config to machine group
aliyun sls ApplyConfigToMachineGroup \
  --project "$PROJECT" \
  --machineGroup "$MACHINE_GROUP" \
  --configName "$CONFIG_NAME"

echo "OpenClaw SLS observability setup completed."

Response Format

When this skill completes, return a concise status report with:

  1. Inputs used: PROJECT, LOGSTORE, resolved REGION_ID
  2. Created/updated resources (machine group, index, dashboards, config, binding)
  3. Any skipped steps (already existed / already running)
  4. Next verification commands for the user

Verification Commands

aliyun sls GetMachineGroup --project "$PROJECT" --machineGroup openclaw-sls-collector
aliyun sls GetIndex --project "$PROJECT" --logstore "$LOGSTORE"
aliyun sls GetDashboard --project "$PROJECT" --dashboardName openclaw-audit
aliyun sls GetDashboard --project "$PROJECT" --dashboardName openclaw-gateway
aliyun sls GetConfig --project "$PROJECT" --configName "openclaw-audit_${LOGSTORE}"

Reference Files

  • Command flow: references/cli-commands.md
  • Index definition: references/index.json
  • Dashboard templates: references/dashboard-audit.json, references/dashboard-gateway.json
  • Collection config template: references/collector-config.json

Read reference files only when needed:

  • Use cli-commands.md for step-by-step troubleshooting.
  • Use JSON templates when creating/updating resources.
