Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

GLM Swarm

v1.1.1

Lightweight-model parallel harness. Use only when swarm/parallel processing is explicitly requested, or when the AGENTS.md harness rules detect a complex task (3+ tool calls, 2+ independent subtasks). Never use it for simple Q&A, translation, summarization, or single tool calls.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mupengi-bot/glm-swarm.

Prompt Preview: Install & Setup
Install the skill "GLM Swarm" (mupengi-bot/glm-swarm) from ClawHub.
Skill page: https://clawhub.ai/mupengi-bot/glm-swarm
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install glm-swarm

ClawHub CLI


npx clawhub@latest install glm-swarm
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description (lightweight parallel model harness) match the included instructions and files: a planner initializes /tmp/swarm, workers use context packets and a shared scratchpad, and results are aggregated. No unrelated credentials, binaries, or external services are requested.
Instruction Scope
Runtime instructions require creating and manipulating /tmp/swarm, spawning subagents (sessions_spawn) and writing/reading worker files. The included cleanup.sh takes an arbitrary task-id and does rm -rf ${SWARM_BASE}/${1} without sanitization — a malicious or malformed task-id could enable path traversal and deletion outside /tmp/swarm. The SKILL.md forbids accessing ~/.secrets and system memory files, but those are only advisory and not enforced by the scripts.
Install Mechanism
No install spec; scripts are bundled in the skill and nothing is downloaded or executed from external URLs. This is low-risk from an installation code-fetch perspective.
Credentials
The skill requests no environment variables, no credentials, and no config paths. Its resource access (temporary /tmp directories and a 'memory/...' result path) is proportionate to a local orchestration tool.
Persistence & Privilege
always:false and no special OS restrictions. The skill can be invoked autonomously (disable-model-invocation:false) and spawns subagents; autonomous invocation combined with the unsafe cleanup behavior raises the blast radius if an agent were to pass untrusted input. This is not automatically malicious but worth caution.
What to consider before installing
This skill appears to implement the advertised swarm orchestration, but review and harden the included scripts before use. Specific recommendations:

  1. Do not run cleanup.sh or planner.sh as root; test in an isolated container or ephemeral VM.
  2. Add input validation/sanitization to planner.sh and cleanup.sh to reject ../ or absolute paths (or restrict TASK_ID to a safe whitelist/regex).
  3. Prefer task IDs generated by the planner rather than accepting arbitrary user-provided IDs.
  4. Confirm your platform enforces the SKILL.md prohibitions (workers must not be able to access ~/.secrets or edit MEMORY.md).
  5. If you allow autonomous agent invocation, limit its scope or monitor runs until you are confident the sanitization and access controls are correct.
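The TASK_ID whitelist from recommendation 2 can be sketched as a guard at the top of cleanup.sh. The SWARM_BASE default and the exact regex here are assumptions for illustration, not taken from the bundled script:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed base directory; the bundled script may define it differently.
SWARM_BASE="${SWARM_BASE:-/tmp/swarm}"

# Whitelist: letters, digits, hyphens, underscores only.
# Rejects '../' traversal, absolute paths, and empty input outright.
validate_task_id() {
  [[ "${1:-}" =~ ^[A-Za-z0-9_-]+$ ]]
}

safe_cleanup() {
  local task_id="$1"
  if ! validate_task_id "$task_id"; then
    echo "cleanup: refusing unsafe task id: ${task_id}" >&2
    return 1
  fi
  # ':?' aborts if SWARM_BASE is ever unset, so this cannot expand to 'rm -rf /'.
  rm -rf -- "${SWARM_BASE:?}/${task_id}"
}
```

A whitelist is preferable to blacklisting `../`, because it also rejects anything unexpected (spaces, globs, NUL-adjacent tricks) by default.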

Like a lobster shell, security has layers — review code before you run it.

latest: vk973zp32menqrm04m8heeerscs84v0fb
90 downloads
0 stars
4 versions
Updated 2w ago
v1.1.1
MIT-0

GLM Swarm

Lightweight-model parallel harness for complex tasks. See references/patterns.md for detailed patterns and references/context-packet.md for Context Packet rules.

Complexity gate (required)

Always check before entering swarm mode:

  • Fewer than 2 independent subtasks → handle directly (no swarm)
  • Fewer than 3 tool calls → handle directly
  • Both thresholds met (2+ subtasks and 3+ tool calls) → swarm mode
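As a minimal sketch, the gate reduces to a single conjunction. The function name and argument order are illustrative only, not part of the skill:

```shell
#!/usr/bin/env bash
# should_swarm <independent_subtasks> <expected_tool_calls>
# Prints "swarm" only when both thresholds are met; otherwise "direct".
should_swarm() {
  local subtasks="$1" tool_calls="$2"
  if (( subtasks >= 2 && tool_calls >= 3 )); then
    echo "swarm"
  else
    echo "direct"
  fi
}
```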

Execution flow

0. bash scripts/planner.sh {task-id} → initialize /tmp/swarm/{task-id}/
1. Planner (runs once): pattern matching (A~E) → decompose into atomic tasks
2. Context Packet: compress minimal per-worker context (state the 300-token output limit)
3. Worker Pool: sessions_spawn in parallel (max 6, lightContext: true)
4. Shared Scratchpad: /tmp/swarm/{id}/shared.md
5. Aggregator: structured merge (per-section grouping + 3 key insights + action items)
6. bash scripts/cleanup.sh {task-id} → delete /tmp/swarm/{id}/
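Step 0 amounts to a short init script. The exact workspace layout below (a workers/ subdirectory next to shared.md) is an assumption, since the bundled planner.sh is not shown here:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of planner.sh initialization: create the swarm workspace.
# The workers/ subdirectory is an assumed layout, not confirmed by the skill.
init_swarm() {
  local task_id="${1:?usage: init_swarm <task-id>}"
  local base="/tmp/swarm/${task_id}"
  mkdir -p "${base}/workers"
  : > "${base}/shared.md"   # empty shared scratchpad (step 4 of the flow)
  echo "${base}"
}
```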

Pattern summary

Pattern | Trigger | Structure
A | investigate → decide → execute | t1∥t2 → t3 → t4
B | multi-source gather → synthesize | t1∥t2∥t3∥t4 → t5
C | N repeated items in parallel | t1∥...∥tN → concat
D | analyze → propose | t1∥t2 → t3 → t4
E | verify → fix → deploy | sequential (avoids file conflicts)

If no pattern matches, decompose dynamically. Details: references/patterns.md

Worker spawn

sessions_spawn({
  task: "{context_packet}\nKeep your output within 300 tokens.",
  mode: "run", runtime: "subagent",
  lightContext: true,
  label: "swarm-{task-id}-{worker-id}"
})

Planner init: bash scripts/planner.sh {task-id} · Cleanup: bash scripts/cleanup.sh {task-id}|--all|--list

Safety

  • Workers: do not edit MEMORY/SOUL/AGENTS.md; do not access ~/.secrets/
  • External actions → request approval after the Aggregator step
  • 2 failures → skip that task; 50%+ failures → abort the swarm
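The swarm-wide abort rule above can be sketched as a small check; the per-task skip (after 2 failed attempts) is assumed to be handled separately, and this function's interface is illustrative only:

```shell
#!/usr/bin/env bash
# swarm_health <failed_tasks> <total_tasks>
# Implements the 50%+ abort rule: abort when failed/total >= 1/2,
# using integer arithmetic to avoid floating point in shell.
swarm_health() {
  local failed="$1" total="$2"
  if (( failed * 2 >= total )); then
    echo "abort"
  else
    echo "continue"
  fi
}
```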

Aggregator rules

Always include when merging results:

  1. Per-section grouping: organize results by topic
  2. 3 key insights: the most important findings
  3. Action items: immediately actionable next steps
  4. Attribution: state which worker result each conclusion is based on

Context optimization (v1.1)

Problem: injecting full worker output into the main session accumulates context. Solution: the Aggregator returns only a summary to the main session.

Aggregator output:
1. Main session: a 3~5-line summary (key findings + action items only)
2. Detailed results: saved to memory/swarm-results/{task-id}.md
3. The main session reads that file only when details are needed

Additional Aggregator skip conditions:

  • Combined worker output is 500 tokens or less → return in full (no summary needed)
  • Over 500 tokens → return a summary + save to file
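A rough sketch of the 500-token gate, assuming tokens are approximated as words × 1.3 (a common heuristic; the skill does not specify how tokens are counted):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Approximate token count of a file (heuristic: word count * 1.3).
estimate_tokens() {
  local words
  words=$(wc -w < "$1")
  echo $(( words * 13 / 10 ))
}

# Decide whether the Aggregator returns the full text or a summary plus
# a detail file under memory/swarm-results/ (per the convention above).
aggregate_mode() {
  local combined="$1"
  if (( $(estimate_tokens "$combined") <= 500 )); then
    echo "full"      # small enough: return verbatim to the main session
  else
    echo "summary"   # too big: summarize, save details to a file
  fi
}
```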
