Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Class Seven

v1.0.0

Multi-agent development team workflow skill. Use when coordinating complex development tasks requiring multiple specialized roles - PM, Architect, Developer,...

Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (multi-agent development workflow) aligns with the content: spawning PM/Architect/Developer/Tester/Debugger agents and orchestrating development phases. Examples and tool choices are coherent for a development orchestration skill.
Instruction Scope
SKILL.md instructs the agent to spawn sub-agents and to operate on code, logs, and local paths (e.g., identify_modules("./legacy-code"), <logs attached>, fetch_pr). That is reasonable for a dev workflow, but the documentation also instructs the user to install and run remote installers (PowerShell: irm https://... | iex), which directs execution of arbitrary remote code. This falls outside the narrow scope of workflow guidance and is a significant operational risk. The instructions are also somewhat vague about the environment the agent expects: what sessions_spawn and fetch_pr actually do, and what permissions they require.
Install Mechanism
Although the registry has no install spec, the included tools guide explicitly recommends running remote PowerShell install commands that pipe downloaded scripts into iex (irm https://claude.ai/install.ps1 | iex and irm https://code.kimi.com/install.ps1 | iex). Executing remote installer scripts via piping to a shell is a high-risk installation mechanism. One of the URLs (claude.ai) is a known vendor domain; the other (code.kimi.com) is not verifiable here. The skill should not recommend or assume running arbitrary remote installers without verification.
Credentials
The skill declares no required env vars or credentials, consistent with the registry metadata. However, the instructions reference per-user config files (~/.claude/settings.json and ~/.kimi/config.toml), advise setting system prompts, and assume external tooling that will likely require credentials or tokens at install or run time. Because no required credentials are declared, the skill does not make explicit what secrets or tools the operator must provide.
Persistence & Privilege
The "always" flag is false, and the skill does not request system-wide configuration changes beyond per-user tool config files. It does not claim to modify other skills or force its own inclusion. No persistence or privilege escalation is declared.
What to consider before installing
This skill appears to implement a reasonable multi-agent dev workflow, but review the following before installing or following its instructions:

  • Do not run remote installer scripts piped directly into a shell (irm | iex or curl | sh). That executes code fetched from the network with no local review. Use official package installers, verified releases, or review the script manually first.
  • Verify the installer domains (e.g., claude.ai is Anthropic's domain; confirm that code.kimi.com is the legitimate vendor). If you cannot confirm a domain, avoid installing that CLI.
  • The skill assumes the agent can read code, logs, and local project directories. Confirm what access your agent runtime grants and limit it to only the repositories and paths needed.
  • The SKILL.md references helper functions (sessions_spawn, fetch_pr, identify_modules). Confirm these are safe built-ins in your agent environment and understand their permissions and network behavior.
  • Because the skill declares no credentials but suggests tools that likely require API keys or tokens, prepare to provide credentials separately and audit where those tokens are stored and used.
  • If you plan to allow autonomous invocation, consider restricting it while you test the skill in a sandboxed environment and verify the toolchain and installer sources.

For a higher-confidence assessment, provide: (1) verified URLs for any recommended installers, (2) documentation for the sessions_spawn and fetch_pr runtime APIs, and (3) whether this agent will have direct filesystem or network access in your deployment. That information would raise confidence to high or allow targeted remediation steps.
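One safer pattern than piping a remote installer into a shell is to download the script to disk, compute its SHA-256 digest, and review the file before anyone runs it. A minimal sketch in Python (the function names are illustrative, and any URL you pass should be one you intend to audit, not blindly trust):

```python
import hashlib
import urllib.request


def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def fetch_for_review(url: str, dest: str) -> str:
    """Download an installer script to a local file for manual review.

    Returns the script's SHA-256 digest so it can be compared against a
    vendor-published checksum before the file is ever executed.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(dest, "wb") as fh:
        fh.write(data)
    return sha256_of(data)
```

After downloading, read the script and compare the digest with a checksum published by the vendor; only then run it, and only from the reviewed local copy.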


latest: vk972qheajd4n1zxks8nmnag0vh827e5z
318 downloads · 0 stars · 1 version
Updated 21h ago
v1.0.0 · MIT-0

Class Seven - Multi-Agent Development Team

Class Seven (七班) is a structured multi-agent workflow that treats sub-agents as specialized development team members.

When to Use

Use this skill when:

  • Complex development tasks requiring multiple perspectives/roles
  • Tasks needing PM planning + architecture + implementation + testing
  • Debugging scenarios requiring systematic investigation
  • Code review and quality assurance workflows
  • Projects requiring end-to-end delivery (plan → build → test → deploy)

Team Structure

Main Session (Manager)
├── PM Agent (Product Manager)
├── Architect Agent
├── Developer Agent
├── Tester Agent
└── Debugger Agent

Workflow Phases

Phase 1: Task Analysis & Planning

  1. Manager (Main Session) analyzes task complexity
  2. Spawn PM Agent for requirement clarification
  3. PM returns: requirements doc, scope, acceptance criteria

Phase 2: Architecture & Design

  1. Spawn Architect Agent with PM output
  2. Architect returns: tech stack, module design, interfaces

Phase 3: Implementation

  1. Spawn Developer Agent with architecture specs
  2. Developer returns: implemented code

Phase 4: Quality Assurance

  1. Spawn Tester Agent with code + requirements
  2. Tester returns: test plan, test cases, bugs found

Phase 5: Debugging (if needed)

  1. Spawn Debugger Agent with bug reports
  2. Debugger returns: root cause analysis, fixes

Phase 6: Integration & Delivery

  1. Manager reviews all outputs
  2. Integrates final deliverable
  3. Validates against acceptance criteria
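The six phases above can be sketched as a sequential pipeline in which each agent receives the task plus the accumulated outputs of earlier phases. This is an illustrative sketch only: spawn stands in for whatever sub-agent API your runtime actually provides (e.g., sessions_spawn), and the phase briefs paraphrase the phase descriptions above.

```python
from typing import Callable, Dict

# Placeholder for the runtime's real sub-agent API (e.g., sessions_spawn).
SpawnFn = Callable[[str, str], str]

PHASES = [
    ("pm", "Clarify requirements; return requirements doc, scope, acceptance criteria."),
    ("architect", "Design tech stack, modules, and interfaces from the PM output."),
    ("developer", "Implement the code described by the architecture specs."),
    ("tester", "Write a test plan and cases; report bugs against the requirements."),
    ("debugger", "Root-cause and fix any bugs the tester reported."),
]


def run_workflow(task: str, spawn: SpawnFn) -> Dict[str, str]:
    """Run each phase, passing the task plus all earlier outputs as context."""
    outputs: Dict[str, str] = {}
    for role, brief in PHASES:
        context = f"Task: {task}\n" + "\n".join(
            f"[{r}] {o}" for r, o in outputs.items()
        )
        outputs[role] = spawn(role, f"{brief}\n{context}")
    # Phase 6: the manager (the caller) integrates and validates the outputs.
    return outputs
```

A stub spawn function makes the control flow testable without any real agents.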

Tool Selection Matrix

Task Type              Primary Tool   Secondary Tool   Reason
Complex architecture   Claude Code    Kimi             Deep reasoning, context management
Quick prototyping      Kimi           Native           Fast iteration, lower latency
Deep debugging         Claude Code    Kimi             Multi-file analysis, bug tracing
Code review            Kimi           Claude Code      Pattern recognition, best practices
Testing                Native         Kimi             Deterministic execution
Documentation          Kimi           Native           Chinese/English bilingual
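The matrix can be encoded as a simple lookup with a fallback, so the manager can pick a tool programmatically. The table contents are taken from the matrix above; the function name is illustrative, not part of the skill.

```python
# (primary, secondary) tool per task type, from the selection matrix above.
TOOL_MATRIX = {
    "complex architecture": ("Claude Code", "Kimi"),
    "quick prototyping": ("Kimi", "Native"),
    "deep debugging": ("Claude Code", "Kimi"),
    "code review": ("Kimi", "Claude Code"),
    "testing": ("Native", "Kimi"),
    "documentation": ("Kimi", "Native"),
}


def pick_tool(task_type: str, primary_available: bool = True) -> str:
    """Return the primary tool, or the secondary when the primary is unavailable."""
    primary, secondary = TOOL_MATRIX[task_type.lower()]
    return primary if primary_available else secondary
```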

Agent Personas

PM Agent

Role: Product Manager
Expertise: Requirements analysis, user stories, acceptance criteria
Output: PRD, user stories, scope definition
Tools: Kimi (for Chinese context), Claude Code (for complex products)

Architect Agent

Role: Architect
Expertise: System design, tech stack selection, API design
Output: Architecture doc, module diagrams, interface specs
Tools: Claude Code (preferred for architecture), Kimi (for validation)

Developer Agent

Role: Development Engineer
Expertise: Code implementation, refactoring, optimization
Output: Production-ready code
Tools: Claude Code (complex logic), Kimi (quick implementation), Native (boilerplate)

Tester Agent

Role: Test Engineer
Expertise: Test design, edge case identification, quality assurance
Output: Test cases, test scripts, bug reports
Tools: Native (execution), Kimi (test design), Claude Code (complex scenarios)

Debugger Agent

Role: Debugging Specialist
Expertise: Root cause analysis, performance profiling, bug fixing
Output: RCA report, patches, prevention recommendations
Tools: Claude Code (deep analysis), Kimi (pattern matching)

Execution Modes

Mode A: Full Team (Full Orchestration)

All six phases executed sequentially (Phase 5 as needed). Use for complex projects.

Mode B: Sprint Team (Dev + Test)

Skip PM/Architect phases. Use when requirements are clear.

Mode C: Firefighter (Debug Only)

Debugger agent only. Use for urgent bug fixes.

Mode D: Review Board (PM + Tester)

Code review workflow. Use for quality gates.
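The four modes map onto subsets of the phase list. A sketch of that mapping (mode keys and the function name are illustrative; phase numbers follow the workflow above):

```python
# Phase subsets per execution mode; numbers refer to the six workflow phases.
MODES = {
    "full": [1, 2, 3, 4, 5, 6],   # Mode A: full orchestration
    "sprint": [3, 4, 6],          # Mode B: dev + test, requirements already clear
    "debug": [5],                 # Mode C: firefighter, urgent bug fixes
    "review": [1, 4],             # Mode D: PM + Tester quality gate
}


def phases_for(mode: str) -> list:
    """Return the workflow phases a given mode executes, in order."""
    try:
        return MODES[mode]
    except KeyError:
        raise ValueError(f"unknown mode: {mode!r}") from None
```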

Quick Commands

# Full team deployment
class_seven deploy --mode=full --task="<description>"

# Sprint mode
class_seven deploy --mode=sprint --specs="<requirements>"

# Debug mode
class_seven deploy --mode=debug --bug="<bug description>"
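The class_seven commands above describe the skill's own hypothetical CLI; the implementation is not included in the listing. A minimal argparse front-end mirroring those flags might look like this sketch:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Mirror the class_seven deploy flags shown in Quick Commands."""
    parser = argparse.ArgumentParser(prog="class_seven")
    sub = parser.add_subparsers(dest="command", required=True)
    deploy = sub.add_parser("deploy", help="spawn a team for a task")
    deploy.add_argument("--mode", choices=["full", "sprint", "debug"], required=True)
    deploy.add_argument("--task", help="task description (full mode)")
    deploy.add_argument("--specs", help="requirements (sprint mode)")
    deploy.add_argument("--bug", help="bug description (debug mode)")
    return parser
```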

Best Practices

  1. Always start with task analysis - Determine mode and required agents
  2. Pass context explicitly - Each agent receives relevant previous outputs
  3. Set clear boundaries - Define what each agent should/shouldn't do
  4. Use appropriate timeout - Complex tasks need longer timeouts
  5. Review before integration - Manager validates all outputs

Error Handling

If an agent fails or produces insufficient output:

  1. Analyze failure reason
  2. Respawn with clearer instructions or different tool
  3. Consider breaking task into smaller sub-tasks
  4. Escalate to human if stuck after 2 retries
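The retry policy above (respawn with clearer instructions, escalate to a human after two retries) can be sketched as a loop. Here spawn and is_sufficient are placeholders for your runtime's agent call and output check; the clarification step is a stand-in for whatever instruction refinement you apply.

```python
from typing import Callable


class EscalateToHuman(Exception):
    """Raised when an agent still fails after the allowed retries."""


def run_with_retries(
    spawn: Callable[[str], str],
    instructions: str,
    is_sufficient: Callable[[str], bool],
    max_retries: int = 2,
) -> str:
    """Respawn with clarified instructions; escalate after max_retries."""
    attempt_instructions = instructions
    for attempt in range(max_retries + 1):
        output = spawn(attempt_instructions)
        if is_sufficient(output):
            return output
        # Refine the brief before the next attempt (placeholder refinement).
        attempt_instructions = f"{instructions}\n(Attempt {attempt + 2}: be more specific.)"
    raise EscalateToHuman(f"agent failed after {max_retries} retries")
```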

