Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Team Coordinator

v1.0.0

Coordinates multi-agent collaboration by decomposing tasks, managing dependencies, scheduling execution, and aggregating results in a task orchestration system.

Security Scan
Capability signals
Crypto · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign (view report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (multi-agent orchestration, task decomposition, scheduling, message bus) align with the TypeScript pseudocode in SKILL.md. The code shows expected components: BaseAgent, Orchestrator, MessageBus, DAG scheduling and LLM calls — these are coherent with the stated purpose.
Instruction Scope
SKILL.md contains concrete runtime instructions that call external LLM APIs (fetch to https://api.openai.com/v1/chat/completions using process.env.OPENAI_API_KEY), build prompts, parse JSON plans, and route tasks via an LLMRouter. The instructions therefore access environment variables and transmit task data to external endpoints. The skill's instructions are not purely explanatory: they prescribe networked actions and secret usage that are broader in scope than the skill.json metadata declares.
Install Mechanism
This is an instruction-only skill with no install spec and no code files to be written to disk by the installer. That is the lowest-risk install mechanism.
Credentials
The SKILL.md explicitly uses process.env.OPENAI_API_KEY to call OpenAI endpoints, but the skill metadata lists no required environment variables or primary credential. A multi-agent orchestrator that invokes LLMs legitimately needs an API key, so this omission is an incoherence — the skill requests secret access at runtime but does not declare it. There may also be unspecified environment access in other truncated sections.
Persistence & Privilege
always is false and the skill is user-invocable with model invocation allowed (defaults). Those are standard/default privileges. There is no indication the skill requests permanent/global presence or modifies other skills' settings.
Scan Findings in Context
[ENV_VAR_OPENAI_API_KEY_IN_SKILL_MD] unexpected: SKILL.md contains code that does `process.env.OPENAI_API_KEY` and sends requests to api.openai.com. Access to an OpenAI API key is expected for an LLM-driven orchestrator, but the skill.json metadata did not declare any required environment variables or credentials — the reference is therefore not declared and is a mismatched requirement.
What to consider before installing
This skill looks like a legitimate multi-agent orchestrator, but SKILL.md instructs runtime use of an OpenAI API key while the metadata lists no required credentials; that mismatch is the main red flag. Before installing:

  1. Ask the publisher to explicitly declare required env vars (OPENAI_API_KEY) and document what data is sent to external LLMs.
  2. Avoid supplying a production/global API key; use a limited-scope or quota-limited key, or a sandbox account.
  3. Review the complete SKILL.md (the provided excerpt is truncated) for other env var or file accesses.
  4. Run the skill in an isolated environment or container and monitor network traffic on first run.
  5. If you don't trust the unknown owner, do not install.

If the author updates the metadata to list required credentials and explains data handling, re-evaluate; that information would likely move this to benign.


31 downloads · 0 stars · 1 version · Updated 9h ago
v1.0.0 · MIT-0

Multi-Agent Orchestrator

Overview

Design and manage multi-agent collaboration systems.

Architecture

┌─────────────┐
│ Orchestrator │ ← task decomposition & coordination
└──────┬──────┘
       │
   ┌───┼───┐
   ▼   ▼   ▼
 ┌───┐┌───┐┌───┐
 │ A ││ B ││ C │ ← specialist agents
 └───┘└───┘└───┘

Core Implementation

1. Agent Base Class

interface AgentConfig {
  name: string;
  role: string;
  capabilities: string[];
  llm: LLMConfig;
  tools: Tool[];
  instructions: string;
}

class BaseAgent {
  readonly config: AgentConfig;  // readable so the orchestrator can register agents by name
  protected memory: AgentMemory;
  
  constructor(config: AgentConfig) {
    this.config = config;
    this.memory = new AgentMemory(config.name);
  }
  
  async think(task: Task): Promise<Response> {
    const context = await this.memory.buildContext(task.description);
    const prompt = this.buildPrompt(task, context);
    const response = await this.callLLM(prompt);
    await this.memory.add({ type: 'semantic', content: task.description + ' -> ' + response.content, importance: 8 });
    return response;
  }
  
  protected buildPrompt(task: Task, context: string): Message[] {
    return [
      { role: 'system', content: this.config.instructions },
      { role: 'system', content: context },
      { role: 'user', content: task.description }
    ];
  }
  
  protected async callLLM(messages: Message[]): Promise<Response> {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',  // required for a JSON body
        'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
      },
      body: JSON.stringify({ model: this.config.llm.model, messages, tools: this.config.tools.map(t => t.definition) })
    });
    if (!res.ok) throw new Error(`LLM call failed: ${res.status}`);
    return res.json();
  }
}
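The prompt layout in BaseAgent.buildPrompt can be exercised in isolation. The sketch below mirrors it as a standalone function; the Msg shape and all string values are illustrative, not part of the skill:

```typescript
// Minimal message shape mirroring the Message type used above
interface Msg { role: 'system' | 'user'; content: string; }

// Standalone version of BaseAgent.buildPrompt for illustration
function buildPrompt(instructions: string, context: string, taskDescription: string): Msg[] {
  return [
    { role: 'system', content: instructions },   // agent persona and rules
    { role: 'system', content: context },        // retrieved memory context
    { role: 'user', content: taskDescription },  // the task itself
  ];
}

// Usage with hypothetical values
const messages = buildPrompt('You are a research agent.', 'Prior findings: none.', 'Summarize topic X.');
```

Keeping the instructions and memory context in separate system messages lets each be swapped independently per task.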

2. Orchestrator

interface TaskResult {
  agentId: string;
  status: 'pending' | 'running' | 'done' | 'failed';
  output?: string;
  dependencies: string[];
  startTime?: number;
  endTime?: number;
}

class Orchestrator {
  private agents: Map<string, BaseAgent> = new Map();
  private taskGraph: DAG<Task>;
  
  constructor(private llmRouter: LLMRouter) {}
  
  registerAgent(agent: BaseAgent) {
    this.agents.set(agent.config.name, agent);
  }
  
  async execute(goal: string): Promise<string> {
    // 1. Decompose the goal into tasks
    const plan = await this.decompose(goal);
    
    // 2. Build the dependency DAG
    this.taskGraph = this.buildDAG(plan);
    
    // 3. Schedule and run
    const results = await this.schedule();
    
    // 4. Aggregate results
    return this.summarize(goal, results);
  }
  
  private async decompose(goal: string): Promise<Task[]> {
    const response = await this.llmRouter.route({
      prompt: `Decompose the following goal into executable subtasks and return a JSON array:
      
Goal: ${goal}

Requirements:
- Each subtask is owned by exactly one agent
- Make task dependencies explicit
- Return format: [{"id":"t1","description":"...","agent":"researcher","depends":[]},...]`,
      system: 'You are an expert at task decomposition.'
    });
    
    return JSON.parse(response.content);
  }
  
  private async schedule(): Promise<Map<string, TaskResult>> {
    const results = new Map<string, TaskResult>();
    const pending = new Set(this.taskGraph.nodes);
    const running = new Map<string, Promise<void>>();
    const maxConcurrent = 3;
    
    while (pending.size > 0 || running.size > 0) {
      // Launch runnable tasks up to the concurrency limit
      while (running.size < maxConcurrent) {
        const next = this.findNextRunnable(pending, results);
        if (!next) break;
        
        pending.delete(next);
        const p = this.runTask(next, results)
          .catch(console.error)
          .finally(() => running.delete(next.id)); // each task removes itself when it settles
        running.set(next.id, p);
      }
      
      if (running.size === 0) break; // nothing runnable and nothing in flight: avoid spinning
      
      // Wait for any in-flight task to settle before scheduling more
      await Promise.race(running.values());
    }
    
    return results;
  }
  
  private async runTask(task: Task, results: Map<string, TaskResult>) {
    results.set(task.id, { agentId: task.agent, status: 'running', dependencies: task.depends || [], startTime: Date.now() });
    
    try {
      // Wait for dependencies to finish
      for (const depId of task.depends || []) {
        const dep = results.get(depId);
        if (dep?.status !== 'done') {
          await this.waitFor(depId, results);
        }
      }
      
      const agent = this.agents.get(task.agent);
      if (!agent) throw new Error(`Unknown agent: ${task.agent}`);
      const context = this.buildContext(task, results);
      const response = await agent.think({ id: task.id, description: task.description, context });
      
      results.set(task.id, { ...results.get(task.id)!, status: 'done', output: response.content, endTime: Date.now() });
    } catch (error) {
      results.set(task.id, { ...results.get(task.id)!, status: 'failed', output: String(error), endTime: Date.now() });
    }
  }
  
  private buildContext(task: Task, results: Map<string, TaskResult>): string {
    return (task.depends || []).map(depId => {
      const dep = results.get(depId);
      return dep?.output || '';
    }).join('\n\n');
  }
}
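schedule() calls findNextRunnable, which the excerpt never defines. One plausible implementation (a hypothetical sketch, not the author's code) returns any pending task whose dependencies have all completed:

```typescript
// Simplified shapes standing in for the Task and TaskResult types above
interface TaskLite { id: string; depends: string[]; }
interface ResultLite { status: 'pending' | 'running' | 'done' | 'failed'; }

// Hypothetical findNextRunnable: first pending task whose deps are all done
function findNextRunnable(
  pending: Set<TaskLite>,
  results: Map<string, ResultLite>
): TaskLite | undefined {
  for (const task of pending) {
    if (task.depends.every(d => results.get(d)?.status === 'done')) return task;
  }
  return undefined; // nothing runnable yet
}

// Usage: t2 depends on t1, so t1 is runnable first
const t1: TaskLite = { id: 't1', depends: [] };
const t2: TaskLite = { id: 't2', depends: ['t1'] };
const pending = new Set([t2, t1]);
const results = new Map<string, ResultLite>();
const first = findNextRunnable(pending, results);
```

Linear scanning is fine at this scale; a larger DAG would keep a ready-queue updated as tasks complete instead of rescanning.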

3. Message Bus

interface Subscriber {
  id: string;
  handler: (msg: Message) => void;
}

class MessageBus {
  private subscriptions = new Map<string, Subscriber[]>();
  
  publish(channel: string, message: Message) {
    const subs = this.subscriptions.get(channel) || [];
    for (const sub of subs) {
      sub.handler(message);
    }
  }
  
  subscribe(channel: string, handler: (msg: Message) => void): () => void {
    if (!this.subscriptions.has(channel)) {
      this.subscriptions.set(channel, []);
    }
    const sub: Subscriber = { id: crypto.randomUUID(), handler };
    this.subscriptions.get(channel)!.push(sub);
    return () => this.unsubscribe(channel, sub.id);
  }
  
  unsubscribe(channel: string, subId: string) {
    const subs = this.subscriptions.get(channel) || [];
    const idx = subs.findIndex(s => s.id === subId);
    if (idx >= 0) subs.splice(idx, 1);
  }
}

// Message types
interface Message {
  id: string;
  type: 'request' | 'response' | 'broadcast' | 'event';
  from: string;
  to?: string;
  content: any;
  timestamp: number;
}
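The subscribe/publish/unsubscribe cycle can be demonstrated with a compact, self-contained variant of the bus above (MiniBus and the string payload are illustrative simplifications of the MessageBus/Message shapes):

```typescript
type Handler = (msg: string) => void;

// Compact pub/sub sketch matching the MessageBus shape above
class MiniBus {
  private subs = new Map<string, Map<string, Handler>>();

  subscribe(channel: string, handler: Handler): () => void {
    if (!this.subs.has(channel)) this.subs.set(channel, new Map());
    const id = Math.random().toString(36).slice(2); // stand-in for crypto.randomUUID()
    this.subs.get(channel)!.set(id, handler);
    return () => { this.subs.get(channel)?.delete(id); }; // unsubscribe closure
  }

  publish(channel: string, msg: string) {
    for (const h of this.subs.get(channel)?.values() ?? []) h(msg);
  }
}

// Usage: subscribe, publish, then unsubscribe
const bus = new MiniBus();
const seen: string[] = [];
const off = bus.subscribe('tasks', m => seen.push(m));
bus.publish('tasks', 'task-1');
off();
bus.publish('tasks', 'task-2'); // no longer delivered
```

Returning the unsubscribe closure from subscribe keeps cleanup next to registration, which matters when agents are torn down mid-run.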

4. State Machine

type AgentState = 'idle' | 'thinking' | 'waiting' | 'acting' | 'error';

interface AgentSession {
  id: string;
  agentId: string;
  state: AgentState;
  currentTask?: string;
  history: Turn[];
  sharedContext: Record<string, any>;
}

class StateManager {
  private sessions = new Map<string, AgentSession>();
  
  transition(sessionId: string, newState: AgentState) {
    const session = this.sessions.get(sessionId);
    if (!session) return;
    if (!this.canTransition(session.state, newState)) return; // reject illegal transitions
    
    const oldState = session.state;
    session.state = newState;
    
    // State-transition hook
    this.onTransition(sessionId, oldState, newState);
  }
  
  // Transition rules
  private canTransition(from: AgentState, to: AgentState): boolean {
    const rules: Record<AgentState, AgentState[]> = {
      idle: ['thinking'],
      thinking: ['waiting', 'acting', 'error', 'idle'],
      waiting: ['thinking', 'error', 'idle'],
      acting: ['thinking', 'error', 'idle'],
      error: ['idle', 'thinking']
    };
    return rules[from]?.includes(to) || false;
  }
}
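The transition guard can be checked in isolation. This reproduces the rule table as a pure function to show which moves it accepts and rejects:

```typescript
type AgentState = 'idle' | 'thinking' | 'waiting' | 'acting' | 'error';

// Same rule table as StateManager.canTransition above
const rules: Record<AgentState, AgentState[]> = {
  idle: ['thinking'],
  thinking: ['waiting', 'acting', 'error', 'idle'],
  waiting: ['thinking', 'error', 'idle'],
  acting: ['thinking', 'error', 'idle'],
  error: ['idle', 'thinking'],
};

function canTransition(from: AgentState, to: AgentState): boolean {
  return rules[from]?.includes(to) ?? false;
}
```

Note that an idle agent can only move to thinking: it cannot jump straight to acting, which forces every action to be preceded by a reasoning step.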

Communication Patterns

Pattern           | Description                 | Use case
Broadcast         | All agents receive          | Global notifications
Point-to-point    | A designated agent receives | Task assignment
Publish/Subscribe | Distributed by topic        | Event-driven workflows
Blackboard        | Shared knowledge space      | Collaborative reasoning
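The blackboard pattern listed above has no code elsewhere in the excerpt. A minimal sketch of a shared, observable knowledge space (the Blackboard class and its API are assumptions, not part of the skill):

```typescript
// A minimal blackboard: agents read and write a shared key-value space,
// and watchers are notified on every write
class Blackboard {
  private space = new Map<string, unknown>();
  private listeners: Array<(key: string, value: unknown) => void> = [];

  write(key: string, value: unknown) {
    this.space.set(key, value);
    for (const l of this.listeners) l(key, value); // notify watchers
  }

  read<T>(key: string): T | undefined {
    return this.space.get(key) as T | undefined;
  }

  watch(listener: (key: string, value: unknown) => void) {
    this.listeners.push(listener);
  }
}

// Usage: one agent posts a finding; another reacts to it
const board = new Blackboard();
const reactions: string[] = [];
board.watch((key) => reactions.push(key));
board.write('hypothesis', 'The bug is in the scheduler');
```

Unlike point-to-point messaging, writers do not need to know who consumes their output, which suits open-ended collaborative reasoning.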

Common Patterns

Role Play

class RolePlayOrchestrator extends Orchestrator {
  async execute(goal: string) {
    // Assign roles
    const planner = this.getAgent('planner');
    const executor = this.getAgent('executor');
    const critic = this.getAgent('critic');
    
    const plan = await planner.think({ description: goal, context: '' });
    const result = await executor.think({ description: plan.content, context: '' });
    const review = await critic.think({ description: result.content, context: '' });
    
    return review.content;
  }
}

Debate

async debate(topic: string, rounds = 3) {
  const pro = this.getAgent('pro');
  const con = this.getAgent('con');
  const judge = this.getAgent('judge');
  
  let context = '';
  for (let i = 0; i < rounds; i++) {
    const proArg = await pro.think({ description: `Pro argument (round ${i + 1}): ${topic}`, context });
    context += `\nPro: ${proArg.content}`;
    
    const conArg = await con.think({ description: `Con argument (round ${i + 1}): ${topic}`, context });
    context += `\nCon: ${conArg.content}`;
  }
  
  return judge.think({ description: `Verdict: ${topic}`, context });
}

Best Practices

  1. Single responsibility: each agent owns a clearly defined specialty
  2. Loose coupling: communicate through the message bus; avoid direct dependencies
  3. Timeout control: prevent any single agent from hanging the pipeline
  4. Circuit breaking: automatically degrade after repeated failures
  5. Observability: full logging and tracing
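Practice 3 (timeout control) can be implemented with a small wrapper; withTimeout is a hypothetical helper, not part of the skill:

```typescript
// Reject if a promise does not settle within its time budget
function withTimeout<T>(p: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Clear the timer whichever branch wins, so the process can exit cleanly
  return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
}
```

Inside runTask, the agent call could then be wrapped as, e.g., `await withTimeout(agent.think(task), 30_000, task.id)` (hypothetical placement), turning a hung agent into an ordinary failed task.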

Usage

  1. Install the skill
  2. Configure as needed
  3. Run with OpenClaw
