AI Security Guard

v1.0.0

An AI security guard system integrating dangerous-command detection, multi-layer permission modes, hook-based safety checks, and sandbox isolation. Use when the user asks to run commands safely, detect dangerous operations, configure permission policies, audit AI behavior, or protect system security.

MIT-0
Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (dangerous-command detection, permission modes, hooks, sandbox) match the SKILL.md content. There are no unrelated environment variables, binaries, or install steps requested that would contradict the stated purpose.
Instruction Scope
SKILL.md instructs the agent to inspect commands, session context, and messages (e.g., ctx.args.command, ctx.messages), which is expected for a guard. It also references logging (logEvent) and user/session identifiers (getCurrentUserId/getSessionId) without specifying their destinations or implementations; depending on how these are implemented, they could send data to external logging endpoints. The document contains code examples but does not itself perform file or credential reads or network calls.
Install Mechanism
Instruction-only skill with no install spec and no code files to execute, so nothing is written to disk or fetched during install.
Credentials
No environment variables, credentials, or config paths are required. The sandbox config example mentions env: {} and process.cwd() but does not demand secrets or external credentials.
Persistence & Privilege
always:false and default autonomous invocation are used. The skill instructs registering hooks (pre_tool, pre_compact), which is expected for a guard, but such hooks would let an implementation intercept tool invocations; verify that any hook registration is authorized in your agent runtime before enabling autonomous invocation.
Assessment
This SKILL.md is an instruction/specification rather than an implementation: it describes patterns, permission rules, hooks, and sandbox configuration but doesn't include code that actually enforces them. Before using or trusting this skill in an agent:

  1. Confirm there is a concrete implementation (code) that enforces the rules and sandbox; instruction-only skills do nothing by themselves.
  2. Ask where logEvent, getCurrentUserId, getSessionId, executeCommand, requestPermission, and registerHook are implemented and where logs are sent; ensure logging endpoints and storage are trustworthy and won't leak sensitive commands.
  3. Verify the runtime's hook registration policy (who can register hooks, what hooks can intercept) so this skill cannot silently intercept unrelated tool calls.
  4. Test the rules in an isolated environment to ensure critical commands are actually blocked and the sandbox respects network/file limits.
  5. Treat this as a specification: it is coherent with its purpose (so considered benign), but it offers no guarantees without reviewing the implementing code and runtime integration.


Tags: hooks · latest · permissions · sandbox · security

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

AI Security Guard Pro

AI security guard system - core permission-management skills distilled from Claude Code

Core Capabilities

  1. Dangerous-command detection - regex pattern matching, risk-level assessment
  2. Multi-layer permission modes - default/auto/bypass/readonly
  3. Hook safety mechanism - pre/post checks, error handling
  4. Sandbox isolation - resource limits, network isolation

Permission Modes

| Mode | Behavior | Use Case |
|------|----------|----------|
| default | Ask every time | Sensitive operations, first-time use |
| auto | Execute automatically | Trusted environments |
| bypass | Full trust | Developer debugging |
| readonly | Read-only | Review / analysis |
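The mode table above can be sketched as a dispatch function. The source only gives one-line descriptions, so the exact semantics assumed here (bypass always allows, readonly denies anything not already allowed, auto approves "ask" results) are assumptions, and `finalDecision` is a hypothetical name:

```typescript
type Mode = 'default' | 'auto' | 'bypass' | 'readonly'
type Decision = 'allow' | 'deny' | 'ask'

// Hypothetical mapping from (mode, classifier decision) to a final action.
function finalDecision(mode: Mode, classified: Decision): Decision {
  switch (mode) {
    case 'bypass':   return 'allow'                                   // full trust
    case 'readonly': return classified === 'allow' ? 'allow' : 'deny' // never escalate
    case 'auto':     return classified === 'deny' ? 'deny' : 'allow'  // auto-approve asks
    default:         return classified                                // default: keep asking
  }
}
```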

Dangerous Pattern Detection

Critical (deny outright)

| Pattern | Meaning | Example |
|---------|---------|---------|
| rm -rf | Recursive delete | rm -rf / |
| > /dev/sdX | Raw disk write | echo 1 > /dev/sda |
| dd if= | Raw disk operation | dd if=/dev/zero of=/dev/sda |
| mkfs | Format filesystem | mkfs.ext4 /dev/sdb |
| shutdown | Power off | shutdown -h now |
| reboot | Reboot | reboot |
| kill -9 1 | Kill a system process | kill -9 1 |

High (ask for confirmation)

| Pattern | Meaning | Example |
|---------|---------|---------|
| curl \| sh | Remote script execution | |
| chmod 777 | Over-broad permissions | chmod 777 /path |
| sudo | Privilege escalation | sudo rm /var/log |
| wget | Remote download | wget -O script.sh url |
| pip install | Package installation | pip install unknown |
| npm i -g | Global installation | npm i -g package |

Medium (warn)

| Pattern | Meaning | Example |
|---------|---------|---------|
| rm | Delete files | rm file.txt |
| mv | Move/rename | mv old new |
| kill | Kill a process | kill -9 pid |
| pkill | Kill by name pattern | pkill node |

Core Implementation

Classification Decision

```typescript
type ClassificationResult = {
  decision: 'allow' | 'deny' | 'ask'
  risk: 'low' | 'medium' | 'high' | 'critical'
  reason: string
  patterns?: string[]
}

const DANGEROUS_PATTERNS = [
  { pattern: /rm\s+-rf/, risk: 'critical', reason: 'recursive delete' },
  { pattern: />\s*\/dev\/sd/, risk: 'critical', reason: 'raw disk write' },
  { pattern: /curl\s+.*\|\s*sh/, risk: 'high', reason: 'remote script execution' },
  { pattern: /chmod\s+777/, risk: 'high', reason: 'over-broad permissions' },
  { pattern: /dd\s+if=.*of=\/dev/, risk: 'critical', reason: 'raw disk write via dd' },
  { pattern: /mkfs/, risk: 'critical', reason: 'filesystem format' },
  { pattern: /shutdown|reboot/, risk: 'critical', reason: 'system power control' },
  { pattern: /kill\s+-9\s+1\b/, risk: 'critical', reason: 'kill of PID 1' },
]
```
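classifyBashCommand is referenced throughout this document but never defined. Below is a minimal, self-contained sketch consistent with the pattern tables above, using a subset of the patterns; the mapping of risk levels to decisions (critical denies, any other match asks) is an assumption. Array order matters: the first matching pattern wins.

```typescript
type Risk = 'low' | 'medium' | 'high' | 'critical'
type Decision = 'allow' | 'deny' | 'ask'

// Subset of the patterns above, kept local so the example is self-contained
const PATTERNS: { pattern: RegExp; risk: Risk; reason: string }[] = [
  { pattern: /rm\s+-rf/, risk: 'critical', reason: 'recursive delete' },
  { pattern: /curl\s+.*\|\s*sh/, risk: 'high', reason: 'remote script execution' },
  { pattern: /^rm\s/, risk: 'medium', reason: 'file deletion' },
]

function classifyBashCommand(command: string): { decision: Decision; risk: Risk; reason: string } {
  for (const { pattern, risk, reason } of PATTERNS) {
    if (pattern.test(command)) {
      // Assumption: critical is denied outright; any other matched risk asks.
      return { decision: risk === 'critical' ? 'deny' : 'ask', risk, reason }
    }
  }
  return { decision: 'allow', risk: 'low', reason: 'no dangerous pattern matched' }
}
```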

Permission Rules

```typescript
type PermissionRule = {
  source: 'cliArg' | 'command' | 'session' | 'project' | 'global'
  behavior: 'allow' | 'deny' | 'ask'
  pattern: string | RegExp
}

const RULE_SOURCES = [
  'cliArg',    // highest priority
  'command',   // given on the command line
  'session',   // session level
  'project',   // project config
  'global',    // global config
]
```
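One plausible way this priority order could be enforced is a first-match-wins walk over the sources. `applyRules` and `matchesPattern` are hypothetical names; the trailing-`*` prefix wildcard and the fall-back to asking are assumptions, not behavior the source specifies:

```typescript
type Behavior = 'allow' | 'deny' | 'ask'
type Source = 'cliArg' | 'command' | 'session' | 'project' | 'global'
type Rule = { behavior: Behavior; pattern: string | RegExp }

const PRIORITY: Source[] = ['cliArg', 'command', 'session', 'project', 'global']

// Assumption: a trailing "*" in a string pattern acts as a prefix wildcard.
function matchesPattern(pattern: string | RegExp, command: string): boolean {
  if (pattern instanceof RegExp) return pattern.test(command)
  return pattern.endsWith('*')
    ? command.startsWith(pattern.slice(0, -1))
    : command === pattern
}

// Walk sources from highest to lowest priority; the first matching rule wins.
function applyRules(command: string, rules: Partial<Record<Source, Rule[]>>): Behavior {
  for (const source of PRIORITY) {
    for (const rule of rules[source] ?? []) {
      if (matchesPattern(rule.pattern, command)) return rule.behavior
    }
  }
  return 'ask' // assumption: no matching rule falls back to asking the user
}
```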

Hook Safety Mechanism

Security-Check Hooks

```typescript
// Hook run before each tool execution
registerHook('pre_tool', async (ctx) => {
  if (ctx.tool === 'Bash') {
    const { decision, risk } = classifyBashCommand(ctx.args.command)

    if (decision === 'deny') {
      throw new Error(`Command denied: ${risk} risk - ${ctx.args.command}`)
    }

    if (decision === 'ask') {
      await requestPermission(ctx.args.command, risk)
    }
  }
})

// Hook run before context compaction
registerHook('pre_compact', async (ctx) => {
  // Preserve any messages that contain sensitive information
  for (const msg of ctx.messages) {
    if (containsSensitiveData(msg)) {
      ctx.preserveMessageIds.push(msg.id)
    }
  }
})
```
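containsSensitiveData is used in the pre_compact hook above but never defined. A minimal sketch under the assumption that secrets can be spotted with a few regexes; the pattern list here is illustrative, not exhaustive:

```typescript
// Illustrative patterns only: PEM private keys, AWS-style access key IDs,
// and common secret-bearing "name: value" / "name=value" prefixes.
const SENSITIVE = [
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
  /\b(?:AKIA|ASIA)[0-9A-Z]{16}\b/,
  /\b(?:password|passwd|secret|token)\s*[:=]/i,
]

function containsSensitiveData(msg: { content: string }): boolean {
  return SENSITIVE.some((re) => re.test(msg.content))
}
```

A real implementation would likely also scan structured tool outputs, not just message text.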

Error-Handling Strategy

```typescript
type ErrorStrategy = 'ignore' | 'log' | 'warn' | 'throw'

function createHookExecutor(strategy: ErrorStrategy = 'log') {
  return async (event: string, context: any) => {
    try {
      await executeHook(event, context)
    } catch (error) {
      switch (strategy) {
        case 'ignore': break
        case 'log': console.error(`Hook ${event} error:`, error); break
        case 'warn': console.warn(`Hook ${event} warning:`, error); break
        case 'throw': throw error
      }
    }
  }
}
```

Sandbox Isolation

Sandbox Configuration

```typescript
type SandboxConfig = {
  timeout: number             // timeout (ms)
  memoryLimit: number         // memory limit (MB)
  allowedDirs: string[]       // directories the command may access
  blockedDirs: string[]       // directories that are off limits
  networkAccess: boolean      // whether network access is allowed
  env: Record<string, string>
}

const DEFAULT_SANDBOX_CONFIG: SandboxConfig = {
  timeout: 30000,
  memoryLimit: 512,
  allowedDirs: [process.cwd()],
  blockedDirs: ['/etc', '/root', '/home/*/.ssh'],
  networkAccess: true,
  env: {}
}
```
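The blockedDirs entries mix literal prefixes with a `*` wildcard (`/home/*/.ssh`). The document does not say how they are matched; a hypothetical matcher, assuming `*` stands for exactly one path component:

```typescript
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

// Hypothetical helper: a path is blocked if it equals, or lies under, any
// blockedDirs entry, with "*" matching a single path component.
function isPathBlocked(path: string, blockedDirs: string[]): boolean {
  return blockedDirs.some((dir) => {
    const re = new RegExp('^' + dir.split('*').map(escapeRegExp).join('[^/]+') + '(/|$)')
    return re.test(path)
  })
}
```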

Sandbox Decision

```typescript
function shouldUseSandbox(command: string): boolean {
  const result = classifyBashCommand(command)

  return result.risk === 'critical'
    || result.risk === 'high'
    || isInBlockedList(command)
}
```

Permission Configuration

Project Level

```json
{
  "permissions": {
    "session": {
      "allow": ["git *", "npm test", "ls *", "node *"],
      "deny": ["rm -rf", "curl | sh", "chmod 777"]
    }
  }
}
```

User Level

```json
{
  "permissions": {
    "global": {
      "allow": ["echo", "pwd", "ls", "cat"],
      "deny": ["rm -rf /", "> /dev/sda", "dd if="]
    }
  }
}
```

Audit Logging

```typescript
function logPermissionDecision(
  command: string,
  result: ClassificationResult,
  context: PermissionContext
): void {
  logEvent('permission_decision', {
    command: sanitizeCommand(command),
    decision: result.decision,
    risk: result.risk,
    reason: result.reason,
    mode: context.mode,
    timestamp: Date.now(),
    userId: getCurrentUserId(),
    sessionId: getSessionId()
  })
}
```
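sanitizeCommand is called above but not implemented in this document. A hedged sketch: it assumes secrets appear as well-known flag names or KEY=VALUE assignments, which is only a heuristic, and masks their values so raw credentials never reach the audit log:

```typescript
// Hypothetical sanitizer: redacts values of common secret-carrying flags
// (--password, --token, --key, --secret) and of ALL_CAPS env-style
// assignments whose names contain TOKEN/SECRET/PASSWORD/KEY.
function sanitizeCommand(command: string): string {
  return command
    .replace(/(--?(?:password|token|key|secret)[= ])\S+/gi, '$1[REDACTED]')
    .replace(/\b([A-Z_]*(?:TOKEN|SECRET|PASSWORD|KEY)[A-Z_]*)=\S+/g, '$1=[REDACTED]')
}
```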

Usage Examples

Basic Usage

```typescript
import { classifyBashCommand, applyPermissionRules } from './permissions.js'

// Direct classification
const result = classifyBashCommand('rm -rf /tmp/test')
// { decision: 'deny', risk: 'critical', reason: 'recursive delete' }

// Permission decision with context
const context = {
  mode: 'default',
  rules: {
    session: [{ behavior: 'allow', pattern: 'git commit' }],
    project: []
  }
}
const decision = applyPermissionRules('git commit -m "fix"', context)
```

Integrated Execution

```typescript
async function executeBashWithPermission(
  command: string,
  context: PermissionContext
): Promise<ToolResult> {
  const classification = applyPermissionRules(command, context)

  switch (classification.decision) {
    case 'allow':
      return await executeCommand(command)

    case 'deny':
      return {
        ok: false,
        error: `Command denied: ${classification.reason}`
      }

    case 'ask':
      return await requestPermission(command, classification)
  }
}
```

Security Checklist

Pre-Execution Checks

  • Does the command match a dangerous pattern?
  • Is sandbox isolation required?
  • Do the permission rules allow it?
  • Is user confirmation needed?

Post-Execution Checks

  • Did the command execute successfully?
  • Should an audit log entry be written?
  • Do temporary files need cleanup?
  • Was resource usage within limits?

Files: 2 total
