Federated Chaos Testing
v1.0.0
Simulate faults in federated learning systems by injecting noise, dropout, data poisoning, and delays to evaluate model robustness and fault tolerance.
MIT-0
Security Scan
OpenClaw
Benign
medium confidence
Purpose & Capability
The name and description (federated chaos testing) match the SKILL.md content: the document describes fault types, injection strategies (noise, dropout, poisoning, delays), and robustness metrics. Nothing requested or described is unrelated to federated chaos testing.
Instruction Scope
SKILL.md is high-level guidance (a fault taxonomy, injection strategies, a resilience metric) rather than step-by-step commands. It does not instruct reading files, accessing environment variables, or contacting external endpoints. However, the instructions are deliberately open-ended (the agent or user must design and execute injections), which grants operational discretion and creates a dual-use risk if used against production or third-party systems; the skill lacks explicit safety/authorization constraints.
Install Mechanism
No install spec or code files are present; this is instruction-only, so nothing will be written to disk or auto-downloaded by installing the skill.
Credentials
The skill declares no required environment variables, credentials, or config paths. There are no mismatched or unexplained credential requests.
Persistence & Privilege
Skill flags are default (always: false, user-invocable: true, model invocation enabled). No elevated persistence or cross-skill/system configuration changes are requested.
Assessment
This skill is internally consistent: a conceptual playbook for testing federated learning robustness. It is dual-use, however — the same techniques used for testing can be misused against live or third-party systems. Before installing or using it, ensure you will: 1) run experiments only in isolated/sandboxed test environments (not production), 2) obtain appropriate authorization from system owners and data custodians, 3) add explicit safeguards and rollback procedures, 4) log and audit all injected actions, and 5) prefer an implementation that requires explicit operator confirmation for any destructive or data-modifying steps. If you expect the skill to perform concrete actions, request a version with precise, auditable runbooks or code (and perform a code review) rather than relying on high-level instructions.
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Federated Chaos Testing
Federated learning × chaos engineering: verify the learning quality of distributed AI systems under node failures.
When to use
- Robustness validation of federated learning systems
- Assessing the impact of malicious or faulty nodes on the global model
- Designing fault-tolerant federated training protocols
Core insights
1. The fault surface of federated systems differs from traditional systems
In traditional distributed systems, faults are correctness problems (data consistency, request completeness). In federated learning, faults are quality problems: a node's model update is malicious, low-quality, or based on skewed data, and after global aggregation the model degrades.
Fault mode taxonomy:
- Silent faults: a node returns a plausible-looking model update containing subtle bias (the hardest to detect)
- Byzantine faults: a node returns arbitrary or malicious model parameters
- Data-skew faults: a node's local data distribution deviates severely from the global distribution
- Communication faults: model updates are lost or corrupted in transit
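This taxonomy can be sketched as a small fault injector. The enum names, bias size, and parameter range below are illustrative choices, not part of the skill:

```python
import enum
import random

class FaultMode(enum.Enum):
    """Fault taxonomy from the section above (names are illustrative)."""
    SILENT = "silent"          # plausible-looking but subtly biased update
    BYZANTINE = "byzantine"    # arbitrary/malicious parameters
    DATA_SKEW = "data_skew"    # local distribution far from global
    COMMUNICATION = "comm"     # update lost in transit

def apply_fault(update, mode, rng=None):
    """Return a faulty version of a client's model update (list of floats)."""
    rng = rng or random.Random(0)
    if mode is FaultMode.SILENT:
        # small constant bias: hard to spot, degrades the global model slowly
        return [w + 0.01 for w in update]
    if mode is FaultMode.BYZANTINE:
        # arbitrary parameters, unrelated to any real training
        return [rng.uniform(-10, 10) for _ in update]
    if mode is FaultMode.COMMUNICATION:
        # dropped update: nothing reaches the aggregator
        return None
    # DATA_SKEW is a property of the node's data, not of the update itself
    return update
```

Note that a data-skew fault falls through unchanged: it has to be injected at the dataset level, not by perturbing the update.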
2. Federated chaos injection strategies
- Gradient perturbation injection: inject noise into random nodes' model updates to test the robustness of the aggregation algorithm
- Node-departure simulation: randomly evict nodes during training to check whether the global model degrades
- Data-poisoning simulation: inject poisoned data at a subset of nodes to test anomaly-detection mechanisms
- Communication-delay injection: simulate high-latency or lossy networks to test the convergence of asynchronous aggregation
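The first strategy can be sketched in a few lines: add Gaussian noise to a random fraction of client updates, then compare a plain mean aggregator against a coordinate-wise median (a simple stand-in for robust aggregation). The fraction, noise scale, and client values are illustrative:

```python
import random
import statistics

def aggregate(updates, method="mean"):
    """Coordinate-wise aggregation of client updates (lists of floats)."""
    agg = statistics.mean if method == "mean" else statistics.median
    return [agg(coord) for coord in zip(*updates)]

def inject_noise(updates, frac, scale, rng):
    """Add Gaussian noise to a random fraction of client updates."""
    faulty = set(rng.sample(range(len(updates)), int(frac * len(updates))))
    return [
        [w + rng.gauss(0, scale) for w in u] if i in faulty else list(u)
        for i, u in enumerate(updates)
    ]

rng = random.Random(42)
honest = [[1.0, 2.0] for _ in range(10)]      # all clients agree
noisy = inject_noise(honest, frac=0.3, scale=5.0, rng=rng)
# with 3 of 10 clients perturbed, the median recovers the honest value
# while the mean is pulled toward the noise
print(aggregate(noisy, "mean"), aggregate(noisy, "median"))
```

With fewer than half the clients faulty, the coordinate-wise median here returns exactly the honest value, which is the intuition behind median-style robust aggregators.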
3. Federated resilience metric
FERI = (baseline accuracy - post-fault accuracy) / fraction of faulty nodes
Lower FERI is better (faulty nodes have little effect on the global model)
Goal: FERI < 0.1 (10% faulty nodes cause less than a 1% accuracy drop)
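The FERI formula translates directly to code; the accuracy figures below are hypothetical, chosen only to illustrate a run that meets the < 0.1 goal:

```python
def feri(baseline_acc, faulted_acc, fault_fraction):
    """FERI = (baseline accuracy - post-fault accuracy) / faulty-node fraction.
    Lower is better: faulty nodes barely move the global model."""
    if fault_fraction <= 0:
        raise ValueError("fault_fraction must be positive")
    return (baseline_acc - faulted_acc) / fault_fraction

# 10% faulty nodes causing a 0.5-point accuracy drop: well under the 0.1 goal
print(round(feri(0.92, 0.915, 0.10), 3))  # → 0.05
```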
Collision sources
federated-learning × chaos-engineering-playbook × chaos-data-pipeline × self-healing-database (self-healing patterns) × byzantine-fault-tolerance concepts
Files
1 total
