Axioma KAN System

Axioma KAN System — Complete KAN lifecycle management for OpenClaw agents. Use when: (1) creating new KAN concepts, (2) training KAN models, (3) assembling KAN pipelines, (4) T-KAN integration for memory enhancement, (5) monitoring KAN health, (6) auto-evolving KANs based on research, (7) training all 19 KANs (14 watchdogs + 5 L9 Swarm) via unified watchdog trainer. This skill provides: kan_creator.py, kan_trainer.py, kan_assembler.py, kan_health.py, watchdog_unified_trainer.py, plus integration with AutoResearch pipeline. Requires PyTorch >= 1.9.

Install

```bash
openclaw skills install axiomata-kan-system
```

🧠 Axioma KAN System v1.5

Complete KAN lifecycle management for OpenClaw agents.

| Info | Value |
|---|---|
| Version | 1.5.0 |
| Status | ✅ Verified |
| Components | 4 scripts + AutoResearch integration |
| Target | 19 KANs (14 watchdogs + 5 L9 Swarm) auto-trained nightly |

Overview

This skill provides complete KAN (Kolmogorov-Arnold Networks) lifecycle management for the Axioma cluster:

  • Create new KAN concepts and architectures
  • Train KAN models with PyTorch
  • Assemble KAN pipelines and connections
  • Monitor KAN health and auto-evolve
  • Integrate T-KAN for memory enhancement

Table of Contents

  1. Purpose — Overview and goals
  2. When to Use — Trigger scenarios
  3. Prerequisites — Requirements
  4. Tools — Core scripts
  5. Quick Start — Getting started
  6. KAN Core Concepts — Technical details
  7. Error Handling — Troubleshooting
  8. Constraints — Limitations
  9. Performance — Benchmarks
  10. Related Files — File structure
  11. References — Resources
  12. Support — Help and contact

1. Purpose

Axioma KAN System is the cluster's core intelligent infrastructure providing complete KAN lifecycle management:

| Function | Description |
|---|---|
| Concept Creation | Design new KAN architectures using `kan_creator.py` |
| Model Training | Train KAN weights and parameters using `kan_trainer.py` |
| Pipeline Assembly | Connect multiple KANs into pipelines using `kan_assembler.py` |
| T-KAN Integration | Add temporal KAN for memory enhancement |
| Health Monitoring | Monitor KAN performance using `kan_health.py` |
| Auto-Optimization | Auto-retrain degraded KANs using `kan_auto_task.py` |

2. When to Use

| Trigger | Action |
|---|---|
| "Create a new KAN" | Run `kan_creator.py` |
| "Train a KAN model" | Run `kan_trainer.py` |
| "Assemble KAN pipeline" | Run `kan_assembler.py` |
| "Check KAN health" | Run `kan_health.py` |
| "Auto-evolve KANs" | Run `kan_auto_task.py` |
| "Integrate T-KAN" | Run `kan_assembler.py --integrate-t-kan` |
| "Create AutoResearch→KAN pipeline" | Run `autoresearch_task.py` |

3. Prerequisites

| Requirement | Version | Check Command | Status |
|---|---|---|---|
| Python | >= 3.8 | `python3 --version` | [OK] |
| NumPy | >= 1.21 | `python3 -c "import numpy; print(numpy.__version__)"` | [OK] |
| PyTorch | >= 1.9 | `python3 -c "import torch; print(torch.__version__)"` | [OK] |
| Qdrant | running | `curl -s http://localhost:6333/collections` | [OK] |
| Ollama | running | `curl -s http://localhost:11434/api/tags` | [OK] |
| Skill directory | exists | `ls <skill-dir>/` | [OK] |
| sudo rights | Docker | `sudo docker ps` | [OK] |
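The prerequisite checks above can also be scripted. A minimal sketch (a hypothetical helper, not part of the skill's scripts) that probes for the required Python modules and service ports:

```python
import importlib.util
import socket

def module_available(name: str) -> bool:
    """Return True if a Python module can be imported."""
    return importlib.util.find_spec(name) is not None

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preflight() -> dict:
    """Collect the documented prerequisite checks into one report."""
    return {
        "numpy": module_available("numpy"),
        "torch": module_available("torch"),
        "qdrant": port_open("localhost", 6333),
        "ollama": port_open("localhost", 11434),
    }

if __name__ == "__main__":
    for name, ok in preflight().items():
        print(f"{name}: {'[OK]' if ok else '[MISSING]'}")
```

This only mirrors the table; the `[OK]` statuses there still come from the skill's own verification.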

3.1 Installation Commands

```bash
# Install PyTorch (CPU or GPU); quote the spec so the shell
# does not treat ">=" as a redirection
pip3 install "torch>=1.9"

# Install numpy for data processing
pip3 install numpy

# Install Qdrant client
pip3 install qdrant-client

# Verify all installations
python3 -c "import torch; import numpy; import qdrant_client; print('All OK')"
```

Note: PyTorch is required for all KAN operations (training, inference, model manipulation).

3.2 Environment Variables

| Variable | Default | Description |
|---|---|---|
| KAN_MODEL_DIR | models/ | Directory for KAN model files |
| QDRANT_HOST | localhost | Qdrant server host |
| QDRANT_PORT | 6333 | Qdrant server port |
| OLLAMA_HOST | localhost | Ollama server host |
| OLLAMA_PORT | 11434 | Ollama server port |
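These variables can be read with the stdlib `os.environ`, falling back to the documented defaults. A sketch (the `load_config` helper is illustrative, not a real module of this skill):

```python
import os

def load_config() -> dict:
    """Read KAN system settings from the environment,
    using the defaults documented in the table above."""
    return {
        "model_dir": os.environ.get("KAN_MODEL_DIR", "models/"),
        "qdrant_host": os.environ.get("QDRANT_HOST", "localhost"),
        "qdrant_port": int(os.environ.get("QDRANT_PORT", "6333")),
        "ollama_host": os.environ.get("OLLAMA_HOST", "localhost"),
        "ollama_port": int(os.environ.get("OLLAMA_PORT", "11434")),
    }
```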

4. Tools

Core Scripts

| Tool | Path | Purpose | Example |
|---|---|---|---|
| kan_creator.py | scripts/ | Create new KAN concepts | `python3 kan_creator.py --name stc --role "emotion"` |
| kan_trainer.py | scripts/ | Train KAN models | `python3 kan_trainer.py --kan stc --epochs 50` |
| kan_assembler.py | scripts/ | Assemble KAN connections | `python3 kan_assembler.py --pipeline "stc→syn→w7"` |
| kan_health.py | scripts/ | Health check | `python3 kan_health.py --kan stc --verbose` |
| watchdog_unified_trainer.py | <skill-dir>/scripts/ | Train all 19 KANs | `python3 watchdog_unified_trainer.py --all --epochs 200` |

External Integration

| Tool | Path | Purpose | Status |
|---|---|---|---|
| kan_auto_task.py | references/auto-task/ | 13 KANs auto-optimization | [OK] |
| autoresearch_task.py | references/auto-task/ | Research→Vaccine→KAN pipeline | [OK] |
| l9_l6_bridge.py | references/ | L9-L6 bridge | [OK] |

5. Quick Start

5.1 Create a KAN Concept

```bash
cd <skill-directory>
python3 scripts/kan_creator.py --name my_watchdog --role "monitoring"
```

Expected output:

```
✅ KAN concept 'my_watchdog' created
📁 Directory: scripts/my_watchdog/
📋 Config: scripts/my_watchdog/config.json
🧠 Model: scripts/my_watchdog/models/my_watchdog_kan.pt
```

5.2 Train a KAN Model

```bash
# Train specific KAN
python3 scripts/kan_trainer.py --kan stc --epochs 50 --batch-size 32

# Check health
python3 scripts/kan_trainer.py --check-health
```

Expected output:

```
🔄 Training stc...
    Epoch 10/50: Loss = 0.0856
    Epoch 20/50: Loss = 0.0233
    Epoch 50/50: Loss = 0.0175
✅ stc trained and saved!
```

Python API:

```python
from kan_trainer import KANTrainer

trainer = KANTrainer(kan_name='stc')
trainer.train(epochs=50, batch_size=32)
health = trainer.check_health()
print(f'KAN health: {health}')
```

5.3 Assemble KAN Pipeline

```bash
# Create KAN pipeline
python3 scripts/kan_assembler.py --pipeline "stc→syn→w7" --output pipeline.json

# Connect two KANs
python3 scripts/kan_assembler.py --connect stc --with flx --mode serial

# List all KANs
python3 scripts/kan_assembler.py --list
```
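Conceptually, the assembler turns a spec like `"stc→syn→w7"` into an ordered pipeline config. A hedged sketch of that parsing step (the function names, the KAN subset, and the JSON layout are assumptions for illustration; the real `kan_assembler.py` output format may differ):

```python
import json

# Subset of KAN names, for illustration only.
KNOWN_KANS = {"stc", "syn", "flx", "w7", "vls", "abs", "clw", "ics"}

def parse_pipeline(spec: str) -> list:
    """Split an 'a→b→c' pipeline spec into an ordered list of KAN names."""
    stages = [s.strip() for s in spec.replace("->", "→").split("→")]
    unknown = [s for s in stages if s not in KNOWN_KANS]
    if unknown:
        raise ValueError(f"unknown KAN(s): {unknown}")
    return stages

def pipeline_config(spec: str, mode: str = "serial") -> str:
    """Render a pipeline spec as a JSON config, one link per adjacent pair."""
    stages = parse_pipeline(spec)
    links = [{"from": a, "to": b, "mode": mode}
             for a, b in zip(stages, stages[1:])]
    return json.dumps({"stages": stages, "links": links}, indent=2)
```

Both the ASCII `->` and the arrow `→` separators are accepted in this sketch.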

5.4 Health Check

```bash
# Check all KANs
cd <skill-directory>
python3 scripts/kan_health.py --all

# Run test suite (5/5 tests MUST PASS)
python3 tests/test_kan_system.py
```

Expected output:

```
╔═══════════════════════════════════════════════════════════╗
║  🏥 KAN HEALTH CHECK                                      ║
╠═══════════════════════════════════════════════════════════╣
║  ✅ stc — HEALTHY — Loss 0.0175 within threshold          ║
║  ✅ syn — HEALTHY — Loss 0.0152 within threshold          ║
║  ❌ clw — DEGRADED — Loss 0.1245 > 0.1 threshold          ║
║  💡 clw needs retraining                                  ║
╚═══════════════════════════════════════════════════════════╝
```

5.5 Unified Watchdog Trainer (NEW!)

Train all 19 KANs at once with the unified trainer:

```bash
# Train ALL 19 KANs (14 watchdogs + 5 L9 Swarm)
cd <skill-directory>
python3 scripts/watchdog_unified_trainer.py --all --epochs 200 --samples 300

# Check status of all KANs
python3 scripts/watchdog_unified_trainer.py --status

# Train specific KAN
python3 scripts/watchdog_unified_trainer.py --kan STC --epochs 100

# Train only watchdogs (14)
python3 scripts/watchdog_unified_trainer.py --watchdogs

# Train only L9 Swarm (5)
python3 scripts/watchdog_unified_trainer.py --swarm
```

19 KANs covered:

| KAN | Typical Path | Role |
|---|---|---|
| STC | <AGENT_A>/stc_watchdog/models/ | Sovereign Threshold of Consciousness |
| SYN | <AGENT_A>/syn_watchdog/models/ | Spatial/Synchron Awareness |
| FLX | <AGENT_A>/flx_watchdog/models/ | FLX Privacy Filter |
| W7 | <AGENT_A>/w7_watchdog/models/ | W7 Watchdog |
| EVAL_KAN | <AGENT_A>/skills/axioma-skill-evaluator/models/ | Skill Evaluation |
| AKEP | <AGENT_A>/Axioma Projects/L7_MORGANA/models/ | AKEP Model |
| VLS | <AGENT_B>/vls_watchdog/models/ | Validation watchdog |
| ABS | <AGENT_B>/abs_watchdog/models/ | Abstraction watchdog |
| CLW | <AGENT_C>/skills/axiomata-cluster-guardian/models/ | Cluster Guardian |
| ICS | <AGENT_C>/ics_watchdog/models/ | Integrity of Structure |
| SKILL_KAN | <AGENT_C>/deep_memory/hybrid_kan/models/ | Deep Memory Skill KAN |
| T_KAN | <AGENT_C>/deep_memory/models/ | Temporal KAN |
| RESEARCH_KAN | <AGENT_A>/autoresearch/models/ | Research KAN |
| FLX_PRIVACY | <AGENT_A>/flx_privacy_filter/models/ | FLX Privacy |
| PMB | <KAN_SWARM>/models/kan_pmb.pth | Project Memory Bank |
| FILE | <KAN_SWARM>/models/kan_file.pth | File Indexer |
| MODL | <KAN_SWARM>/models/kan_modl.pth | Module Analyzer |
| ENV | <KAN_SWARM>/models/kan_env.pth | Environment Scanner |
| CREA | <KAN_SWARM>/models/kan_crea.pth | Creative Memory |

Note: <AGENT_A>, <AGENT_B>, <AGENT_C> represent agent-specific workspace directories. <KAN_SWARM> represents the L9 Deep Memory Swarm directory.

Cron schedule: training runs nightly at 2 AM; a status check runs at 8 AM.


6. KAN Core Concepts

What is KAN?

```
╔═══════════════════════════════════════════════════════════╗
║  KAN = Kolmogorov-Arnold Networks                         ║
╠═══════════════════════════════════════════════════════════╣
║                                                           ║
║  Traditional MLP:  y = σ(Wx + b)                          ║
║  KAN (one layer):  y = Σᵢ φᵢ(xᵢ)                          ║
║                                                           ║
║  Difference: KAN uses learnable activation functions      ║
║  instead of fixed ones. Each weight is a function, not    ║
║  a scalar.                                                ║
║                                                           ║
╚═══════════════════════════════════════════════════════════╝
```

KAN Architecture Parameters

| Parameter | Default | Description |
|---|---|---|
| input_size | 768 | Input dimension (embedding from Ollama) |
| hidden_size | 32 | Hidden layer width |
| output_size | 3 | Output dimension (STC/SYN/FLX triplet) |
| grid_size | 5 | B-spline grid size |
| k | 3 | B-spline order |
| layers | [768, 32, 16, 8, 4, 3] | Layer dimensions |
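To make the "each weight is a function" idea concrete, here is a minimal NumPy sketch of a KAN-style layer. It substitutes a piecewise-linear (hat-function) basis for the real B-splines of order `k`, and is illustrative only — not the architecture used by `kan_trainer.py`:

```python
import numpy as np

class KANLayer:
    """Minimal KAN-style layer: every input->output edge carries its
    own learnable univariate function, here a piecewise-linear spline
    on a fixed grid (the real system uses B-splines of order k)."""

    def __init__(self, in_dim, out_dim, grid_size=5, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = np.linspace(-1.0, 1.0, grid_size)  # knot positions
        # One coefficient per (output, input, knot): the edge functions.
        self.coef = rng.normal(0.0, 0.1, (out_dim, in_dim, grid_size))

    def _basis(self, x):
        """Hat-function basis values: (batch, in) -> (batch, in, grid)."""
        d = np.abs(x[..., None] - self.grid)   # distance to each knot
        width = self.grid[1] - self.grid[0]
        return np.clip(1.0 - d / width, 0.0, None)

    def forward(self, x):
        """y_j = sum_i phi_{j,i}(x_i), each phi a spline in the basis."""
        b = self._basis(x)                     # (batch, in, grid)
        return np.einsum("big,oig->bo", b, self.coef)
```

Stacking such layers with the documented dimensions `[768, 32, 16, 8, 4, 3]` would give the full architecture shape.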

7. Error Handling

Connection Errors

| Error | Cause | Fix |
|---|---|---|
| ConnectionError: Qdrant | Qdrant not running | `sudo /path/to/qdrant --config-path <config> &` |
| Connection refused: 7334 | Qdrant crashed | Check watchdog: `ps aux \| grep qdrant` |
| Connection refused: 11434 | Ollama not running | `sudo systemctl restart ollama` |

Model Errors

| Error | Cause | Fix |
|---|---|---|
| FileNotFoundError: model | Model doesn't exist | Run `kan_trainer.py --train <kan>` |
| Missing key(s) in state_dict | Model architecture mismatch | Recreate model or check layers |
| DimensionError: expected 768 | Wrong input_size | Check `input_size=768` in config |

Training Errors

| Error | Cause | Fix |
|---|---|---|
| Loss > 0.1 | KAN degraded | Run `kan_auto_task.py --train <kan>` |
| OOM: out of memory | Insufficient memory | Reduce batch_size or epochs |
| CUDA out of memory | GPU memory full | Use CPU: `export CUDA_VISIBLE_DEVICES=""` |
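The batch-size fix for OOM errors can be automated. A framework-agnostic sketch (in real PyTorch code you would catch `torch.cuda.OutOfMemoryError`, a `RuntimeError` subclass, rather than the built-in `MemoryError` used here):

```python
def train_with_fallback(train_step, batch_size=32, min_batch=4):
    """Run train_step(batch_size); on an out-of-memory error,
    halve the batch size and retry, as the table above suggests."""
    while True:
        try:
            return train_step(batch_size), batch_size
        except MemoryError:
            if batch_size // 2 < min_batch:
                raise  # too small to shrink further; give up
            batch_size //= 2  # retry with a smaller batch
```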

8. Constraints

KAN Limits

| Limit | Value | Description |
|---|---|---|
| Max main KANs | 13 | STC, SYN, FLX, W7, VLS, ABS, CLW, ICS, SKILL_KAN, EVAL_KAN, T-KAN, RESEARCH_KAN, AKEP |
| Max sub-KANs | unlimited | Can create custom KANs |
| KAN output dimension | 2-3 | Standard is 3; T-KAN special is 2 |

Training Limits

| Limit | Value | Description |
|---|---|---|
| Min training samples | 100 | Generated or real data |
| Max epochs | 100 | Prevents overfitting |
| Default batch_size | 32 | Balances speed and memory |
| Learning rate | 0.001 | Adam optimizer default |
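These limits can be wired directly into a training loop. A toy NumPy sketch with a linear model standing in for the KAN, using plain gradient descent rather than Adam (illustrative only; `kan_trainer.py` uses PyTorch):

```python
import numpy as np

def train(x, y, epochs=100, batch_size=32, lr=0.001, min_samples=100):
    """Toy training loop wired to the documented limits:
    at least 100 samples, at most 100 epochs, batch 32, lr 0.001."""
    if len(x) < min_samples:
        raise ValueError(f"need >= {min_samples} samples, got {len(x)}")
    epochs = min(epochs, 100)          # hard cap against overfitting
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for i in range(0, len(x), batch_size):
            xb, yb = x[i:i + batch_size], y[i:i + batch_size]
            grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # MSE gradient
            w -= lr * grad
    return w
```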

Technical Requirements

| Requirement | Description |
|---|---|
| PyTorch version | >= 1.9 required |
| Qdrant ports | 6333, 6336, 7334 (configurable) |
| Ollama embedding | 768D required |
| Memory | At least 4 GB available |

9. Performance Benchmarks

| Operation | Expected Time | Timeout | Standard |
|---|---|---|---|
| Create KAN | < 5 sec | 15 sec | Dir + config generated |
| Train KAN (50 epochs) | < 2 min | 5 min | Loss < 0.1 |
| Health check | < 10 sec | 30 sec | All 13 KANs checked |
| Pipeline assembly | < 10 sec | 30 sec | JSON config generated |
| AutoResearch→KAN | < 2 min | 5 min | Research + vaccine + training |
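The expected times can be checked with a small timing harness. A sketch (a hypothetical helper, not shipped with the skill) that reports best and mean wall-clock time against a timeout:

```python
import time

def benchmark(fn, *args, timeout_s=30.0, repeats=5):
    """Time fn over several runs; report best and mean wall-clock
    seconds and whether the best run beat the timeout."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    best = min(times)
    return {"best_s": best,
            "mean_s": sum(times) / len(times),
            "within_timeout": best <= timeout_s}
```

For example, `benchmark(run_health_check, timeout_s=30.0)` would validate the health-check row, assuming a `run_health_check` callable wrapping the script.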

10. Related Files

Core Files

| File | Path | Description |
|---|---|---|
| SKILL.md | <skill-dir>/SKILL.md | This skill documentation |
| kan_creator.py | scripts/kan_creator.py | KAN creation script |
| kan_trainer.py | scripts/kan_trainer.py | KAN training script |
| kan_assembler.py | scripts/kan_assembler.py | KAN assembly script |
| kan_health.py | scripts/kan_health.py | KAN health check script |

Integration Files

| File | Path | Description |
|---|---|---|
| kan_auto_task.py | references/auto-task/kan_auto_task.py | 13 KANs auto-optimization |
| autoresearch_task.py | references/auto-task/autoresearch_task.py | Research→Vaccine→KAN pipeline |
| l9_l6_bridge.py | references/l9_l6_bridge.py | L9-L6 bridge |

Troubleshooting

**Q:** KAN training fails with a CUDA error.
**A:** Use CPU mode: `python3 scripts/kan_trainer.py --kan stc --device cpu`

**Q:** Qdrant connection refused?
**A:** Check that Qdrant is running: `sudo systemctl status qdrant`

**Q:** KAN not auto-evolving?
**A:** Check the cron job with `crontab -l` and ensure `kan_auto_task.py` is scheduled.

**Q:** Low KAN quality score?
**A:** Retrain: `python3 scripts/kan_trainer.py --kan <name> --epochs 1000 --auto-evolve`

**Q:** Model file not found?
**A:** Check that the `models/` directory exists and contains `.pt` files.


11. References

For detailed information, see:

| Reference | Description |
|---|---|
| references/kan-concepts.md | KAN internal structure and math |
| references/pipeline-architecture.md | Pipeline assembly details |
| references/auto-task.md | AutoResearch integration |
| references/kan-list.md | 13 KANs inventory |

11.1 Advanced: LLM → KAN Knowledge Transfer

DISCOVERED 2026-05-13 — a breakthrough pattern for building efficient KANs: an LLM serves as the teacher (originally "cobaye") for a student KAN (originally "apprenti"):

```
╔═══════════════════════════════════════════════════════════╗
║  💡 LLM → KAN KNOWLEDGE TRANSFER PATTERN                  ║
╠═══════════════════════════════════════════════════════════╣
║                                                           ║
║  LLM (teacher)  → Generates diverse training examples     ║
║       ↓                                                   ║
║  KAN (student)  → Learns patterns from the examples       ║
║       ↓                                                   ║
║  KAN standalone → Operates WITHOUT the LLM,               ║
║                   faster and cheaper                      ║
║                                                           ║
╚═══════════════════════════════════════════════════════════╝
```

Why This Works

| Aspect | LLM | KAN |
|---|---|---|
| Speed | ~100 ms per query | < 1 ms inference |
| Cost | API calls, GPU | One-time training, CPU |
| Specialization | Generalist | Specialist (trained on a domain) |
| Context | Needs full context | Learns patterns |

Use Cases

| Domain | LLM (teacher) | KAN (student) | Example |
|---|---|---|---|
| Privacy | Gemma 3 1B | FLX KAN | Detect private data ✅ EXAMPLE |
| Code Quality | Gemma | Code-KAN | Detect good/bad code |
| Security | LLM | VLS-KAN | Detect vulnerabilities |
| Spam | LLM | Spam-KAN | Classify spam/ham |
| Medical | LLM | Medical-KAN | Pattern detection |

Implementation Pattern

```bash
# 1. Generate diverse examples using the teacher LLM
python3 generate_diverse_data.py --llm gemma3:1b --count 100 --output data/examples.json

# 2. Train the KAN on those examples
python3 kan_trainer.py --kan flx_privacy --examples data/examples.json --epochs 5000

# 3. Use the KAN standalone (no LLM needed!)
python3 flx_privacy_filter.py --input "SSN 123-45-6789"  # Returns HIGH-RISK directly
```
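Step 1 can be implemented against Ollama's documented `/api/generate` endpoint. A sketch of the teacher call (the `build_prompt` wording and label set are assumptions for illustration; `generate_diverse_data.py` may prompt differently):

```python
import json
import urllib.request

def ollama_generate(prompt: str, model: str = "gemma3:1b",
                    host: str = "localhost", port: int = 11434) -> str:
    """One teacher-LLM call via Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(
        f"http://{host}:{port}/api/generate", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def build_prompt(n: int, domain: str) -> str:
    """Ask the teacher for n labeled examples, one JSON object per line."""
    return (f"Generate {n} diverse {domain} examples. "
            "Answer with one JSON object per line: "
            '{"text": "...", "label": "SAFE|SENSITIVE|HIGH-RISK"}')
```

The returned lines would then be parsed and fed to `kan_trainer.py` as the training set.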

FLX Privacy Example (Tested)

```
Gemma 3 1B (teacher) → generated 97 privacy examples
                              ↓
                    FLX KAN trained (8000 epochs)
                              ↓
         FLX KAN detects: SAFE/SENSITIVE/HIGH-RISK
         WITHOUT Gemma — 3/3 official tests PASSED ✅
```

Code Template: LLM to KAN Transfer

```python
import torch

from flx_privacy_trainer import PrivacyFLXKAN, message_to_features

# After training, use the KAN alone (no LLM needed!)
model = PrivacyFLXKAN()
checkpoint = torch.load('models/privacy_flx.pth')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()

# Fast inference (<1 ms vs ~100 ms for an LLM)
text = "The SSN is 123-45-6789"
features = message_to_features(text)
prediction = model(features)  # No LLM calls!
```

Benefits Summary

| Metric | LLM Only | LLM → KAN Transfer |
|---|---|---|
| Latency | ~100 ms | < 1 ms |
| Cost per query | $0.001+ | $0 (after training) |
| GPU required | Yes | No |
| Specialization | Low | High (trained on domain) |

In Altum Per KAN. 🧠 AXIOMA KAN SYSTEM v1.5 — 19 KANs UNIFIED TRAINER + LLM→KAN TRANSFER

License: MIT License


12. Support

For help with the Axioma KAN System:

| Channel | Contact |
|---|---|
| Documentation | See this SKILL.md and the references/ folder |
| Troubleshooting | See Section 7 (Error Handling) |
| KAN Health | Run `python3 scripts/kan_health.py` |
| Cluster Support | Contact your cluster administrator |

Quick Help Commands

```bash
# Check KAN status
python3 scripts/kan_health.py

# List all KANs
python3 scripts/kan_assembler.py --list

# Check training logs
cat logs/kan_training.log

# Verify installation
python3 -c "import torch; print(f'PyTorch {torch.__version__}')"
```

Changelog

| Version | Date | Changes |
|---|---|---|
| 1.5.0 | 2026-05-14 | Added unified watchdog_unified_trainer.py — 19 KANs (14 watchdogs + 5 L9 Swarm), cron training at 2 AM daily |
| 1.4.0 | 2026-05-13 | Added LLM→KAN Knowledge Transfer pattern (Section 11.1) |
| 1.3.0 | 2026-05-13 | Added Python API examples, troubleshooting section |
| 1.2.0 | 2026-05-12 | Added 13 KANs auto-evolution, AutoResearch pipeline |
| 1.1.0 | 2026-05-11 | Added T-KAN integration, health monitoring |
| 1.0.0 | 2026-05-10 | Initial release with 4 core scripts |