Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nima Core

v3.1.5

Neural Integrated Memory Architecture — Persistent memory, emotional intelligence, and semantic recall for AI agents. Memory pruner, VADER affect, 5 embeddin...

4 stars · 2.7k downloads · 3 current · 4 all-time
by Lilu (@dmdorta1111)
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description (persistent memory, affect, recall) align with the code and hooks present (nima-core Python package + OpenClaw hooks). Requested binaries (python3, node) and the declared read/write paths (~/.openclaw sessions and ~/.nima/) match the stated purpose. However, the package includes many code files but the registry entry lists no install spec — and the bundled installer requires git and Python >=3.10 while README/docs mention other Python versions. These mismatches are unexpected and merit caution.
Instruction Scope
SKILL.md clearly instructs the agent to install hooks, read session transcripts (~/.openclaw/agents/*/sessions/*.jsonl) and write persistent state to ~/.nima/. It documents conditional network calls to embedding providers (voyage/openai/ollama) and provides opt-in controls. The runtime instructions therefore do request sensitive data (agent session transcripts) and conditional external network use — but those actions are documented and coherent with a memory plugin.
Install Mechanism
Registry metadata indicated 'instruction-only' (no install spec), but the bundle includes an install.sh and many files that will be written to disk and deployed to ~/.openclaw/extensions. install.sh clones from GitHub, runs pip installs (including attempting to install real-ladybug), may initialize databases, and writes ~/.nima/.env. The installer also requires git and Python 3.10+, which are not listed in the registry requirements. The mismatch between a declared 'no install spec' and a bundled installer that clones, installs, and writes files increases risk and surprise for users.
Credentials
The registry lists only NIMA_DATA_DIR as required and the skill documents optional env vars (NIMA_EMBEDDER, VOYAGE_API_KEY, OPENAI_API_KEY, plus other LLM-related vars in code). These envs are proportionate to embedding/LLM features. Be aware that enabling non-local embedders or the memory pruner (LLM distillation) will require API keys and will send text externally; the default local embeddings mode does not. Also some canonical LLM env vars appear in code but were not declared in the registry's required list — review which envs the runtime actually reads.
Persistence & Privilege
always:false and user-invocable:true. The skill installs persistent OpenClaw hooks (writes to ~/.openclaw/extensions) so it will run on agent events as intended for a memory plugin. That level of persistence is expected for this purpose and is documented. No evidence it modifies other skills' configurations beyond adding its own hooks, but the installer writes to ~/.nima/.env and copies hooks into the OpenClaw extensions directory — review and backup OpenClaw config before installing.
Scan Findings in Context
[pre-scan-injection-signals-none] expected: Automated pre-scan reported no injection signatures. The repository includes a SECURITY.md describing Cypher injection risks and implemented mitigations (escape and whitelist functions), which is expected for a graph DB backend.
[version_mismatch_files] unexpected: Multiple inconsistent version strings appear (registry 3.1.5, install.sh header v3.3.3, __init__.py __version__=2.3.0, SKILL.md shows mixed versions). This is not a regex scan finding but a packaging inconsistency that raises supply-chain/maintenance concerns.
[installer_requires_git_not_declared] unexpected: install.sh requires git and enforces Python >=3.10, but registry required binaries only list python3 and node; git and a specific Python version are not declared as required in the registry metadata. Unexpected installer prerequisites increase install-time surprise.
What to consider before installing
This skill largely does what it says (captures OpenClaw session transcripts, stores local memory, optional external embeddings, and affect analysis), but several red flags merit review before installing:

  • Packaging/metadata mismatches: version numbers and the declared install method disagree across files. Treat the bundle as unvetted source; prefer installing from an upstream GitHub repo you trust, and confirm tags/commits.
  • Installer behavior: install.sh will clone, pip-install packages (possibly system-wide), initialize databases, and copy hooks into ~/.openclaw/extensions. It also writes ~/.nima/.env. Run it in a sandbox or a dedicated VM/container, or inspect it and run the steps manually (use a venv; avoid --break-system-packages).
  • Sensitive data flow: by design, the hooks read agent session transcripts (~/.openclaw/agents/*/sessions/*.jsonl). If you enable non-local embedders or the memory-pruner LLM distillation, text will be sent to external services (voyage.ai, openai.com, etc.). To avoid exfiltration, keep NIMA_EMBEDDER=local and audit the code paths that call external APIs.
  • Undeclared prerequisites: the installer requires git and enforces Python >=3.10, but the registry does not list git among the required binaries. Ensure your environment meets the installer's checks, or perform a manual install.
  • Audit before enabling: review install.sh and the OpenClaw hook JS entry points that run on agent events. Back up ~/.openclaw/openclaw.json and your OpenClaw environment before activating the plugin.

To proceed safely: clone the repo yourself, inspect the install script and hook code, run pip installs inside a virtual environment, set NIMA_EMBEDDER=local, point NIMA_DATA_DIR at an isolated path, and test in a non-production agent sandbox. If you need help verifying specific files (hooks or network calls), provide those files for a detailed review.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Bins: python3, node
Env: NIMA_DATA_DIR
2.7k downloads
4 stars
49 versions
Updated 10h ago
v3.1.5
MIT-0

NIMA Core 2.3

Neural Integrated Memory Architecture — A complete memory system for AI agents with emotional intelligence.

Website: https://nima-core.ai GitHub: https://github.com/lilubot/nima-core

🚀 Quick Start

# Install
pip install nima-core

# Or with LadybugDB (recommended for production)
pip install nima-core[vector]

# Set embedding provider
export NIMA_EMBEDDER=voyage
export VOYAGE_API_KEY=your-key

# Install hooks
./install.sh --with-ladybug

# Restart OpenClaw
openclaw restart

🔒 Privacy & Permissions

Data Access:

  • ✅ Reads session transcripts from ~/.openclaw/agents/*/sessions/*.jsonl
  • ✅ Writes to local storage at ~/.nima/ (databases, affect history, embeddings)

Network Calls (conditional on embedder choice):

  • 🌐 Voyage API — Only when NIMA_EMBEDDER=voyage (sends text for embeddings)
  • 🌐 OpenAI API — Only when NIMA_EMBEDDER=openai (sends text for embeddings)
  • 🔒 Local embeddings — Default (NIMA_EMBEDDER=local), no external API calls
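
The conditional network behavior above amounts to a dispatch on NIMA_EMBEDDER. A minimal sketch of that logic, with an invented function name (this is not the package's actual code):

```python
import os

def pick_embedder():
    """Choose an embedding backend from NIMA_EMBEDDER.

    Illustrative sketch of the dispatch documented above; the package's
    real internals may differ, and this function name is invented.
    """
    choice = os.environ.get("NIMA_EMBEDDER", "local")
    if choice == "voyage":
        # Would send text to api.voyageai.com using VOYAGE_API_KEY:
        # text leaves the machine.
        return "voyage"
    if choice == "openai":
        # Would send text to api.openai.com using OPENAI_API_KEY:
        # text leaves the machine.
        return "openai"
    # Anything else falls back to local embeddings: no external calls.
    return "local"
```

Note that this section claims a local default, while the Configuration table further down lists voyage as the default; that inconsistency is worth verifying in the actual code before relying on the privacy claim.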

Opt-in Controls:

// openclaw.json
{
  "plugins": {
    "entries": {
      "nima-memory": {
        "enabled": true,
        "skip_subagents": true,      // Exclude subagent sessions (default)
        "skip_heartbeats": true,      // Exclude heartbeat checks (default)
        "noise_filtering": {
          "filter_heartbeat_mechanics": true,
          "filter_system_noise": true
        }
      }
    }
  }
}

Privacy Defaults:

  • Subagent sessions excluded
  • Heartbeat/system noise filtered
  • Local embeddings (no external calls)
  • All data stored locally

To disable: Remove nima-memory from plugins.allow in openclaw.json

What's New in 2.1

VADER Affect Analyzer

  • Contextual Analysis: Caps boost (1.5x), punctuation emphasis (!!!), negation handling, degree modifiers
  • 30+ Idiom Recognition: Understands phrases like "not bad", "kind of", "sort of"
  • Panksepp 7-Affect Mapping: Direct mapping from VADER sentiment to SEEKING, RAGE, FEAR, LUST, CARE, PANIC, PLAY
  • Guardian Archetype Transformation: User anger → Agent concern/care response modulation
  • Replaces previous lexicon-based emotion detection

Noise Remediation (4-Phase)

  1. Empty Validation — Filters out null/empty messages
  2. Heartbeat Filters — Excludes system noise (HEARTBEAT_OK, polling messages)
  3. Deduplication — Removes duplicate content within sessions
  4. Metrics Collection — Tracks capture quality and filter effectiveness
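
The four phases above can be sketched as a single pass over captured messages. Names and filter rules here are illustrative assumptions, not the package's implementation:

```python
def remediate(messages):
    """Toy 4-phase noise filter modeled on the description above."""
    metrics = {"in": len(messages), "out": 0}
    seen = set()
    kept = []
    for msg in messages:
        text = (msg or "").strip()
        if not text:                       # Phase 1: drop null/empty messages
            continue
        if text.startswith("HEARTBEAT"):   # Phase 2: drop heartbeat/system noise
            continue
        if text in seen:                   # Phase 3: dedup within the session
            continue
        seen.add(text)
        kept.append(text)
    metrics["out"] = len(kept)             # Phase 4: capture-quality metrics
    return kept, metrics
```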

Performance Improvements

  • LadybugDB Circular Import Fix: Resolved import issues in LadybugDB backend
  • Increased Token Budget: Recall budget increased from 500 to 3000 tokens
  • Connection Pooling: Improved connection management for LadybugDB backend

What's New in 2.0

LadybugDB Backend

  • 3.4x faster text search (9ms vs 31ms)
  • Native vector search with HNSW (18ms)
  • 44% smaller database (50MB vs 91MB)
  • Graph traversal with Cypher queries

Security Hardened

  • Query sanitization (FTS5, SQL injection prevention)
  • Path traversal protection
  • Temp file cleanup
  • Error handling throughout

Thread Safe

  • Singleton pattern with double-checked locking
  • API timeouts (30s Voyage, 10s LadybugDB)
  • Connection pooling ready

348 Tests

  • Full unit test coverage
  • Thread safety verified
  • Edge cases covered

Architecture

OPENCLAW HOOKS
├── nima-memory      — Three-layer capture with 4-phase noise remediation
├── nima-recall-live — Lazy recall injection (before_agent_start)
└── nima-affect      — VADER-based real-time affect analysis

PYTHON CORE
├── nima_core/cognition/
│   ├── dynamic_affect.py       — Panksepp 7-affect system
│   ├── personality_profiles.py — JSON personality configs
│   ├── vader_affect.py         — VADER sentiment analyzer (NEW v2.1)
│   └── archetypes.py           — Baseline affect profiles
└── scripts/
    ├── nima_ladybug_backend.py — LadybugDB CLI
    └── ladybug_parallel.py     — Parallel migration

DATABASE (SQLite or LadybugDB)
├── memory_nodes   — Messages with embeddings
├── memory_edges   — Graph relationships
└── memory_turns   — Conversation turns
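
A minimal SQLite sketch of this three-table layout; the column names are assumptions for illustration, since the real schema is not shown here:

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a toy version of the nodes/edges/turns layout described above."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS memory_nodes (
            id INTEGER PRIMARY KEY,
            content TEXT NOT NULL,
            embedding BLOB                              -- serialized vector
        );
        CREATE TABLE IF NOT EXISTS memory_edges (
            src INTEGER REFERENCES memory_nodes(id),    -- graph relationship
            dst INTEGER REFERENCES memory_nodes(id),
            relation TEXT
        );
        CREATE TABLE IF NOT EXISTS memory_turns (
            id INTEGER PRIMARY KEY,
            session TEXT,                               -- conversation turn
            node_id INTEGER REFERENCES memory_nodes(id)
        );
    """)
    return conn
```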

Performance

Metric                SQLite     LadybugDB
Text Search           31ms       9ms (3.4x)
Vector Search         External   18ms (native)
Context Tokens        ~180       ~30 (6x smaller)
Recall Token Budget   500        3000 (v2.1+)

API

from nima_core import DynamicAffectSystem, get_affect_system
from nima_core.cognition.vader_affect import VaderAffectAnalyzer

# Get singleton instance (thread-safe)
affect = get_affect_system(identity_name="lilu")

# Process input and get affect state
state = affect.process_input("I'm so excited about this project!")
print(state.current)  # {"SEEKING": 0.72, "PLAY": 0.65, ...}

# Use VADER analyzer directly
analyzer = VaderAffectAnalyzer()
result = analyzer.analyze("This is AMAZING!!!")
print(result.affects)  # {'PLAY': 0.78, 'SEEKING': 0.71, ...}

# Recall memories (via hooks - automatic)
# Or manually via CLI:
# nima-query who_search "David" --limit 5
# nima-query text_search "project" --limit 5

Configuration

Variable         Default   Description
NIMA_DATA_DIR    ~/.nima   Memory storage path
NIMA_EMBEDDER    voyage    voyage, openai, or local
VOYAGE_API_KEY   (none)    Required for Voyage
NIMA_LADYBUG     0         Set 1 for LadybugDB backend

Hooks

nima-memory (Capture)

  • Captures input, contemplation, output on every turn
  • 4-phase noise remediation (empty validation, heartbeat filters, dedup, metrics)
  • Stores to SQLite or LadybugDB
  • Computes and stores embeddings

nima-recall-live (Recall)

  • Injects relevant memories before agent starts
  • Lazy loading — only top N results
  • Deduplicates with injected context
  • Token budget: 3000 (increased from 500 in v2.1)
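
Budgeted, deduplicated recall injection might look like the following sketch; the function name and (text, tokens) representation are assumptions, not the hook's actual code:

```python
def recall(candidates, injected, budget=3000):
    """Pick top-ranked memories under a token budget, skipping anything
    already present in the injected context (illustrative sketch).

    `candidates` are (text, token_count) pairs in relevance order.
    """
    seen = set(injected)
    picked, used = [], 0
    for text, tokens in candidates:
        if text in seen or used + tokens > budget:
            continue  # deduplicate and respect the budget
        picked.append(text)
        seen.add(text)
        used += tokens
    return picked
```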

nima-affect (Emotion)

  • VADER-based real-time affect analysis from text
  • Contextual analysis (caps, punctuation, negation, degree modifiers)
  • 30+ idiom recognition
  • Maintains Panksepp 7-affect state
  • Guardian archetype transformation (user anger → agent care)
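
A toy version of the sentiment-to-affect mapping, with invented thresholds and weights (the package's actual mapping is not documented here). It illustrates the guardian transformation: negative user sentiment produces agent CARE rather than mirrored RAGE:

```python
def map_affect(compound):
    """Map a VADER-style compound score (-1..1) onto a few Panksepp affects.

    Thresholds (0.05, VADER's conventional neutrality band) and weights
    are illustrative guesses, not the package's values.
    """
    state = {"SEEKING": 0.0, "CARE": 0.0, "PLAY": 0.0, "RAGE": 0.0, "FEAR": 0.0}
    if compound >= 0.05:                       # positive: excitement/engagement
        state["SEEKING"] = round(compound, 2)
        state["PLAY"] = round(compound * 0.9, 2)
    elif compound <= -0.05:                    # negative: guardian transform,
        state["CARE"] = round(-compound, 2)    # user anger becomes agent care
    return state
```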

Installation Options

SQLite (Development)

pip install nima-core
./install.sh

LadybugDB (Production)

pip install nima-core[vector]
./install.sh --with-ladybug

Documentation

Guide                         Description
README.md                     Full system overview
SETUP_GUIDE.md                Step-by-step installation
docs/DATABASE_OPTIONS.md      SQLite vs LadybugDB
docs/EMBEDDING_PROVIDERS.md   Voyage, OpenAI, Local
MIGRATION_GUIDE.md            Migrate from old versions
CHANGELOG.md                  Release history

Security & Privacy

Data Access

This plugin accesses:

  • ~/.openclaw/agents/.../*.jsonl — Session transcripts (for memory capture)
  • ~/.nima/ — Local memory database (SQLite or LadybugDB)
  • ~/.openclaw/extensions/ — Hook installation

Network Calls

Embeddings are sent to external APIs:

  • Voyage AI (api.voyageai.com) — Default embedding provider
  • OpenAI (api.openai.com) — Optional embedding provider
  • Local — No external calls when using sentence-transformers

Required Environment Variables

Variable         Purpose                    Required
NIMA_EMBEDDER    voyage, openai, or local   No (default: voyage)
VOYAGE_API_KEY   Voyage AI authentication   If using Voyage
OPENAI_API_KEY   OpenAI authentication      If using OpenAI
NIMA_DATA_DIR    Memory storage path        No (default: ~/.nima)
NIMA_LADYBUG     Use LadybugDB backend      No (default: 0)

Installation Script

The install.sh script:

  1. Checks for Python 3 and Node.js
  2. Creates ~/.nima/ directories
  3. Installs Python packages via pip
  4. Copies hooks to ~/.openclaw/extensions/

No external downloads. All packages come from PyPI.


Changelog

v2.1.0 — VADER Affect Analyzer (Feb 17, 2026)

  • Added: VADER-based affect analyzer replacing lexicon-based detection
    • Contextual analysis: caps boost (1.5x), punctuation (!!!), negation, degree modifiers
    • 30+ idiom recognition
    • Panksepp 7-affect mapping (SEEKING, RAGE, FEAR, LUST, CARE, PANIC, PLAY)
    • Guardian archetype transformation (user anger → agent concern/care)
  • Added: 4-phase noise remediation (empty validation, heartbeat filters, dedup, metrics)
  • Fixed: LadybugDB circular import issue
  • Changed: Recall token budget increased from 500 to 3000
  • Improved: Connection pooling for LadybugDB backend

v2.0.3 — Security Hardening (Feb 15, 2026)

  • Security: Fixed path traversal vulnerability in affect_history.py (CRITICAL)
  • Security: Fixed temp file resource leaks in 3 files (HIGH)
  • Fixed: Corrected non-existent json.JSONEncodeError → TypeError/ValueError
  • Improved: Exception handling - replaced 5 generic catches with specific types
  • Quality: Better error visibility and debugging throughout

v2.0.1 — Thread Safety + Metadata

  • Fixed: Thread-safe singleton with double-checked locking
  • Security: Clarified metadata requirements (Node.js, env vars)
  • Docs: Added security disclosure for API key usage

v2.0.0 — LadybugDB + Security

  • Added: LadybugDB backend with HNSW vector search
  • Added: Native graph traversal with Cypher
  • Added: nima-query CLI for unified queries
  • Security: SQL/FTS5 injection prevention
  • Security: Path traversal protection
  • Security: Temp file cleanup
  • Fixed: Thread-safe singleton initialization
  • Fixed: API timeouts (Voyage 30s, LadybugDB 10s)
  • Tests: 348 tests passing
  • Performance: 3.4x faster text search, 44% smaller DB

v1.2.1 — Consciousness Architecture

  • Added: 8 consciousness systems (Φ, Global Workspace, self-awareness)
  • Added: Sparse Block VSA memory
  • Added: ConsciousnessCore unified interface

v1.1.9 — Hook Efficiency Fix

  • Fixed: nima-recall hook spawning new Python process every bootstrap
  • Performance: ~50-250x faster hook recall

v1.2.0 — Affective Response Engines

  • Added: 4 Layer-2 composite affect engines
  • Added: Async affective processing
  • Added: Voyage AI embedding support
