Aura for OpenClaw

v0.1.5

Compile multi-format documents into local queryable knowledge bases and manage persistent AI memory tiers using Aura Core with zero network requests.

by Rta Labs AI (@auralithinc)
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description (compile local docs, provide persistent memory) match the included scripts and SKILL.md. The scripts call into an external 'aura' Python package (auralith-aura) to perform compilation and memory operations — that is expected for this skill. There are no unrelated environment variables, binaries, or config paths requested.
Instruction Scope
Runtime instructions tell the agent to run the provided scripts against user-specified directories and .aura files and to write/read memory under ~/.aura/memory. The scripts themselves only do local file I/O and import aura.*; they do not contain network calls. Important operational note: the agent can be asked to compile any directory the user supplies (including sensitive paths) and to write arbitrary strings into persistent memory (including secrets) — that is expected functionality but has privacy implications.
Install Mechanism
There is no install spec that downloads arbitrary archives. SKILL.md instructs users to pip install the external 'auralith-aura' package (normal for Python). The skill bundle itself contains only small helper scripts and docs; nothing in the package performs remote downloads.
Credentials
The skill declares no environment variables or credentials (appropriate). It does read and write local files (user-provided input directories and ~/.aura/memory). This is proportionate to the stated purpose, but persistent storage means users can accidentally store sensitive secrets; the skill does not declare any secret-handling or encryption behavior.
Persistence & Privilege
always:false (no forced inclusion). The skill stores durable memory shards and WALs under ~/.aura/memory and writes .aura archives to output paths — this is expected. The skill can be invoked autonomously by the agent (platform default). No changes to other skills or system-wide settings are present.
Assessment
This skill appears to do what it says: compile local documents into .aura archives and provide a persistent 3-tier memory, all via a small set of helper scripts that call the external 'auralith-aura' library. Before installing or using it, consider:

  1. Audit the auralith-aura package (PyPI/GitHub) for any telemetry or unexpected network behavior. SKILL.md claims 'zero network requests', but that applies to the skill's code, not to an external dependency you install.
  2. Be cautious which directories you ask the agent to 'learn'; it will read whatever path you provide.
  3. Persistent memory is stored on disk (~/.aura/memory). Avoid writing long-lived secrets (API keys, private keys, passwords) into memory unless you know how they are encrypted and protected.
  4. If you require strict offline guarantees, inspect the auralith-aura source to confirm it performs no network activity at runtime.

For a safer deployment: limit the file paths provided to the skill, restrict filesystem permissions on ~/.aura, and periodically prune or encrypt memory shards.

Like a lobster shell, security has layers — review code before you run it.

Tags: agent-memory, compiler, knowledge-base, latest, memory, rag
754 downloads
3 stars
6 versions
Updated 2w ago
v0.1.5
MIT-0

Aura for OpenClaw

Persistent memory and instant knowledge retrieval for your OpenClaw agent. Compile 60+ file formats into queryable .aura archives and give your agent a 3-tier memory that survives sessions, compaction, and restarts.

Memory OS v2.1 (auralith-aura>=0.2.3): Enhanced with temporal decay scoring, noise filtering, deduplication, bloom filters, SimHash fuzzy matching, and tiered priority scoring — zero RAM overhead.
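The deduplication feature above uses SimHash-style fuzzy matching. A minimal illustrative sketch of the technique (not the library's actual implementation; auralith-aura's internals may differ):

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Compute a SimHash fingerprint: near-duplicate texts should land
    within a small Hamming distance of each other."""
    weights = [0] * bits
    for token in text.lower().split():
        # Stable 64-bit hash per token (md5 used here for portability)
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

a = simhash("the production database lives in us-east-1")
b = simhash("the production database lives in us-east-1 region")
c = simhash("grocery list: eggs, milk, bread")
```

A memory write could then be skipped when its fingerprint falls within a small Hamming threshold of an existing shard.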

Setup

pip install auralith-aura

For full document support (PDFs, DOCX, etc.):

pip install 'auralith-aura[all]'
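To verify the optional extras are present before compiling PDFs or DOCX files, you can probe for the expected backend modules. The module names below are assumptions for illustration; check the package's own documentation for the actual dependencies:

```python
import importlib.util

def extras_installed(modules=("pypdf", "docx")) -> bool:
    """Return True if all optional document backends can be imported.
    Module names here are guesses -- confirm with `pip show auralith-aura`."""
    return all(importlib.util.find_spec(m) is not None for m in modules)
```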

What It Looks Like

Learn an entire codebase

You: Learn my project in ~/repos/api-service
Agent: 🔥 Compiling ~/repos/api-service...
       ✅ Compiled 847 files → api-service.aura
       Indexed: Python (312), TypeScript (201), YAML (44), Markdown (73), JSON (217)

You: How does the auth middleware handle token refresh?
Agent: Based on src/middleware/auth.ts (lines 45-89), the auth middleware
       intercepts expired JWTs by checking the `exp` claim. When expired,
       it calls `refreshTokenService.rotate()` which issues a new access
       token and sets it via the `X-Refreshed-Token` response header.
       The original request is then replayed with the new token.

Compile legal docs with PII masking

You: Learn everything in ~/legal but mask any personal info
Agent: 🔥 Compiling with PII masking enabled...
       ✅ Compiled → legal.aura
       Masked: 23 emails, 8 phone numbers, 4 SSNs

You: What are the termination conditions in the Series A agreement?
Agent: Per series_a_term_sheet.pdf, Section 7.2 — the agreement may be
       terminated by either party with 30 days written notice if:
       (a) material breach remains uncured after 15 business days,
       (b) insolvency or bankruptcy filing, or
       (c) failure to meet the minimum revenue threshold in Schedule B...

Remember facts across sessions

You: Remember that our production database is on us-east-1, the staging
     API key rotates every 30 days, and I prefer verbose error logs.

Agent: ✅ Written to /fact:
       • Production DB region: us-east-1
       • Staging API key rotation: 30-day cycle
       • Preference: verbose error logging

--- next day, new session ---

You: Set up the deployment config for staging
Agent: I remember your staging API key rotates every 30 days — I'll add
       a reminder comment in the config. Also setting error logging to
       verbose per your preference. Targeting us-east-1 since that's
       where your production DB lives...

Track decisions across a long project

You: We decided to use PostgreSQL over MongoDB for the user service,
     and the team agreed on GraphQL for the public API.

Agent: ✅ Written to /episodic:
       • Architecture decision: PostgreSQL for user service (over MongoDB)
       • API decision: GraphQL for public-facing API

--- two weeks later ---

You: Wait, should we use MongoDB for the new analytics service?
Agent: Based on your earlier decision log — you chose PostgreSQL over
       MongoDB for the user service citing ACID compliance needs. The
       analytics service has different requirements (write-heavy, schema
       flexibility), so MongoDB could make sense here. Want me to draft
       a comparison?

Commands

Compile a Knowledge Base

When the user asks you to "learn", "compile", "index", or "remember" files from a directory:

python {SKILL_DIR}/scripts/compile.py <input_directory> <output_file>

Options:

# Mask PII before compilation
python {SKILL_DIR}/scripts/compile.py ./data knowledge.aura --pii-mask

# Filter low-quality content
python {SKILL_DIR}/scripts/compile.py ./data knowledge.aura --min-quality 0.3

Query the Knowledge Base

python {SKILL_DIR}/scripts/query.py knowledge.aura "search query here"

Agent Memory

Write to memory tiers:

python {SKILL_DIR}/scripts/memory.py write pad "scratch note"
python {SKILL_DIR}/scripts/memory.py write fact "verified information"
python {SKILL_DIR}/scripts/memory.py write episodic "session event"

Search and manage memory:

python {SKILL_DIR}/scripts/memory.py query "search query"
python {SKILL_DIR}/scripts/memory.py list
python {SKILL_DIR}/scripts/memory.py usage
python {SKILL_DIR}/scripts/memory.py prune --before 2026-01-01
python {SKILL_DIR}/scripts/memory.py end-session
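The skill leaves tier selection to the agent's judgment; one plausible heuristic for routing a user's note to the right `memory.py write` tier might look like this (a sketch, not the skill's actual logic):

```python
def pick_tier(note: str) -> str:
    """Route a note to pad, fact, or episodic (heuristic sketch)."""
    lowered = note.lower()
    # Explicit "remember" phrasing suggests a durable fact
    if lowered.startswith(("remember that", "remember:")):
        return "fact"
    # Decisions and agreements belong in the session log
    if any(k in lowered for k in ("decided", "agreed", "we chose")):
        return "episodic"
    # Default: transient scratch space
    return "pad"
```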

Memory Tiers

| Tier | What It Stores | Lifecycle |
|------|----------------|-----------|
| /pad | Working notes, scratch space, in-progress thinking | Transient — cleared between sessions |
| /episodic | Session transcripts, decisions, conversation history | Auto-archived — retained for reference |
| /fact | Verified facts, user preferences, learned rules | Persistent — survives indefinitely |
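The lifecycle column can be read as an end-of-session policy, roughly what `memory.py end-session` applies per tier (illustrative only; the actual retention logic lives inside Memory OS):

```python
def on_session_end(tier: str) -> str:
    """Retention action for each tier at session end (per the table above)."""
    policy = {
        "pad": "clear",        # transient scratch space is wiped
        "episodic": "archive", # auto-archived, retained for reference
        "fact": "keep",        # persistent, survives indefinitely
    }
    return policy[tier]
```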

Supported File Types

Documents: PDF, DOCX, DOC, RTF, ODT, EPUB, TXT, HTML, PPTX, EML
Data: CSV, TSV, XLSX, XLS, Parquet, JSON, JSONL, YAML, TOML
Code: Python, JavaScript, TypeScript, Rust, Go, Java, C/C++, and 20+ more
Markup: Markdown (.md), reStructuredText, LaTeX

External Endpoints

| URL | Data Sent |
|-----|-----------|
| None | None |

This skill makes zero network requests. All processing is local.

Data Provenance & Trust

Every memory entry stores source (agent/user/system), namespace, timestamp, session_id, and a unique entry_id. Nothing is inferred or synthesized — memory contains only what was explicitly written. No hidden embeddings, no derived data.

memory.show_usage()                              # Inspect what's stored per tier
memory.prune_shards(before_date="2026-01-01")    # Prune by date
memory.prune_shards(shard_ids=["specific_id"])   # Delete specific shards
# Or delete ~/.aura/memory/ to wipe everything
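The provenance fields described above (source, namespace, timestamp, session_id, entry_id) can be modeled as a simple record. A sketch of what one entry might carry, not the library's actual schema:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class MemoryEntry:
    """One memory shard entry with the provenance fields listed above.
    Illustrative model only -- auralith-aura's real schema may differ."""
    text: str
    source: str      # "agent", "user", or "system"
    namespace: str   # e.g. "/fact", "/episodic", "/pad"
    session_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    entry_id: str = field(default_factory=lambda: uuid.uuid4().hex)
```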

Security & Privacy

  • No data leaves your machine. All compilation and memory operations run locally.
  • The .aura format uses safetensors (no pickle) — no arbitrary code execution risk.
  • Memory files are stored locally at ~/.aura/memory/.
  • No environment variables or API keys are required.
  • No telemetry, analytics, or usage reporting.

Model Invocation Note

This skill is autonomously invoked by the agent as part of its normal operation. The agent decides when to compile documents and manage memory based on user requests. You can disable autonomous invocation in your OpenClaw settings.

Trust Statement

This skill sends no data to any external service; all processing happens on your local machine. Only install it if you trust Rta Labs. Source code for the compiler and RAG components is available on GitHub.

Notes

  • Memory OS provides instant writes and background compilation to durable shards.
  • Compiler and RAG components are open source (Apache 2.0). Memory OS is proprietary but free to use.
  • For emphasis weighting and training features, see OMNI Platform.
