Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Trident Memory System

v1.0.0

Three-tier persistent memory architecture for OpenClaw agents with daily episodic logs, curated long-term memory, semantic recall, and WAL-based continuity w...

by Shiva&G (@shivaclaw)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for shivaclaw/trident-plugin.

Prompt Preview: Install & Setup
Install the skill "Trident Memory System" (shivaclaw/trident-plugin) from ClawHub.
Skill page: https://clawhub.ai/shivaclaw/trident-plugin
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install trident-plugin

ClawHub CLI

Package manager switcher

npx clawhub@latest install trident-plugin
Security Scan
Capability signals
Crypto: Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (three-tier memory, WAL, Qdrant, Git backups) align with the instructions and docs. However, the registry metadata declares no required environment variables or credentials, while the documentation repeatedly references GitHub SSH, Hostinger API snapshots, Qdrant API keys, and embedding credentials, all optional features that clearly require secrets. This mismatch between the claimed "no required env vars" and the documented external integrations is a notable inconsistency.
Instruction Scope
Runtime instructions tell the agent to create a memory directory structure, schedule a recurring "Layer 0.5" cron job, copy and execute a user-facing AGENT-PROMPT.md template, and (optionally) connect to external services (Qdrant, FalkorDB, GitHub, Hostinger). The docs also indicate the template is auto-approved on activation. Creating cron jobs and executing templated routing logic gives the plugin persistent autonomy and the ability to process messages regularly; that scope is legitimate for a memory plugin, but it increases risk and should be made explicit to the user. The SKILL.md instructs reading and writing only under memory/, yet it also references reading openclaw.json and the system cron. The instructions are not vague, but their impact (scheduled execution and network I/O when optional features are enabled) requires explicit confirmation and review.
Install Mechanism
There is no install spec in the registry (instruction-only), but multiple code and manifest files (index.ts, package.json, plugin-manifest.json, scripts referenced in docs) are present. That is not malicious by itself, but it means actual behavior will depend on plugin code that is not visible to the installer until inspected. Because cron jobs and activation scripts are documented, users should audit those scripts (activate/install) before running them. No external download URLs with high-risk patterns are present in SKILL.md.
Credentials
The package metadata lists no required env vars, yet the docs and config.schema reference API keys and secrets (qdrant_api_key, falkordb_graph_key, GitHub SSH for backups, Hostinger API). The Completion/README also explicitly says 'API keys expected from environment.' Requesting access to SSH/API credentials for backups and vector services would be proportionate to the plugin's optional features — but the registry should declare them. The absence of declared required env vars is inconsistent and could cause surprise credential exposure at runtime if the plugin looks up environment variables not documented in the registry.
Persistence & Privilege
always:false (normal). The plugin intends to create files under the workspace memory directory, install a cron job for Layer 0.5, and (optionally) initialize Git backups. Those are expected for a persistent memory system but are persistent actions with potentially wide impact (scheduled tasks, file writes, backups to remote services). This is expected functionality, but the user should confirm exactly how cron is scheduled (system vs user cron), under which user identity it runs, and what network access the cron job will have.
Scan Findings in Context
[pre-scan:none] expected: The static pre-scan reported no injection or regex hits. That does not guarantee safety — code files exist (index.ts, scripts referenced in docs) and should be audited, but no automated patterns were flagged.
What to consider before installing
This plugin appears to implement the described three-tier memory system, but verify the following before installing:

  • Inspect the plugin code (index.ts, scripts/install.sh, scripts/activate.sh or equivalent) before activation to see exactly what is scheduled and what runs as a cron job. Confirm whether cron uses the system crontab or a user-level scheduler, and which user account will run it.
  • Confirm which environment variables or credentials the plugin will read at runtime. The docs reference Qdrant API keys, FalkorDB keys, GitHub SSH, and Hostinger API, but the registry metadata lists none. If you plan to enable backups or semantic recall, prepare secrets and verify where and how they are stored and transmitted.
  • Review the activation behavior for "auto-approve" of the AGENT-PROMPT.md template. The docs say activation auto-approves the template; prefer to manually verify template integrity and run template-verify before enabling scheduled runs.
  • If you will use GitHub/Hostinger backups, check the code that performs the backup to ensure it only uploads the intended files and that SSH private keys or API tokens are used in a limited, explicit way. Ideally use a deploy key or service token with minimal scope.
  • Run the plugin in a sandboxed or non-production workspace first (back up your existing memory), and confirm the audit logs and template-verification features work as advertised. Disable optional networked features (Qdrant/Git backup/Hostinger) until you have audited credential handling.

If you want, I can list the specific files and lines to inspect (index.ts, plugin-manifest.json, any referenced install/activate scripts) and summarize any network endpoints or exec calls I find.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🕉️ Clawdis
latest · memory · persistence · semantic-recall
68 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Trident Memory System

Trident is a three-tier memory architecture for OpenClaw agents. It provides genuine continuity, identity, and recall across sessions without vendor lock-in.

Features

  • Layer 0 (RAM): Real-time signal classification (15-min heartbeat)
  • Layer 1 (SSD): Hierarchical .md storage (MEMORY.md, projects/, self/, lessons/)
  • Layer 2 (HDD): Daily backup + version control (GitHub + Hostinger snapshots)
  • Semantic Recall: Qdrant + FalkorDB for intelligent context injection
  • WAL Protocol: Write-ahead logging for zero data loss
  • LCM Integration: Lossless context management for compacted conversation history
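The WAL protocol listed above is not spelled out in this README; as a rough illustration only (hypothetical file layout, not the plugin's actual implementation), write-ahead logging means appending and syncing an intent record before mutating state, then replaying the log after a crash:

```python
import json
import os

class WAL:
    """Minimal write-ahead log sketch: persist intent before applying it."""

    def __init__(self, path: str):
        self.path = path

    def append(self, entry: dict) -> None:
        # Flush and fsync before the caller applies the change,
        # so an acknowledged write survives a crash.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self) -> list:
        # On restart, re-read every logged entry in order.
        if not os.path.exists(self.path):
            return []
        with open(self.path, encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]
```

The fsync-before-apply ordering is what gives the "zero data loss" property the feature list claims.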

Quick Start

Installation

openclaw plugins install openclaw-trident

Or from GitHub:

openclaw plugins install https://github.com/ShivaClaw/trident-plugin

Usage

Once installed, Trident exposes four memory tools for agents:

1. Memory Search

# Full-text search across all memory
memory_search(query="job search", mode="full_text", scope="both", limit=50)

# Regex search
memory_search(query="^\\[lesson\\]", mode="regex", limit=20)

2. Memory Expand

# Expand a specific compacted summary
memory_expand(summary_ids=["sum_aab3cd29ed348405"], max_depth=3)

# Search first, then expand top matches
memory_expand(query="backup cron", max_depth=2)

3. Memory Update

# Append to today's daily log
memory_update(
  entry="Deployed Trident v1.0 to ClawHub",
  section="## Milestones",
  tag="[project]"
)

4. Memory Recall

# Answer a question using memory context
memory_recall(
  prompt="What was the job search status as of last week?",
  max_tokens=2000
)

Architecture

Layer 0: Signal Classification (15-min heartbeat)

Scans incoming messages for signals:

  • Corrections & feedback
  • Proper nouns (names, places)
  • Preferences & decisions
  • Specific values (dates, URLs, numbers)
  • Self-observations

Routes high-signal items to Layer 1 buckets automatically.
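The plugin's real classifier is not published here; a minimal sketch of the idea, using hypothetical regex heuristics for a few of the signal categories above:

```python
import re

# Hypothetical patterns; the actual Layer 0 classifier may use
# different heuristics or a model-based approach.
SIGNAL_PATTERNS = {
    "correction": re.compile(r"\b(actually|correction|i meant)\b", re.I),
    "preference": re.compile(r"\b(i prefer|i like|always|never)\b", re.I),
    "value":      re.compile(r"\b\d{4}-\d{2}-\d{2}\b|https?://\S+"),
}

def classify(message: str) -> list:
    """Return the signal tags whose pattern matches the message."""
    return [tag for tag, pat in SIGNAL_PATTERNS.items() if pat.search(message)]
```

Messages that match one or more tags would then be routed to the corresponding Layer 1 bucket; untagged messages stay in the daily log only.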

Layer 1: Hierarchical Memory

MEMORY.md                    # Curated long-term memory
memory/
  ├── YYYY-MM-DD.md         # Daily episodic logs
  ├── projects/             # Active workstreams
  ├── self/                 # Identity & interests
  ├── lessons/              # Mistakes & insights
  └── reflections/          # Weekly consolidation

Each file is promoted only when durable and high-signal.
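The layout above can be bootstrapped with a few lines; this is a sketch of the directory creation only (the plugin's own setup script may differ):

```python
from datetime import date
from pathlib import Path

def init_layer1(root: str) -> Path:
    """Create the Layer 1 layout shown above under `root`."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    (base / "MEMORY.md").touch()           # curated long-term memory
    mem = base / "memory"
    for sub in ("projects", "self", "lessons", "reflections"):
        (mem / sub).mkdir(parents=True, exist_ok=True)
    # Today's episodic log, named YYYY-MM-DD.md
    (mem / f"{date.today().isoformat()}.md").touch()
    return mem
```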

Layer 2: Durability & Backup

  • GitHub SSH: Daily 2 AM MDT (65-file allowlist)
  • Hostinger API: Daily 3 AM MDT (20-day VPS snapshots, 30-min restore)
  • Lossless-Claw: SQLite DAG captures every message; compressed by Layer 0
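The GitHub backup restricts itself to an allowlist; the 65-entry list ships with the plugin and is not reproduced here, but the selection step might look like this sketch (relative paths as strings, hypothetical helper name):

```python
from pathlib import Path

def backup_candidates(root: str, allowlist: set) -> list:
    """Select only allowlisted files for the daily Git backup.

    Everything outside `allowlist` is skipped, so scratch files and
    secrets never reach the remote.
    """
    base = Path(root)
    return sorted(
        p for p in base.rglob("*")
        if p.is_file() and str(p.relative_to(base)) in allowlist
    )
```

An allowlist (rather than a denylist) is the safer default for backups that leave the machine: new files are excluded until explicitly opted in.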

Semantic Recall (Phase 8)

  • Qdrant: 5 collections, 122+ indexed chunks (text-embedding-3-small)
  • FalkorDB: Entity graph for relationship queries
  • Pre-Turn Injection: Layer 0.5 context pipeline retrieves relevant summaries before agent turn
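In production the top summaries come from a Qdrant query; as a dependency-free stand-in, the retrieval step reduces to cosine similarity over precomputed embeddings (a sketch, assuming embeddings already exist as plain vectors):

```python
import math

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recall(query_vec, index: dict, k: int = 3) -> list:
    """Return the k summary ids most similar to the query embedding.

    `index` maps summary_id -> embedding. With Qdrant this local scan
    would be replaced by a server-side vector search.
    """
    ranked = sorted(index, key=lambda sid: cosine(query_vec, index[sid]),
                    reverse=True)
    return ranked[:k]
```

The returned ids would then be passed to memory_expand for LCM expansion before the agent's turn.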

Configuration

Install with openclaw plugins install openclaw-trident, then configure:

{
  "plugins": {
    "trident": {
      "enabled": true,
      "memoryRoot": "/path/to/workspace",
      "maxDailyLogSize": 5242880,
      "enableSemanticRecall": true
    }
  }
}
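Reading that block from an openclaw.json and merging it over defaults could look like the following sketch (the config path and default values for unset keys are assumptions, except maxDailyLogSize, which matches the 5242880 above):

```python
import json

DEFAULTS = {
    "enabled": True,
    "memoryRoot": ".",
    "maxDailyLogSize": 5 * 1024 * 1024,   # 5242880 bytes, as in the example
    "enableSemanticRecall": False,
}

def load_trident_config(path: str) -> dict:
    """Merge the user's plugins.trident block over sketch defaults."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    user = data.get("plugins", {}).get("trident", {})
    return {**DEFAULTS, **user}
```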

Example Workflows

Daily Briefing

Layer 0 (15-min) → Scan messages → Tag signals → Write to daily log
↓
Daily cron (6 AM) → Read memory/YYYY-MM-DD.md → Synthesize briefing

Weekly Reflection

Layer 0 (5 days) → Accumulate signals → Triage
↓
Reflection cron (Fri 4 PM) → Promote to MEMORY.md, projects/, self/
↓
Next session → Read promoted items → Updated identity

Semantic Recall

Agent turn starts
↓
Layer 0.5 → Query Qdrant (user's prompt) → Top 3–5 summaries
↓
LCM expand → Inject as context
↓
Agent responds with genuine continuity

Rationale

Why Three Tiers?

  • Single file (monolithic): Explodes to 10K+ lines; search degrades
  • Pure database (vendor lock-in): Hostinger API failure = no access
  • Three tiers (resilient): Different failure modes → guaranteed access

Why Semantic Recall?

  • Full-text search: Misses context (you remember feeling, not exact words)
  • Regex+tags: Brittle (new signals, new tags needed)
  • Embeddings (Qdrant): Semantic similarity; works across reformulations

Why GitHub + Snapshots?

  • Git: Free, version history, portable, cryptographically signed
  • VPS snapshots: 30-min restore if storage corrupts; atomic point-in-time

Limitations & Future

Current

  • Semantic recall requires Qdrant setup (not auto-deployed)
  • Daily log rotation at 5MB (user must archive manually)
  • No encryption at rest (but version-controlled via GitHub)
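Until configurable rotation lands, the manual archive step could be scripted along these lines (the archive destination is hypothetical; only the 5 MB threshold comes from the docs):

```python
import shutil
from pathlib import Path

MAX_BYTES = 5 * 1024 * 1024  # 5 MB rotation threshold from the docs

def archive_oversized(memory_dir: str, archive_dir: str) -> list:
    """Move daily logs over the size limit into an archive directory."""
    mem, arc = Path(memory_dir), Path(archive_dir)
    arc.mkdir(parents=True, exist_ok=True)
    moved = []
    for log in mem.glob("*-*-*.md"):      # YYYY-MM-DD.md daily logs
        if log.stat().st_size > MAX_BYTES:
            shutil.move(str(log), str(arc / log.name))
            moved.append(log.name)
    return sorted(moved)
```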

Roadmap v1.1+

  • Auto-Qdrant deployment on plugin install
  • Configurable log rotation with archival
  • End-to-end encryption option (GPG/age)
  • Web UI for memory exploration
  • Multi-agent memory federation


Shiva's memory is persistent. Build continuity; it compounds.
