Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Duru Memory

v0.1.0

Markdown-based memory continuity system for agents using local Markdown files as the primary memory source. Use when building or operating local Markdown mem...

by Duru (@durugy)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for durugy/duru-memory.

Prompt Preview: Install & Setup
Install the skill "Duru Memory" (durugy/duru-memory) from ClawHub.
Skill page: https://clawhub.ai/durugy/duru-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install duru-memory

ClawHub CLI

Package manager switcher

npx clawhub@latest install duru-memory
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (Markdown-based memory continuity) align with the included scripts and README: file creation, deterministic search, tagging, compaction, archival, and an optional local semantic index. The repo mentions uv/Ollama/sqlite-vec/apsw, which are reasonable for local embedding and indexing.
Instruction Scope
Runtime instructions and scripts operate on a workspace memory/ tree (reading/writing .md files, creating a SQLite index, and updating tag/state files). They also call a configurable Ollama service (default http://127.0.0.1:11434) for tagging/embeddings. Nothing in SKILL.md or scripts reads unrelated system files, but the skill will transmit memory file chunks to the configured Ollama endpoint — ensure that endpoint is local/trusted.
Install Mechanism
No install spec is provided (instruction-only from platform POV) and the repo expects you to copy/clone files into skills/. Dependencies are installed locally via the uv environment (uv sync). No remote arbitrary binary downloads or URLs of unknown provenance are executed by the skill itself.
Credentials
The skill declares no required env vars or credentials. It supports overriding settings via config.yaml or environment (e.g., DURU_MEMORY_OLLAMA_URL). This is reasonable, but because the Ollama URL is configurable, misconfiguration to a remote/hostile endpoint would allow memory contents to be sent off-host.
Persistence & Privilege
The skill does not request always:true and does not modify other skills' configs. It will write files under the workspace memory/ tree (index DB, state files, stamp files) and may perform git commits if run in a git repo (auto-commit script). These are scoped to the workspace and are expected for its purpose.
Assessment
This skill appears to do what it says: manage local Markdown memory plus optional local semantic indexing via Ollama. Before installing: (1) review and run it only in an isolated workspace you control, since it reads/writes files under workspace/memory/ and will create a local SQLite index and state files; (2) ensure ollama.base_url (or DURU_MEMORY_OLLAMA_URL) points to a local, trusted Ollama instance if you want embeddings/tagging — otherwise memory contents could be sent to an external host; (3) run uv sync yourself to install Python deps and inspect what gets installed (apsw, sqlite-vec); (4) inspect and decide whether you want automated git commits (scripts/auto-commit.sh). If you want higher assurance, run the scripts manually first (dry-run modes exist) and validate config.yaml before enabling scheduled/automated runs.
⚠ config.example.yaml:2
Install source points to URL shortener or raw IP.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk974v20wzdwy9yrvvb6237ef1584y8pb
57 downloads · 0 stars · 1 version
Updated 1w ago
v0.1.0
MIT-0

Duru Memory

Use this skill to run a high-traceability memory system based on plain Markdown files.

This skill is complementary to OpenClaw's built-in memory stack:

  • duru-memory manages the Markdown files, conventions, and maintenance workflow
  • built-in memory_search and memory_get are the primary recall/read tools during normal assistant operation
  • memory-core indexes and retrieves from the same Markdown tree
  • active-memory is an optional pre-reply recall layer for eligible interactive sessions, not a guaranteed default for every deployment

Treat the Markdown files as the source of truth, and OpenClaw's built-in memory stack as the main retrieval and recall layer.

Core workflow

  1. Load memory/CORE/hard-rules.md and memory/CORE/current-state.md.
  2. Load recent daily logs from memory/daily/ (default: last 2 days).
  3. Before answering context-dependent questions, prefer built-in memory_search first.
  4. Use scripts/memory-search.sh "<query>" when you want explicit file-level control, debugging, maintenance, or a second opinion against built-in recall.
  5. Update current-state.md when project/task state changes.
  6. Append a state diff entry in state-changelog.md for every meaningful state update.
  7. At session close, run scripts/session-close.sh.
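Steps 1 and 2 of the workflow can be sketched in Python. This is a minimal illustration, not the skill's own code: the daily-log filename convention (`YYYY-MM-DD.md`) is an assumption, and `days` mirrors the 2-day default.

```python
# Hypothetical sketch of workflow steps 1-2: load core files and recent daily logs.
from datetime import date, timedelta
from pathlib import Path

def load_context(root: Path, days: int = 2) -> dict[str, str]:
    """Return startup context: hard rules, current state, and recent dailies."""
    context: dict[str, str] = {}
    for name in ("hard-rules.md", "current-state.md"):
        path = root / "CORE" / name
        if path.exists():
            context[name] = path.read_text(encoding="utf-8")
    for offset in range(days):
        day = date.today() - timedelta(days=offset)
        # Assumed naming convention for daily logs: YYYY-MM-DD.md
        log = root / "daily" / f"{day.isoformat()}.md"
        if log.exists():
            context[log.name] = log.read_text(encoding="utf-8")
    return context
```

Missing files are skipped rather than raised, matching the degraded-mode guidance later in this README.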

Directory contract

memory/
  CORE/
    hard-rules.md
    current-state.md
    state-changelog.md
  daily/
  projects/
  people/
  concepts/
  handoff/
  archive/raw/
  INDEX.md
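A small scaffolding helper can create the tree above. This is an illustrative sketch, not a script shipped with the skill:

```python
# Hypothetical scaffold for the directory contract above.
from pathlib import Path

SUBDIRS = ["CORE", "daily", "projects", "people", "concepts", "handoff", "archive/raw"]
CORE_FILES = ["hard-rules.md", "current-state.md", "state-changelog.md"]

def scaffold(root: Path) -> None:
    """Create the memory/ tree; existing files and directories are left intact."""
    for sub in SUBDIRS:
        (root / sub).mkdir(parents=True, exist_ok=True)
    for name in CORE_FILES:
        (root / "CORE" / name).touch(exist_ok=True)
    (root / "INDEX.md").touch(exist_ok=True)
```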

Memory admission rules

Record only high-value memory:

  • decisions
  • commitments
  • deadlines
  • preferences
  • blockers
  • postmortem conclusions

Add attributes when possible:

  • status: active | superseded | invalid
  • polarity: positive | negative
  • confidence: high | medium | low
  • avoid_reason: ... (required for negative/pitfall entries)

Avoid logging casual chat unless it impacts future execution.
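The attribute rules above are mechanical enough to check in code. A minimal validator sketch (the function name and dict shape are assumptions, not part of the skill):

```python
# Hypothetical validator for the admission attributes listed above.
ALLOWED = {
    "status": {"active", "superseded", "invalid"},
    "polarity": {"positive", "negative"},
    "confidence": {"high", "medium", "low"},
}

def validate_entry(attrs: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the entry is admissible."""
    problems = []
    for key, allowed in ALLOWED.items():
        if key in attrs and attrs[key] not in allowed:
            problems.append(f"{key}: unexpected value {attrs[key]!r}")
    # avoid_reason is required for negative/pitfall entries
    if attrs.get("polarity") == "negative" and not attrs.get("avoid_reason"):
        problems.append("avoid_reason: required for negative entries")
    return problems
```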

Retrieval policy (built-in first, local deterministic second, optional semantic third)

For normal assistant recall, built-in memory_search is the default path. It integrates with memory-core, can include indexed session transcripts, and follows the configured OpenClaw memory backend.

Use this skill's local retrieval scripts when you want explicit file-level control, debugging, maintenance, reproducible inspections, or a second opinion against built-in recall.

Local deterministic retrieval uses weighted matching:

  • exact keyword / phrase in headings
  • tags and fields (decision, todo, blocker, preference)
  • recency boost for recent daily logs
  • path boost for likely directories (projects, people, CORE)

Optional local semantic retrieval is an experimental supplement, not the primary OpenClaw memory path. When local Ollama embedding is available, it can be added as a second pass after deterministic retrieval (default embedding model in config.yaml: qwen3-embedding:0.6b). This local semantic layer can coexist with built-in retrieval, but it does not replace OpenClaw's built-in memory_search contract.

This skill's semantic mode uses a local SQLite + sqlite-vec path with incremental indexing:

  • DB path: memory/.semantic-index.db
  • Vector extension: sqlite-vec (loaded via APSW)
  • Incremental policy: file mtime/size/hash detection + chunk-level embedding cache
  • Consistency keys: fixed model, embedding dimension, and pipeline_version
  • Threshold: SEMANTIC_MIN_SCORE (default 0.48)
  • Fusion rerank mode: FUSION_MODE=rrf|linear (default rrf)
  • RRF parameter: RRF_K (default 60)
  • Keyword boost in RRF mode: KEYWORD_BOOST (default 0.006)
  • Linear fallback weights: FUSION_SEM_WEIGHT + FUSION_KEY_WEIGHT (defaults 0.65/0.35)
  • Daily warmup: session-start.sh runs memory-semantic-search.py --build-only once per day
  • Negative memory handling: entries with polarity=negative or status in {invalid,superseded} are excluded from positive ranking and surfaced in a dedicated ⚠ Avoided Pitfalls warning block
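The rrf fusion mode above can be sketched compactly. This uses the stated defaults (RRF_K=60, KEYWORD_BOOST=0.006) but is an illustration of the technique, not the skill's actual implementation; inputs are ranked document IDs from the semantic and keyword passes:

```python
# Sketch of reciprocal-rank fusion with a small additive keyword boost.
RRF_K = 60            # default RRF_K
KEYWORD_BOOST = 0.006 # default KEYWORD_BOOST in rrf mode

def rrf_fuse(semantic: list[str], keyword: list[str]) -> list[str]:
    """Fuse two ranked lists: score = sum over lists of 1/(RRF_K + rank)."""
    scores: dict[str, float] = {}
    for ranked in (semantic, keyword):
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (RRF_K + rank)
    for doc in keyword:
        scores[doc] += KEYWORD_BOOST  # nudge exact keyword hits upward
    return sorted(scores, key=scores.get, reverse=True)
```

A document appearing in both lists accumulates both reciprocal-rank terms, which is why agreement between the semantic and keyword passes dominates the fused order.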

Failure and degraded mode guidance

Do not assume semantic retrieval is available.

If built-in vector search, sqlite-vec, Ollama embeddings, or the skill's local semantic service is unavailable, fall back to built-in memory_search in its degraded lexical mode or to this skill's deterministic local retrieval. In degraded mode:

  • prefer exact facts from MEMORY.md, memory/CORE/current-state.md, and recent daily logs
  • treat semantic hits as optional enrichment, not a dependency
  • explicitly report uncertainty when no strong hit exists
  • avoid presenting local semantic indexing behavior as part of OpenClaw's guaranteed built-in memory contract
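The fallback order above amounts to trying backends in sequence and flagging when recall came from anything other than the primary path. A minimal sketch, where the backend callables are hypothetical stand-ins for built-in memory_search and the skill's local scripts:

```python
# Hypothetical fallback chain for degraded-mode recall.
from collections.abc import Callable

def recall(query: str,
           backends: list[Callable[[str], list[str]]]) -> tuple[list[str], bool]:
    """Try each backend in order; degraded=True if the first one did not answer."""
    for i, backend in enumerate(backends):
        try:
            hits = backend(query)
        except Exception:
            continue  # backend unavailable: fall through to the next one
        if hits:
            return hits, i > 0
    return [], True  # no strong hit anywhere: report uncertainty explicitly
```

The boolean lets the caller explicitly report uncertainty, per the guidance above, instead of silently presenting degraded results as full recall.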

Coexistence guidance

Recommended division of labor:

  • Write and maintain long-term notes in MEMORY.md and memory/*.md
  • Keep memory/CORE/current-state.md as the execution truth for active work
  • Let OpenClaw memory-core index the same tree
  • Treat built-in memory_search and memory_get as the default runtime recall path
  • Consider active-memory optional and deployment-dependent, especially if pre-reply recall causes latency or timeout issues
  • Use local scripts for maintenance, audits, tagging, and deterministic investigations

Avoid creating separate parallel memory trees for built-in memory and Markdown memory. One shared Markdown tree is the cleanest setup.

Scripts

  • scripts/session-start.sh: startup checklist + quick context load hints
  • scripts/memory-search.sh: hybrid retrieval entry (keyword first, semantic optional)
  • config.yaml: centralized model/runtime tuning (ollama.base_url, models.*, semantic.*, fusion.*)
  • scripts/memory-semantic-search.py: semantic recall via Ollama /api/embeddings
  • scripts/memory-auto-tag.py: local-model auto-tagger (model from config.yaml, default gemma4:e4b) for incremental memory changes (--mode tag|review, --files, --force)
  • scripts/memory-write-tag.sh: write/append helper that immediately tags the target file
  • scripts/memory-compact.py: weekly compaction (daily -> summaries, mark stale, re-sync vectors)
  • scripts/memory-forget.py: monthly forgetting (archive old stale daily logs, keep negative pitfalls)
  • scripts/session-close.sh: runs the auto-tagger in --mode review first, then appends the daily log and checks state freshness
  • scripts/auto-commit.sh: optional git safety-net commit

References

  • references/templates.md: canonical templates for state/daily/project/handoff files
