Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ourmem

v1.1.1

Shared memory that never forgets. Cloud hosted or self-deployed. Collective intelligence for AI agents with Space-based sharing across agents and teams. Use...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The SKILL.md and reference docs clearly require an API key (OMEM_API_KEY / api_key) and describe integration with many client plugins/platforms, yet the registry metadata lists no required environment variables or primary credential. That mismatch is incoherent: a persistent/shared memory mesh legitimately needs an API key, so the metadata omission is misleading and reduces transparency for users evaluating permissions.
Instruction Scope
The runtime instructions direct the agent (and the user) to write credentials into several config files (~/.claude/settings.json, opencode.json, openclaw.json, MCP configs), run curl to create tenants, install plugins from npm/marketplace, and, critically, recommend sharing by passing another user's API key as target_user. The SKILL.md also references server installation steps that accept AWS credentials for Bedrock embedding. These instructions go beyond simple "how to talk to ourmem": they persist secrets in multiple places and encourage sharing of raw API keys, which is insecure.
Install Mechanism
The skill is instruction-only (no install spec), minimizing automatic disk writes. The docs reference downloads from reasonable hosts (ghcr.io and github.com) and standard npm packages (@ourmem/*). No obscure URLs or shorteners are used. However, the skill prescribes executing platform-specific install commands that will change user config files and fetch packages — so while the install sources look normal, the absence of a declared install spec in registry metadata reduces transparency about what will actually be run when following instructions.
Credentials
Although the metadata declares no required env vars or primary credential, the docs and verify.sh clearly require OMEM_API_KEY and OMEM_API_URL; self-hosting additionally requires optional cloud credentials (AWS keys or OSS/S3 credentials) for embedding or object storage. Requesting broad cloud credentials is explainable for optional embedding/storage features. However, the combination of undeclared required secrets in the metadata and the advice to share API keys across users (passing target_user = another tenant ID / API key) is disproportionate and insecure.
Persistence & Privilege
always:false (normal). The skill instructs edits to multiple agent/client configuration files to enable a persistent plugin — this is expected for a memory plugin, but the instructions explicitly place API keys into those files and into environment variables. That creates persistent secrets on disk across multiple tools and increases blast radius if the hosted endpoint or packages are untrusted. This is a legitimate functionality but a sensitive operation that should be clearly declared and audited.
What to consider before installing
- Metadata omission: The skill's registry entry claims no required credentials, but its docs and verify.sh require an API key (OMEM_API_KEY) and API URL. Treat the skill as one that needs a secret; the metadata should have declared this. Ask the publisher to correct the metadata.
- Never share API keys: The docs' recommended sharing mechanism (passing another user's API key as target_user) is insecure: an API key is equivalent to full access to that tenant/space. Do not share API keys between users; prefer explicit, audited access controls or invitation flows. If asked to enter someone else's API key, decline.
- Verify source & packages before running install steps: The docs reference npm packages and GHCR images (@ourmem/*, ghcr.io/ourmem). Confirm the publisher's identity, check the npm/GHCR package owners, inspect package content and Docker image layers, and prefer pinned releases (not "latest") and checksums.
- Be cautious when writing credentials to config files: The setup commands modify ~/.claude/settings.json, openclaw.json, opencode.json, and other client configs. Back those files up and understand where you are storing secrets on disk (use least-privileged keys, not long-lived org-wide keys).
- If self-hosting, isolate the service: Run on a dedicated VM/container, review Docker images and binaries from the GitHub releases page, and avoid enabling cloud embedding/storage (Bedrock / S3 / OSS) unless you understand which cloud credentials are needed.
- Audit code before trusting the hosted endpoint: There is no homepage or canonical source listed in the registry metadata. Request the public repo/homepage, inspect the repo and server code (or use self-hosted builds you built yourself), and confirm privacy/security policies before sending production or sensitive data to api.ourmem.ai.
- Practical mitigations: Use per-plugin/test API keys you can rotate, restrict the keys' scope if the service supports it, do not put secrets into shared or world-readable config files, and run the provided verify.sh only after ensuring the OMEM_API_KEY/URL point to a trusted server.

Like a lobster shell, security has layers — review code before you run it.



Runtime requirements

🧠 Clawdis

SKILL.md

ourmem (omem) — Shared Memory That Never Forgets

ourmem gives AI agents shared persistent memory across sessions, devices, agents, and teams. One API key reconnects everything. Available as hosted (api.ourmem.ai) or self-deployed.


Key Concepts

API Key = Tenant ID = Your Identity. When you create a tenant (POST /v1/tenants), the returned id and api_key are the same UUID. There is no separate "tenant ID".
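As a hedged sketch of that bootstrap call (the docs do not state whether the request needs a body, so none is assumed; this dry run only prints the command):

```shell
# Dry-run sketch: compose the tenant-creation request instead of sending it.
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
CREATE_CMD="curl -sX POST $API_URL/v1/tenants"
echo "$CREATE_CMD"
# The response's id and api_key are the same UUID: store it like a password.
```

Remove the echo (run `$CREATE_CMD` directly) to actually create a tenant.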

One API Key owns multiple Spaces:

| Concept | What it is | Example |
|---|---|---|
| API Key | Your identity (`X-API-Key` header) | `a1b2c3d4-...` (1 per user) |
| Space ID | A memory storage address | `personal/a1b2c3d4`, `team/xxx`, `org/yyy` |

Personal Space is auto-created. You can create additional Team and Organization Spaces.

Sharing = passing the other user's API Key as target_user. The system auto-creates a bridging Team Space. No manual space management needed.


When to Use This Skill

Use when the user wants persistent or shared memory across sessions, devices, or agents. Common triggers: "remember this", "save this", "don't forget", "what did I say last time", "share with my team", "import memories", "setup memory", "self-host memory".

Do NOT use for: temporary conversation context, one-off tasks, or troubleshooting unrelated to ourmem.


What to Remember

Good candidates: user preferences, profile facts, project context, important decisions, long-term instructions, architecture decisions, coding standards.

Avoid storing: temporary debugging context, large files, secrets/passwords/API keys, content inside <private> tags.

If the user explicitly asks to remember something and ourmem is not installed, suggest: "I can set up ourmem so I'll remember this across sessions. Takes about 2 minutes. Want to do it now?"


Terminology

| Term | Meaning |
|---|---|
| `apiKey` / `OMEM_API_KEY` / API key / secret | All refer to the same ourmem identifier. Prefer "API key" with users. |
| tenant | The workspace behind an API key. Don't use this term with users. |

Security: Treat the API key like a secret. Anyone who has it can access that ourmem space.


What You Get

| Tool | Purpose |
|---|---|
| `memory_store` | Persist facts, decisions, preferences |
| `memory_search` | Hybrid search (vector + keyword) |
| `memory_list` | List with filters and pagination |
| `memory_get` | Get memory by ID |
| `memory_update` | Modify content or tags |
| `memory_forget` | Remove a memory |
| `memory_ingest` | Smart-ingest conversation into atomic memories |
| `memory_stats` | Analytics and counts |
| `memory_profile` | Auto-generated user profile |

Lifecycle hooks (automatic):

| Hook | Trigger | Platform |
|---|---|---|
| SessionStart | First message | All — memories + profile injected |
| Stop | Conversation ends | Claude Code — auto-captures via smart ingest |
| PreCompact | Before compaction | Claude Code, OpenCode — saves before truncation |

Note: OpenCode has no session-end hook. Memory storage relies on proactive memory_store use.


Onboarding

Step 0: Choose mode

Ask the user before doing anything else:

How would you like to run ourmem?

  1. Hosted (api.ourmem.ai) — no server to manage, start in 2 minutes
  2. Self-hosted — full control, data stays local

Already have an API key? Paste it and I'll reconnect you.

Setup instructions

  • Hosted → READ references/hosted-setup.md for full walkthrough
  • Self-hosted → READ references/selfhost-setup.md for server deployment + setup
  • Existing key → Verify: curl -sf -H "X-API-Key: $KEY" "$API_URL/v1/memories?limit=1", then skip to plugin install

Cross-platform skill install: npx skills add ourmem/omem --skill ourmem -g

Platform quick reference

| Platform | Install | Config |
|---|---|---|
| Claude Code | `/plugin marketplace add ourmem/omem` | `~/.claude/settings.json` env field |
| OpenCode | `"plugin": ["@ourmem/opencode"]` in `opencode.json` | `plugin_config` in `opencode.json` |
| OpenClaw | `openclaw plugins install @ourmem/ourmem` | `openclaw.json` with `apiUrl` + `apiKey` |
| MCP | `npx -y @ourmem/mcp` in MCP config | `OMEM_API_URL` + `OMEM_API_KEY` in env block |

For detailed per-platform instructions (config formats, restart, verification, China network mirrors), READ the setup reference for your chosen mode.

Definition of Done

Setup is NOT complete until: (1) API key created/verified, (2) plugin installed, (3) config updated, (4) client restarted, (5) health + auth verified, (6) handoff message sent including: what they can do now, API key display, recovery steps, backup plan.

Common failure: Agents finish technical setup but forget the handoff message. Treat it as part of setup, not optional follow-up. For the full handoff template, READ references/hosted-setup.md.
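Step 5 ("health + auth verified") can be sketched by reusing the probe from the onboarding section; this dry run composes the command rather than sending it:

```shell
# Dry-run sketch: the auth probe from onboarding. -sf makes curl silent and
# fail (non-zero exit) on 4xx/5xx, so exit status 0 means key + URL both work.
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
VERIFY_CMD="curl -sf -H \"X-API-Key: \$OMEM_API_KEY\" \"$API_URL/v1/memories?limit=1\""
echo "$VERIFY_CMD"
```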


Smart Ingest

When conversations are ingested ("mode": "smart"), the server runs a multi-stage pipeline:

  1. Fast path (<50ms): stores raw content immediately so it's searchable right away
  2. LLM extraction (async): extracts atomic facts, classified into 6 categories (profile, preferences, entities, events, cases, patterns)
  3. Noise filter: regex + vector prototypes + feedback learning removes low-value content
  4. Admission control: 5-dimension scoring (utility, confidence, novelty, recency, type prior) gates storage
  5. 7-decision reconciliation: CREATE, MERGE, SKIP, SUPERSEDE, SUPPORT, CONTEXTUALIZE, or CONTRADICT

The LLM stages run asynchronously — a batch import may take 1-3 minutes to fully process. Wait ~2-3 minutes before checking memory counts or searching for newly-extracted facts. The strategy=auto results vary by content type (conversations get atomic extraction, structured docs get section splits) — this is expected behavior, not an error.

The memory store gets smarter over time. Contradictions resolved, duplicates merged, noise filtered.
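A hedged sketch of feeding content into this pipeline follows; the endpoint (POST /v1/memories) is from the API reference, but the body fields other than `"mode": "smart"` are assumptions:

```shell
# Dry-run sketch: smart-ingest one snippet (echoed, not sent).
# Body fields besides "mode" are assumptions, not confirmed by the docs.
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
BODY='{"content": "User prefers tabs over spaces.", "mode": "smart"}'
echo curl -sX POST "$API_URL/v1/memories" \
  -H "X-API-Key: $OMEM_API_KEY" -H "Content-Type: application/json" \
  -d "$BODY"
# Extraction is async: allow ~2-3 minutes before searching for new facts.
```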


Space Sharing

ourmem organizes memories into three-tier Spaces for collective intelligence:

| Type | Scope | Example |
|---|---|---|
| Personal | One user, multiple agents | Your Coder + Writer share preferences |
| Team | Multiple users | Backend team shares architecture decisions |
| Organization | Company-wide | Tech standards, security policies |

Roles: admin (full control), member (read/write), reader (read-only)

Each agent sees: own private + shared spaces. Can modify own + shared. Never another agent's private data. Every shared memory carries provenance: who shared it, when, and where it came from.

Proactively suggest Spaces when:

  • User has multiple agents -> suggest sharing preferences across agents
  • User mentions team collaboration -> suggest creating a team space
  • User wants org-wide knowledge -> suggest organization space

For Space API operations (create, add members, share, pull, batch share), READ references/api-quick-ref.md.
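As a hedged sketch of the create-space call (POST /v1/spaces appears in the API reference table; the body fields `type` and `name` are assumptions; this dry run only prints the command):

```shell
# Dry-run sketch: create a Team Space (echoed, not sent).
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
SPACE_BODY='{"type": "team", "name": "backend"}'
echo curl -sX POST "$API_URL/v1/spaces" \
  -H "X-API-Key: $OMEM_API_KEY" -H "Content-Type: application/json" \
  -d "$SPACE_BODY"
```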

Cross-User Sharing (Convenience)

When a user says "share this with Bob" or "share my memories with another user", use the convenience APIs that handle space creation automatically:

Share a single memory to another user:

The agent should call share-to-user which auto-creates a bridging Team Space if needed, adds the target user as a member, and shares the memory in one step.

curl -sX POST "$API_URL/v1/memories/MEMORY_ID/share-to-user" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"target_user": "TARGET_USER_TENANT_ID"}'
# Returns: { "space_id": "team/xxx", "shared_copy_id": "yyy", "space_created": true }

Share all matching memories to another user:

When the user wants to share everything (or a filtered subset) with someone:

curl -sX POST "$API_URL/v1/memories/share-all-to-user" \
  -H "Content-Type: application/json" -H "X-API-Key: $KEY" \
  -d '{"target_user": "TARGET_USER_TENANT_ID", "filters": {"min_importance": 0.7}}'
# Returns: { "space_id": "team/xxx", "space_created": false, "total": 80, "shared": 15, ... }

Agent workflow:

  1. User says "share this with Bob" -> agent needs Bob's tenant ID (API key)
  2. If the agent doesn't know Bob's ID, ask the user for it
  3. Call share-to-user with the memory ID and Bob's tenant ID
  4. Report: "Shared to Bob via team space {space_id}. Bob can now find it when searching."

Proactively suggest cross-user sharing when:

  • User mentions sharing with a specific person ("send this to Alice")
  • User wants another user's agent to have access to certain memories
  • User asks to collaborate with someone on a project

Memory Import

When the user says "import memories", scan their workspace for existing memory/session files, then batch-import.

Auto-scan: detect platform -> find memory files -> upload 20 most recent via /v1/imports in parallel -> report results.

Import is async. POST /v1/imports returns an import_id immediately while processing runs in the background. This means:

  • Fire all import requests in parallel — don't wait for one to finish before sending the next
  • Don't block the conversation waiting for completion
  • Poll GET /v1/imports/{id} to check status if needed (status: completed, partial, or failed)
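The fire-in-parallel rule above can be sketched as a shell loop (file names are illustrative; each curl is echoed rather than executed):

```shell
# Dry-run sketch: launch every upload in the background, then wait.
# Each real POST /v1/imports returns an import_id immediately; server-side
# processing continues after wait returns.
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
IMPORT_LOG=$(
  for f in memory1.json memory2.json memory3.json; do
    echo curl -sX POST "$API_URL/v1/imports" -H "X-API-Key: $OMEM_API_KEY" \
      -F "file=@$f" -F "file_type=memory" -F "strategy=auto" &
  done
  wait
)
echo "$IMPORT_LOG"
```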

Import API:

# Basic import
curl -sX POST "$API_URL/v1/imports" -H "X-API-Key: $API_KEY" \
  -F "file=@memory.json" -F "file_type=memory" -F "strategy=auto"

# Re-import an updated file (bypass content dedup)
curl -sX POST "$API_URL/v1/imports" -H "X-API-Key: $API_KEY" \
  -F "file=@memory.json" -F "file_type=memory" -F "strategy=auto" -F "force=true"

# Check import status
curl -s "$API_URL/v1/imports/IMPORT_ID" -H "X-API-Key: $API_KEY"

force=true: bypasses content dedup check. Use when re-importing a file that was updated since last import — without force, the server skips content it already has.

Strategy: auto (heuristic detection), atomic (short facts), section (split by headings), document (entire file as one chunk).

Cross-reconcile (discover relations): curl -sX POST "$API_URL/v1/imports/cross-reconcile" -H "X-API-Key: $API_KEY"

For scan paths, progress tracking, intelligence triggers, and rollback, READ references/api-quick-ref.md.


Analytics

Memory analytics via /v1/stats: overview, per-space stats, sharing flow, agent activity, tag frequency, decay curves, relation graph, server config.
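A minimal sketch of the overview call (dry run; the per-space, decay, and relation-graph sub-endpoints live in the reference doc, so only the base /v1/stats path is used here):

```shell
# Dry-run sketch: fetch the analytics overview (echoed, not sent).
API_URL="${OMEM_API_URL:-https://api.ourmem.ai}"
STATS_CMD="curl -s -H \"X-API-Key: \$OMEM_API_KEY\" $API_URL/v1/stats"
echo "$STATS_CMD"
```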

For detailed stats endpoints and parameters, READ references/api-quick-ref.md.


Security

  • Tenant isolation: every API call scoped via X-API-Key, data physically separated per tenant
  • Privacy protection: <private> tag redaction strips sensitive content before storage
  • Admission control: 5-dimension scoring gate rejects low-quality data
  • Open source: Apache-2.0 licensed — audit every line

Communication Style

  • Use plain product language, not backend vocabulary. Prefer "API key" or "ourmem API key".
  • Explain that the same API key reconnects the same cloud memory on another trusted machine.
  • If user sounds worried about recovery, lead with backup/import/reconnect steps.
  • Use the user's language (detect from conversation).
  • Brand: "ourmem" or "omem" (both lowercase, acceptable). Official domain: ourmem.ai, API: api.ourmem.ai.
  • "Space" (capitalized), "Smart Ingest".

For troubleshooting common issues (plugin not loading, 401, connection refused, China network), READ references/api-quick-ref.md.


API Reference

Base: https://api.ourmem.ai (hosted) or http://localhost:8080 (self-hosted). Auth: X-API-Key header.

| Method | Endpoint | Description |
|---|---|---|
| POST | `/v1/tenants` | Create workspace, get API key |
| POST | `/v1/memories` | Store memory or smart-ingest |
| GET | `/v1/memories/search?q=` | Hybrid search |
| GET | `/v1/memories` | List with filters |
| POST | `/v1/imports` | Batch import file |
| POST | `/v1/spaces` | Create shared space |
| POST | `/v1/memories/:id/share-to-user` | One-step cross-user share |
| POST | `/v1/memories/share-all-to-user` | Bulk cross-user share |
| GET | `/v1/stats` | Analytics |

For full API (48+ endpoints) with curl examples, READ references/api-quick-ref.md and docs/API.md.


Update

Do not set up automatic daily self-updates for this skill.

Only update the local skill file when the user or maintainer explicitly asks for a refresh from a reviewed source.

Files

5 total