Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Arshis-Memory-Pro-main

v3.1.0

Advanced memory management system with multi-provider LLM support (SiliconFlow/DashScope/Jina/OpenAI/Cohere/Voyage/Google/Claude/Ollama), auto-failover, hybr...

Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious (view report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (provider-agnostic memory manager, auto-failover, hybrid retrieval) match the included code: memory storage/recall, lifecycle management, session memory, smart extraction, provider calls, and health checks. The declared capabilities are implemented, and the dependencies in requirements.txt are consistent with the functionality.
Instruction Scope
SKILL.md and the scripts enable automatic capture/injection hooks (auto_capture.before_agent_start and after_agent_reply) that read conversation history, store entries to local files (~/.openclaw/data and sessions), and call embedding/LLM provider APIs. That behavior is privacy-sensitive, and the instructions default to auto-capture enabled. SKILL.md also instructs the user to copy/edit a config file and set provider API keys; nothing in the instructions limits what content gets captured before user review. The code reads and writes multiple user-home paths (~/.openclaw/... and ~/.openclaw/workspace/.learnings), which is expected for a memory system but increases the surface where sensitive data is persisted.
Install Mechanism
No remote install/download steps are present in the registry metadata; the skill ships as code files and a requirements.txt. There are no URL downloads or extract steps in the install spec (there is no install spec). This reduces supply-chain risk compared to arbitrary remote downloads. The included dependencies (lancedb, rank-bm25, requests, numpy, scipy, pydantic) are reasonable for the described features.
Credentials
The package expects and uses provider API keys (e.g., DASHSCOPE_API_KEY and OPENAI_API_KEY in SKILL.md, and JINA_API_KEY and other env-derived keys inside memory_core.py), but the registry metadata claims "Required env vars: none". That is an inconsistency: the skill will attempt network calls using API keys read from env vars (or configured values), yet those credentials are not declared in the skill metadata. The number and type of env vars are proportionate to the multi-provider design, but the omission from metadata is a red flag: users may not realize which secrets they must supply, or that the skill will send conversation text to external services.
Persistence & Privilege
The skill does not request always:true and does not modify other skills. It persistently writes data to the user's home: ~/.openclaw/data/memory-custom, ~/.openclaw/data/memory-sessions, and ~/.openclaw/workspace/.learnings. That is expected for a memory manager, but combined with autonomous invocation (platform default) and the default AUTO_CAPTURE_ENABLED=True, this creates a potential for sensitive data to be stored locally and transmitted to remote providers without explicit per-item user confirmation.
What to consider before installing

  • The code implements automatic capture of conversation text and persists data under ~/.openclaw/ (sessions, memories, learning/error logs). If you have sensitive conversations, disable AUTO_CAPTURE or run in an isolated account/container before enabling it.
  • The skill sends text to external embedding/LLM providers (Jina, SiliconFlow, DashScope, OpenAI, Claude, etc.). Only provide API keys for providers you trust. Prefer a local provider (Ollama) if you want to avoid outbound network traffic.
  • The registry metadata does not declare required environment variables, but the code expects several (e.g., JINA_API_KEY, DASHSCOPE_API_KEY, OPENAI_API_KEY). Treat this omission as a packaging/information-quality issue: explicitly audit and set environment variables rather than guessing.
  • Review and/or run the code in a sandboxed environment (isolated VM/container) to confirm its behavior and inspect what gets stored and sent to remote endpoints. Search the repo for any additional endpoint URLs or unexpected network calls.
  • If you proceed, set AUTO_CAPTURE_ENABLED=False (or carefully configure capture rules) and review the config file. SKILL.md points to /root/.openclaw/data/memory-custom-config-multi.json, but the code uses ~/.openclaw; make sure you edit the actual path in your environment.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97baf9g6z1kbqdzbtjv65gckd85bv8x
Memory-Pro: vk97exwxw63mdbjgvsd72y9zym585626t
71 downloads · 1 star · 2 versions
Updated 3h ago
v3.1.0
MIT-0

Arshis-memory-pro v3.0.0

Advanced Memory Management System with Multi-Provider Support


🎯 New Features in v3.0.0

1. Multi-Provider LLM Support 🌍

Supported Providers:

  • SiliconFlow (Default)

    • Embedding: BAAI/bge-m3 (1024 dim)
    • Rerank: BAAI/bge-reranker-v2-m3
    • LLM: Qwen/Qwen2.5-72B-Instruct
  • DashScope (Alibaba)

    • Embedding: text-embedding-v3 (1536 dim)
    • Rerank: gte-rerank
    • LLM: qwen-max
  • OpenAI (Optional)

    • Embedding: text-embedding-3-small (1536 dim)
    • LLM: gpt-4o-mini
  • Claude (Optional)

    • LLM: claude-3-5-sonnet-20241022
  • Ollama (Local)

    • Embedding: nomic-embed-text (768 dim)
    • LLM: qwen2.5:7b

2. Auto-Failover 🔄

Automatic Provider Switching:

  • ✅ Primary provider fails → Auto-switch to backup
  • ✅ Configurable priority order
  • ✅ Timeout protection
  • ✅ Health check monitoring

Configuration Example:

{
  "embedding": {
    "provider": "auto",
    "autoFailover": true,
    "providers": {
      "siliconflow": { "priority": 1 },
      "dashscope": { "priority": 2 },
      "openai": { "priority": 3 }
    }
  }
}
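The failover behavior described above can be sketched as a priority-ordered loop; this is a minimal illustration, not the skill's actual implementation, and the provider dict shape (`name`, `priority`, `embed`) is assumed for the example:

```python
def embed_with_failover(text, providers, timeout=30):
    """Try providers in ascending priority order; return the first success.

    `providers` is a list of dicts with hypothetical keys:
    name, priority, and embed (a callable that may raise on failure).
    """
    errors = {}
    for provider in sorted(providers, key=lambda p: p["priority"]):
        try:
            # Provider-specific call; a timeout or HTTP error triggers failover.
            return provider["embed"](text, timeout=timeout)
        except Exception as exc:
            errors[provider["name"]] = str(exc)
    raise RuntimeError(f"All providers failed: {errors}")
```

With `autoFailover: true`, a timeout on the priority-1 provider would fall through to priority 2, and so on, matching the config above.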

3. Environment Variable Support 🔐

Secure API Key Management:

{
  "embedding": {
    "providers": {
      "dashscope": {
        "apiKey": "${DASHSCOPE_API_KEY}"
      },
      "openai": {
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}

Usage:

export DASHSCOPE_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

📊 Core Features (Unchanged)

1. Hybrid Retrieval

  • Vector similarity (70%) + BM25 (30%)
  • Cross-Encoder reranking
  • +15-20% accuracy improvement
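The 70/30 blend above can be sketched as a weighted sum of normalized per-document scores; the min-max normalization is an assumption for the example, not necessarily what the skill uses:

```python
def hybrid_scores(vector_scores, bm25_scores, alpha=0.7):
    """Blend per-document scores: alpha * vector + (1 - alpha) * BM25.

    Each input is a dict of {doc_id: raw_score}; both are min-max
    normalized to [0, 1] before blending so the weights are meaningful.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero on identical scores
        return {doc: (s - lo) / span for doc, s in scores.items()}

    v, b = normalize(vector_scores), normalize(bm25_scores)
    return {doc: alpha * v.get(doc, 0.0) + (1 - alpha) * b.get(doc, 0.0)
            for doc in set(v) | set(b)}
```

The Cross-Encoder rerank step would then reorder only the top few blended candidates, which keeps its cost bounded.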

2. Smart Extraction

  • Auto-summary (20 chars)
  • Auto-keywords (3-5)
  • Auto-categorization
  • Importance scoring (0-1)

3. Lifecycle Management

  • Weibull decay model
  • Category-specific decay rates
  • Knowledge: 2%/year
  • Characters: 10%/year
  • Preferences: 30%/year
  • Events: 90% decay
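A Weibull survival curve R(t) = exp(-(t/λ)^k) can express the category rates above. The sketch below is illustrative only: it assumes the listed percentages are annual decay rates (including the 90% figure for events) and picks the scale λ so that one year of age reproduces each rate; the skill's actual shape/scale parameters are not documented here.

```python
import math

# Assumed annual decay per category, taken from the rates listed above.
ANNUAL_DECAY = {"knowledge": 0.02, "character": 0.10,
                "preference": 0.30, "event": 0.90}

def retention(age_years, category, shape=1.0):
    """Weibull survival R(t) = exp(-(t/scale)^shape).

    The scale is calibrated so retention(1.0, category) == 1 - annual decay.
    shape=1.0 reduces to plain exponential decay; shape > 1 makes memories
    hold steady at first and then drop off faster.
    """
    decay = ANNUAL_DECAY.get(category, 0.30)
    scale = 1.0 / (-math.log(1.0 - decay)) ** (1.0 / shape)
    return math.exp(-((age_years / scale) ** shape))
```

Under these assumptions a knowledge memory retains 98% of its score after a year, while an event memory keeps only 10%.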

4. Self-Evolution

  • Auto feedback collection
  • Parameter optimization
  • Pattern learning
  • Continuous improvement

5. Short-Term Memory

  • 50 items max
  • 2-hour expiry
  • Auto-expire
  • Priority filtering
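The four constraints above (50-item cap, 2-hour TTL, auto-expiry, priority filtering) fit in a small ring buffer. This is an illustrative sketch; the class and field names are not the skill's actual API:

```python
import time
from collections import deque

class ShortTermMemory:
    """Bounded buffer of recent items with time-based expiry."""

    def __init__(self, max_items=50, ttl_seconds=2 * 3600):
        self.max_items = max_items
        self.ttl = ttl_seconds
        self._items = deque()  # entries of (timestamp, priority, text)

    def add(self, text, priority=0.5, now=None):
        now = time.time() if now is None else now
        self._items.append((now, priority, text))
        while len(self._items) > self.max_items:
            self._items.popleft()  # evict the oldest entry when over capacity

    def active(self, min_priority=0.0, now=None):
        """Return unexpired items at or above the given priority, oldest first."""
        now = time.time() if now is None else now
        return [text for ts, priority, text in self._items
                if now - ts < self.ttl and priority >= min_priority]
```

Expiry here is lazy (filtered on read rather than on a timer), which is a common simplification; a background sweep would work equally well.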

6. Dreaming Mode

  • Sleep memory consolidation
  • Morning brief
  • Creative incubation

🔧 Configuration

Multi-Provider Config

File: /root/.openclaw/data/memory-custom-config-multi.json

Quick Start:

# Copy multi-provider template
cp memory-custom-config-multi.json memory-custom-config.json

# Edit configuration
nano memory-custom-config.json

# Set environment variables
export DASHSCOPE_API_KEY="your-key"
export OPENAI_API_KEY="your-key"

Provider Priority

Default Order:

  1. SiliconFlow (Primary)
  2. DashScope (Backup)
  3. OpenAI (Optional)
  4. Claude (Optional)
  5. Ollama (Local)

Change Priority:

{
  "embedding": {
    "providers": {
      "dashscope": { "priority": 1 },
      "siliconflow": { "priority": 2 }
    }
  }
}

📈 Performance Comparison

| Provider | Embedding Speed | LLM Speed | Cost | Accuracy |
|---|---|---|---|---|
| SiliconFlow | Fast | Fast | Low | High |
| DashScope | Fast | Fast | Medium | High |
| OpenAI | Medium | Medium | High | Very High |
| Claude | N/A | Slow | Very High | Very High |
| Ollama | Slow | Slow | Free | Medium |

🎯 Usage Examples

Example 1: Store Memory

from memory_core import MemoryAPI

api = MemoryAPI()

# Store with auto-provider
api.store("User prefers coffee over tea", 0.8, "preference")

# Auto-failover if primary fails
# Falls back to secondary provider

Example 2: Recall Memory

# Search with hybrid retrieval
results = api.recall("What does user like to drink?", limit=5)

# Results ranked by:
# 1. Vector similarity
# 2. BM25 keyword match
# 3. Cross-Encoder rerank

Example 3: Provider Status

# Check provider health
status = api.get_provider_status()

# Output:
# {
#   "siliconflow": "healthy",
#   "dashscope": "healthy",
#   "openai": "unavailable"
# }
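A health check like the one behind `get_provider_status()` usually reduces to probing an endpoint with a short timeout. The sketch below is hypothetical (the real implementation and endpoint URLs are not shown here); the injectable `opener` exists only to make the probe testable without network access:

```python
import urllib.request
import urllib.error

def check_provider(base_url, timeout=5, opener=urllib.request.urlopen):
    """Return 'healthy' if the endpoint answers, 'unavailable' otherwise.

    Any connection error, timeout, or HTTP error counts as unavailable;
    a reachable endpoint returning a normal status counts as healthy.
    """
    try:
        with opener(base_url, timeout=timeout) as resp:
            return "healthy" if resp.status < 500 else "unavailable"
    except (urllib.error.URLError, OSError):
        return "unavailable"
```

Running such a probe per configured provider and collecting the results into a dict would produce output shaped like the example above.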

🔍 Troubleshooting

Issue 1: Provider Fails

Symptom: API timeout/error

Solution:

{
  "embedding": {
    "autoFailover": true,
    "timeout": 30
  }
}

Issue 2: API Key Error

Symptom: 401 Unauthorized

Solution:

# Check environment variable
echo $DASHSCOPE_API_KEY

# Update config
"apiKey": "${DASHSCOPE_API_KEY}"

Issue 3: Slow Response

Symptom: High latency

Solution:

{
  "embedding": {
    "providers": {
      "siliconflow": { "priority": 1 },
      "ollama": { "priority": 2 }
    }
  }
}

📝 Version History

v3.0.0 (2026-04-22)

  • ✅ Multi-provider LLM support
  • ✅ Auto-failover mechanism
  • ✅ Environment variable support
  • ✅ Provider health monitoring
  • ✅ Configurable priority order

v2.0.0 (2026-04-15)

  • ✅ Self-evolution system
  • ✅ Short-term memory
  • ✅ Dreaming mode
  • ✅ Hybrid retrieval

v1.0.0 (2026-04-13)

  • ✅ Initial release
  • ✅ Basic memory storage
  • ✅ Vector retrieval

🦊 Support

GitHub: https://github.com/Arshis/Arshis-Memory-Pro
Issues: https://github.com/Arshis/Arshis-Memory-Pro/issues
Author: Arshis
License: MIT-0


Arshis-memory-pro v3.0.0
Make memory management more professional, efficient, and provider-agnostic!
