Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

SwarmRecall Learnings

v1.1.0

Error tracking, correction logging, and pattern detection via the SwarmRecall API. Tracks agent mistakes, corrections, and discoveries to surface recurring issues.

by Wayde (@waydelyle)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for waydelyle/swarmrecall-learnings.

Prompt preview: Install & Setup
Install the skill "SwarmRecall Learnings" (waydelyle/swarmrecall-learnings) from ClawHub.
Skill page: https://clawhub.ai/waydelyle/swarmrecall-learnings
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: SWARMRECALL_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install swarmrecall-learnings

ClawHub CLI


npx clawhub@latest install swarmrecall-learnings
Security Scan

Capability signals: Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill is described as a wrapper for the SwarmRecall API and only requests a single API key (SWARMRECALL_API_KEY). This is proportionate for an external logging/persistence integration and matches the stated purpose.
Instruction Scope
SKILL.md instructs the agent to auto-register (if no key present), to call POST /learnings on errors and corrections, and to GET patterns on session start. While the doc also states the agent must obtain user consent before storing personal/sensitive information, the operational steps ('On error: call POST /api/v1/learnings') do not require or describe prompting the user before each upload. That gap could lead to automatic transmission of command outputs or other sensitive content. The instructions also reference an override env var SWARMRECALL_API_URL (to change the base URL) but that env var is not declared in the skill metadata.
Install Mechanism
This is an instruction-only skill with no install spec and no code files. Nothing is written to disk by default and no external binaries are required — lowest-risk install surface.
Credentials
Only SWARMRECALL_API_KEY is declared as required (appropriate). However SKILL.md references SWARMRECALL_API_URL as an optional override but does not declare it in requires.env. The auto-registration flow returns an apiKey which the instructions tell the agent to store in the SWARMRECALL_API_KEY environment variable — the document warns not to write keys to disk without consent, which is appropriate. Overall the requested env access is limited and proportionate, but the undeclared SWARMRECALL_API_URL and auto-registration behavior are minor inconsistencies to be aware of.
Persistence & Privilege
The skill will be able to send and read persistent data from the SwarmRecall service (learnings, patterns, promotions). Autonomous invocation is allowed by default and SKILL.md explicitly instructs the agent to GET patterns on session start and POST errors/corrections — that means, unless the agent enforces explicit consent checks, data could be sent automatically. Shared pool functionality can also publish learnings to other tenants; the doc notes pools exist but does not show any guardrails for user confirmation before sharing externally.
What to consider before installing
This skill appears to be what it says (an API-backed learning/error-logging integration), but pay attention to these points before installing:

  • Consent gating: SKILL.md says you must have user consent for storing personal/sensitive info, but also instructs the agent to automatically POST errors. Confirm the agent implementation will always prompt you before uploading any command outputs or other potentially sensitive data.
  • API key handling: Consider providing your own SWARMRECALL_API_KEY instead of allowing the agent to self-register. The auto-registration flow returns a key and claim token; the instructions tell the agent not to write keys to disk without consent. Verify how (and where) your key will be stored and who can access it.
  • Undeclared env var: The document mentions SWARMRECALL_API_URL as an override, but that env var is not declared in the skill metadata. If you plan to use a different API endpoint, verify how that will be configured and whether the agent will read it.
  • Data sharing: Be cautious with "poolId" / shared pools; learnings written to a pool may be visible to others. If you have sensitive operational data, avoid sharing into pools or confirm pool membership rules.
  • Domain and hosting: The API endpoint is hosted at swarmrecall-api.onrender.com (a third-party hosting domain). If you have strict compliance requirements, review SwarmRecall's privacy policy and data retention/deletion controls, and confirm you trust the external host.

Recommended actions:

  • Prefer providing a pre-issued SWARMRECALL_API_KEY and require the agent to ask for explicit consent before any POST that includes user data or command output.
  • Ask the skill author (or check agent config) to add explicit prompts and filters that redact secrets from any learning content before uploading.
  • If you will use shared pools, confirm who is in the pool and what data is allowed to be shared. If uncertain, disable pool writes.
Given these inconsistencies and the potential for inadvertent data upload, treat the skill as suspicious until you verify consent and upload controls.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Env: SWARMRECALL_API_KEY (primary)
Latest: vk97asmcwjzvpm3nn379dgmsa3h847atk
129 downloads · 0 stars · 4 versions
Updated 3w ago
v1.1.0 · MIT-0 license

SwarmRecall Learnings

Error tracking, correction logging, and pattern detection via the SwarmRecall API.

Auto-Registration

Before making any API calls, check for a SWARMRECALL_API_KEY environment variable:

  1. If SWARMRECALL_API_KEY is set, use it for all requests as a Bearer token.
  2. If SWARMRECALL_API_KEY is not set, self-register by calling:
    POST https://swarmrecall-api.onrender.com/api/v1/register
    Content-Type: application/json
    
    { "name": "<your-agent-name>" }
    
  3. The response returns { "apiKey": "...", "claimToken": "..." }. Save the apiKey to the SWARMRECALL_API_KEY environment variable for all subsequent requests. Do NOT write the key to disk or any file without user consent.
  4. Tell the user: "SwarmRecall is set up! To manage your agent's data, visit swarmrecall.ai/claim with code: <claimToken>"
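Taken together, the steps above can be sketched in Python. This is an illustration under assumptions, not shipped code: the endpoint URL, env var names, and response fields come from this page, while `agent_name`, `_post_json`, and the injectable `post` parameter (which lets the network call be stubbed) are inventions of the sketch.

```python
import json
import os
import urllib.request

API_BASE = os.environ.get("SWARMRECALL_API_URL", "https://swarmrecall-api.onrender.com")

def _post_json(url, payload):
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ensure_api_key(agent_name="my-agent", post=_post_json):
    """Return an API key, self-registering only when SWARMRECALL_API_KEY is unset."""
    key = os.environ.get("SWARMRECALL_API_KEY")
    if key:
        return key
    body = post(f"{API_BASE}/api/v1/register", {"name": agent_name})
    # Keep the key in the process environment only; do not write it to disk.
    os.environ["SWARMRECALL_API_KEY"] = body["apiKey"]
    print("SwarmRecall is set up! To manage your agent's data, "
          f"visit swarmrecall.ai/claim with code: {body['claimToken']}")
    return body["apiKey"]
```

Note that registration runs only on the fallback path; if a key is already present, nothing is sent.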

Authentication

All API requests require:

Authorization: Bearer <SWARMRECALL_API_KEY>
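A small sketch of a helper that builds this header (the function name is illustrative, not part of the API):

```python
def auth_headers(api_key: str) -> dict:
    """Headers for a SwarmRecall API request.

    Authorization is required on every call; Content-Type only matters
    for requests that carry a JSON body.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```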

API Base URL

https://swarmrecall-api.onrender.com (override with SWARMRECALL_API_URL if set)

All endpoints below are prefixed with /api/v1.

Privacy & Data Handling

  • All data is sent to swarmrecall-api.onrender.com over HTTPS
  • Learning data (errors, corrections, discoveries) is stored server-side with vector embeddings for semantic search
  • Data is isolated per agent and owner — no cross-tenant access
  • Before storing user-provided content, ensure the user has consented to external storage
  • The SWARMRECALL_API_KEY should be stored as an environment variable only, not written to disk

Endpoints

Log a learning

POST /api/v1/learnings
{
  "category": "error",        // error | correction | discovery | optimization | preference
  "summary": "npm install fails with peer deps",
  "details": "Full error output...",
  "priority": "high",         // low | medium | high | critical
  "area": "build",
  "suggestedAction": "Use --legacy-peer-deps flag",
  "tags": ["npm", "build"],
  "metadata": {},
  "poolId": "<uuid>"          // optional — write to shared pool
}
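Since `details` typically carries raw command output, a cautious client might redact obvious secrets before assembling the body. A minimal sketch: the category values match the field comments above, but the regex and helper names are illustrative, and real redaction needs project-specific rules.

```python
import re

# Crude pattern for secret-looking assignments; an illustration only.
SECRET_RE = re.compile(r"(?i)\b(api[_-]?key|token|password|secret)\b\s*[=:]\s*\S+")

VALID_CATEGORIES = {"error", "correction", "discovery", "optimization", "preference"}

def build_learning(category, summary, details, **extra):
    """Assemble a POST /api/v1/learnings body, redacting secrets from details."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    payload = {
        "category": category,
        "summary": summary,
        "details": SECRET_RE.sub(r"\1=<redacted>", details),
    }
    payload.update(extra)  # priority, area, suggestedAction, tags, poolId, ...
    return payload
```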

Search learnings

GET /api/v1/learnings/search?q=<query>&limit=10&minScore=0.5
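The query string above can be assembled with standard URL encoding so free-text queries are escaped safely; a sketch (the function name is an assumption):

```python
from urllib.parse import urlencode

def search_url(base, query, limit=10, min_score=0.5):
    """Build the GET /api/v1/learnings/search URL with an encoded query."""
    params = urlencode({"q": query, "limit": limit, "minScore": min_score})
    return f"{base}/api/v1/learnings/search?{params}"
```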

Get a learning

GET /api/v1/learnings/:id

List learnings

GET /api/v1/learnings?category=error&status=open&priority=high&area=build&limit=20&offset=0

Update a learning

PATCH /api/v1/learnings/:id
{ "status": "resolved", "resolution": "Added --legacy-peer-deps", "resolutionCommit": "abc123" }

Get recurring patterns

GET /api/v1/learnings/patterns

Get promotion candidates

GET /api/v1/learnings/promotions

Link related learnings

POST /api/v1/learnings/:id/link
{ "targetId": "<other-learning-id>" }

Behavior

  • On error: call POST /api/v1/learnings with category: "error", the summary, details, and the command/output that failed.
  • On correction: call POST /api/v1/learnings with category: "correction" and what was wrong vs. what is correct.
  • On session start: call GET /api/v1/learnings/patterns to preload known recurring issues. Check GET /api/v1/learnings/promotions for patterns ready to be promoted.
  • On promotion candidates: surface candidates to the user for approval before acting on them.
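Given the consent-gating concern raised in the scan report, a client could wrap the "on error" behavior behind an explicit confirmation before anything leaves the machine. A hedged sketch; the `confirm` and `post` callbacks are assumptions of the sketch, not part of the skill:

```python
def on_error(summary, details, post, confirm):
    """Log an error learning only after the user explicitly approves the upload."""
    if not confirm(f"Send error '{summary}' to SwarmRecall?"):
        return None  # user declined; nothing is transmitted
    return post("/api/v1/learnings", {
        "category": "error",
        "summary": summary,
        "details": details,
        "priority": "medium",
    })
```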

Shared Pools

  • The POST /api/v1/learnings endpoint accepts an optional "poolId" field.
  • When poolId is provided, the learning is shared with all pool members who have learnings read access.
  • The agent must have readwrite access to the pool's learnings module to write shared learnings.
  • Search (GET /api/v1/learnings/search) and list (GET /api/v1/learnings) results automatically include data from pools the agent belongs to.
  • Pool data in responses includes poolId and poolName fields to distinguish shared data from the agent's own data.

Dreaming Integration

Learnings benefit from dream-time promotion:

  • Promotion candidates: The existing GET /api/v1/learnings/promotions endpoint surfaces patterns meeting promotion criteria (3+ recurrences, 2+ sessions, within 30 days). During a dream cycle, the agent reads each candidate, synthesizes a best-practice learning, and creates it via POST /api/v1/learnings with category: "best_practice" and status: "promoted".
  • Pattern consolidation: Related learnings are already linked via POST /api/v1/learnings/:id/link. During dreaming, the agent can review patterns and archive individual learnings that are fully subsumed by the promoted best practice.
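The promotion criteria (3+ recurrences, 2+ sessions, within 30 days) are evaluated server-side by the promotions endpoint, but a client-side mirror of the stated rule might look like the following sketch (the parameter names are assumptions):

```python
from datetime import datetime, timedelta, timezone

def meets_promotion_criteria(recurrences, sessions, last_seen, now=None):
    """Mirror of the stated rule: 3+ recurrences, 2+ sessions, within 30 days."""
    now = now or datetime.now(timezone.utc)
    return recurrences >= 3 and sessions >= 2 and now - last_seen <= timedelta(days=30)
```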
