Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results below before using it.

Threadline — Persistent Memory and Context Layer for AI Agents

v1.0.3

Your AI agents start from zero. Every session. Users repeat themselves — their stack, their preferences, their ongoing projects. Threadline fixes this in 2 l...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign (medium confidence)
Purpose & Capability
The skill claims to provide persistent memory and shows exactly how to inject and update context around LLM calls. The single required env var (THREADLINE_API_KEY) and the shown SDK usage align with the stated purpose.
Instruction Scope
Instructions explicitly tell agents to call inject() before LLM calls and update() after responses and to avoid logging the enriched system prompt. This is expected for a context-injection service but grants the remote service the ability to alter system prompt content and store broad scopes (including 'emotional_state' and 'general'), which may include sensitive personal or project data.
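The inject-before, update-after pattern described above can be sketched as follows. The registry metadata does not document the SDK's actual API, so the `inject` and `update` signatures here are assumptions, and the remote service is replaced with local stubs:

```typescript
// Hypothetical sketch of the inject()/update() pattern the instructions
// describe. The real threadline-sdk API is not documented in the registry
// metadata, so stub implementations stand in for the remote service.

type Message = { role: "system" | "user" | "assistant"; content: string };

// Stub: a real inject() would fetch stored context from the Threadline
// service and prepend it to the conversation as a system message.
async function inject(userId: string, messages: Message[]): Promise<Message[]> {
  const storedContext = `Known context for ${userId}: prefers TypeScript.`; // placeholder
  return [{ role: "system", content: storedContext }, ...messages];
}

// Stub: a real update() would send the exchange back to the service so it
// can extract and persist new facts about the user.
async function update(userId: string, messages: Message[], reply: string): Promise<void> {
  // no-op in this sketch
}

async function chat(userId: string, messages: Message[]): Promise<string> {
  // 1. Enrich the prompt with persistent memory before the LLM call.
  const enriched = await inject(userId, messages);
  // 2. Call the LLM (stubbed: echo the last message).
  const reply = `echo: ${enriched[enriched.length - 1].content}`;
  // 3. Report the exchange back so stored memory stays current.
  await update(userId, messages, reply);
  return reply;
}
```

Note that whatever `inject()` returns becomes part of the system prompt, which is exactly why the scan treats this pattern as a sensitive attack surface.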
Install Mechanism
Instruction-only skill with no install spec or code files, which reduces on-disk risk. Examples reference 'threadline-sdk', but no install instructions or declared dependencies are provided in the registry metadata; this is not harmful but worth noting for implementers.
Credentials
Only THREADLINE_API_KEY is required and is appropriate for a hosted service. No unrelated credentials, system paths, or extra secrets are requested.
Persistence & Privilege
The 'always' flag is false and the skill does not request permanent platform-wide privileges. The SDK pattern requires the service to persist user context by design; this is expected behavior.
Scan Findings in Context
[system-prompt-override] expected: The skill must inject system prompts to provide persistent context, so a 'system-prompt-override' pattern is expected. However, this capability is a sensitive attack surface: if the remote service is compromised or malicious it can influence agent behavior via injected prompts.
Assessment
This skill appears to do what it claims, but it will send user messages and agent responses to an external service that injects system prompts back into your LLM calls. Before installing: (1) Verify you trust threadline.to and review their privacy and data-retention policies; (2) Confirm the official SDK/package name and install from a trusted registry (npm) or the vendor's documented source; (3) Avoid sending highly sensitive PII or secrets into the memory store, or implement client-side redaction/encryption if needed; (4) Limit the API key's permissions where possible and rotate keys regularly; (5) Test with non-sensitive data and verify deletion/retention behavior via their dashboard; (6) Consider self-hosting or an alternative if you require full control over stored context.
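The client-side redaction suggested in point (3) can be as simple as masking obvious secret and PII patterns before any text leaves the process. The rules below are illustrative assumptions, not an exhaustive ruleset:

```typescript
// Minimal client-side redaction sketch: mask obvious secrets and PII
// before text is sent to an external memory store. The patterns are
// illustrative only; real PII detection needs a broader ruleset or a
// dedicated library.

const REDACTION_RULES: Array<[RegExp, string]> = [
  [/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, "[EMAIL]"], // email addresses
  [/\bsk-[A-Za-z0-9]{16,}\b/g, "[API_KEY]"],                          // sk- prefixed API keys
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],                                // US SSN format
];

// Apply every rule in order, replacing all matches with a label.
function redact(text: string): string {
  return REDACTION_RULES.reduce(
    (acc, [pattern, label]) => acc.replace(pattern, label),
    text
  );
}
```

Running redaction on every message before it reaches `inject()`/`update()` keeps the memory store useful while stripping the highest-risk strings.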
SKILL.md:54: Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.


latest: vk977g7jj3yn5r3b3n2v8bqgthx83wqex


Runtime requirements

Environment variables
THREADLINE_API_KEY (required): Your Threadline API key; get one at threadline.to/dashboard
