discord-soul
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
The skill matches its stated purpose, but it asks for a Discord account token, stores an entire server's conversations as persistent agent memory, and does not reliably enforce its advertised safety filtering.
Review carefully before installing. Do not use a personal Discord authorization token unless you fully trust the scripts and environment. Limit which channels are exported, redact secrets, get community consent, verify the safety pipeline actually runs, and avoid enabling cron, heartbeats, or cross-chat bindings until filtering and retention controls are in place.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Any local script, or anyone who can read the token file, may be able to act with the full authority of the user's Discord account; the skill itself uses that token to export server history.
The skill asks the user to extract a Discord browser authorization token and store it locally in plaintext. That token can represent broad account authority, while the registry metadata declares no primary credential.
```
Open Discord in browser ... Copy the `authorization` header value ... Save to `~/.config/discord-exporter-token`
```
Prefer an official Discord bot/OAuth flow with the narrowest possible scopes. If using this workflow, protect the token file, delete it after export, and do not run the scripts on an untrusted machine.
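If the plaintext-token workflow is used regardless, the file should at least be created with owner-only permissions and deleted as soon as the export completes. A minimal sketch (the helper names are illustrative, not part of the skill; only the token path comes from its instructions):

```python
import os

TOKEN_PATH = os.path.expanduser("~/.config/discord-exporter-token")  # path from SKILL.md

def write_token_file(token: str, path: str = TOKEN_PATH) -> None:
    """Create the token file with 0600 permissions from the start,
    so there is no window in which it is group- or world-readable."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(token)

def delete_token_file(path: str = TOKEN_PATH) -> None:
    """Remove the token as soon as the export finishes; a long-lived
    plaintext token is the main exposure here."""
    try:
        os.remove(path)
    except FileNotFoundError:
        pass  # already gone; deletion is idempotent
```

This narrows the window of exposure but does not change the underlying problem: a browser authorization token carries far broader authority than a scoped bot token.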
Private or sensitive Discord messages, names, mentions, reactions, and related metadata can become persistent searchable agent context and may be surfaced in later answers.
The memory generator writes full Discord conversations into markdown memory files, and the visible query does not filter by `safety_status`, channel allowlist, consent, or sensitivity.
```
Get ALL messages for the day with full content ... FROM messages WHERE date(timestamp) = ? ORDER BY timestamp ... # Full Conversation Log ... Everything that was said today, in order
```
Add explicit channel/user allowlists, redaction, retention controls, and a hard `safety_status = 'safe'` filter before writing memory files. Inform server members before indexing their conversations.
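A hardened query could enforce both constraints at once. A sketch, assuming the SQLite table is named `messages` with `timestamp` and `safety_status` columns (both visible in the excerpted query and pipeline) plus `channel` and `author` columns, which are guesses to adapt to the real schema:

```python
import sqlite3

def fetch_safe_messages(conn: sqlite3.Connection, day: str,
                        allowed_channels: set[str]) -> list:
    """Select only rows that both passed safety review and belong to an
    allowlisted channel; flagged or pending rows never reach memory files."""
    if not allowed_channels:
        return []  # empty allowlist: export nothing (fail closed)
    placeholders = ",".join("?" for _ in allowed_channels)
    query = (
        "SELECT channel, author, content FROM messages "
        "WHERE date(timestamp) = ? "
        "AND safety_status = 'safe' "
        f"AND channel IN ({placeholders}) "
        "ORDER BY timestamp"
    )
    return conn.execute(query, (day, *allowed_channels)).fetchall()
```

Note the hard `safety_status = 'safe'` equality: anything unscreened or flagged is excluded by default rather than included by default.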
A user may believe the prompt-injection safety pipeline is available and complete, but the supplied artifacts may not run as documented or may require unreviewed replacement code.
The advertised critical security pipeline calls helper scripts that are not present in the provided file manifest, creating a provenance/completeness gap for the safety path.
```
python3 "$SCRIPT_DIR/to-sqlite.py" "$EXPORT_DIR" "$SQLITE_DB" ... python3 "$SCRIPT_DIR/index-to-lancedb.py" "$SQLITE_DB" "$LANCE_DIR"
```
Provide and review all referenced helper scripts, declare dependencies, pin external tooling where possible, and make the pipeline fail closed if any safety step is missing.
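A fail-closed entry point would verify every referenced helper before anything runs. A sketch, with the script list taken from the excerpted pipeline (extend it to every file the skill references):

```python
import os

REQUIRED_HELPERS = ["to-sqlite.py", "index-to-lancedb.py"]  # referenced by SKILL.md

def missing_helpers(script_dir: str, required=REQUIRED_HELPERS) -> list[str]:
    """Return the referenced helper scripts that are absent from the
    manifest, so the caller can refuse to run (fail closed) rather than
    silently skipping a safety step or pulling in replacement code."""
    return [name for name in required
            if not os.path.isfile(os.path.join(script_dir, name))]
```

The caller would abort when the list is non-empty (e.g. `sys.exit(f"refusing to run, missing helpers: {missing}")`) instead of proceeding with a partial pipeline.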
A single malicious or sensitive Discord message can be automatically carried into memory and influence future agent behavior or connected chat bindings.
The scheduled update path exports new Discord content, regenerates memory, and wakes the agent, but the visible workflow does not run regex or LLM safety screening before the wake step.
```
"$SCRIPT_DIR/incremental_export.sh" --guild "$GUILD_ID" --db "$SQLITE_DB" ... python3 "$SCRIPT_DIR/generate_daily_memory.py" "$TODAY" ... openclaw gateway wake --text "New Discord activity for $TODAY" --mode next-heartbeat
```
Disable cron/heartbeat automation until safety filtering is integrated into the update path. Require manual review or fail closed for flagged, pending, or unverified messages.
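If automation is re-enabled later, the wake step could be gated on a zero count of unscreened rows for the day. A sketch under the same `messages`/`safety_status` schema assumption, treating a NULL status as never screened:

```python
import sqlite3

def safe_to_wake(conn: sqlite3.Connection, day: str) -> bool:
    """True only when every message for the day has been screened and
    marked 'safe'; flagged, pending, or unscreened rows block the wake
    and leave the batch for manual review instead."""
    (unscreened,) = conn.execute(
        "SELECT COUNT(*) FROM messages "
        "WHERE date(timestamp) = ? "
        "AND COALESCE(safety_status, 'pending') != 'safe'",
        (day,),
    ).fetchone()
    return unscreened == 0
```

The cron script would then invoke `openclaw gateway wake ...` only when this returns true, so a single unvetted message cannot ride the scheduled path into agent context.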
Some Discord message content may be processed by an external AI provider during safety review.
The optional safety evaluator sends Discord message snippets and metadata to Anthropic for classification. This is purpose-aligned, but it means private Discord content can leave the local machine.
```
'content': m['content'][:500] ... client.messages.create(model="claude-3-5-haiku-20241022"
```
Use this only with appropriate consent and provider data-policy review, or replace it with a local classifier for sensitive communities.
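If the external evaluator is kept, obvious secrets and PII can at least be stripped locally before the 500-character snippet leaves the machine. A rough sketch; the patterns are illustrative and will not catch everything, so they reduce rather than eliminate exposure:

```python
import re

# Illustrative redaction patterns: email addresses and long opaque strings
# that look like tokens or API keys.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[email]"),
    (re.compile(r"\b[A-Za-z0-9_-]{30,}\b"), "[long-token]"),
]

def redact_snippet(content: str, limit: int = 500) -> str:
    """Apply local redaction, then truncate to the snippet length the
    evaluator sends to the external provider."""
    for pattern, label in REDACTIONS:
        content = pattern.sub(label, content)
    return content[:limit]
```

For communities where even redacted excerpts are too sensitive, a local classifier remains the safer substitute.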
