OpenClaw Profanity Plugin
Review
Audited by ClawScan on May 1, 2026.
Overview
This is a coherent profanity-moderation guide, but it pulls in an external npm package and can be configured to log or act against users, so its permissions and data-retention behavior should be reviewed before deployment.
Before using this skill, verify the referenced npm/GitHub package, pin the dependency version, and configure the bot with least-privilege permissions. Use cautious moderation defaults, especially before enabling deletion or bans, and set clear privacy/retention rules for any violation logs.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing the referenced package would bring in code outside the provided skill artifact.
The skill instructs users to install an external npm package, while the submitted artifacts contain no package code to review. This is normal for a plugin setup, but users should verify the upstream package provenance.
npm install openclaw-profanity
Install only from the intended npm package, review the linked repository/package contents, and use pinned versions or a lockfile where possible.
A misconfigured bot could delete, block, or ban users based on automated profanity decisions.
The custom moderation-handler example can ban a user and block a message. This is purpose-aligned for moderation, but it is a high-impact action if connected to a live community.
await banUser(message.userId);
return { blocked: true };
Use conservative defaults, clear thresholds, manual review for bans where appropriate, and least-privilege bot permissions.
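A conservative handler along these lines might warn on early offenses, block only after repeated violations, and flag for human review rather than auto-banning. This is a minimal sketch under assumptions: the `message`/`result` shapes, the returned action object, and the thresholds are illustrative, not the plugin's actual API.

```javascript
// Hypothetical conservative moderation handler (field names are assumptions).
const BLOCK_THRESHOLD = 3; // block only after repeated violations

const violationCounts = new Map();

function moderate(message, result) {
  if (!result.isProfane) return { blocked: false };

  const count = (violationCounts.get(message.userId) || 0) + 1;
  violationCounts.set(message.userId, count);

  if (count < BLOCK_THRESHOLD) {
    // Early offenses: warn the user, leave the message up.
    return { blocked: false, warn: true };
  }
  // Repeated offenses: block the message, but queue the user for
  // manual review instead of calling banUser() automatically.
  return { blocked: true, flagForReview: true };
}
```

Keeping bans out of the automated path means a false positive costs one hidden message, not a lost community member.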
If the bot account is over-permissioned, the moderation plugin could affect more channels or users than intended.
Deleting original messages on a chat platform implies delegated bot moderation authority. This fits the stated moderation purpose, but the granted platform permissions should be bounded.
platform: 'telegram' ... deleteOriginal: true
Limit the bot token/account to the specific workspaces, channels, and moderation permissions required for the intended use.
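A bounded configuration could make those limits explicit. This is a sketch only: the field names (`allowedChats`, `permissions`) and the chat ID are hypothetical and not the plugin's real schema, which should be checked against its documentation.

```javascript
// Hypothetical least-privilege config (schema is an assumption).
const moderationConfig = {
  platform: 'telegram',
  // Restrict moderation to the specific chats the bot should cover.
  allowedChats: ['-1001234567890'], // illustrative chat ID
  // Grant only what message moderation actually needs.
  permissions: {
    deleteMessages: true,
    banUsers: false, // keep bans a manual, human action
  },
  deleteOriginal: true,
};
```

Pairing a config like this with a bot token that holds only delete-message rights on those chats keeps a misconfiguration from spilling into other channels.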
Violation logs could expose user IDs, message-derived content, or false-positive moderation history if stored too broadly or retained too long.
The example tracks user identifiers and profane words for repeat-offender handling. This can create persistent moderation records containing sensitive user behavior data.
await trackViolation(message.userId, result.profaneWords);
Define retention, access controls, and appeal/review procedures for moderation logs; avoid storing more message content than needed.
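One way to apply those rules is to log only counts and timestamps rather than the flagged words, and prune entries past a retention window. The helper names and the 30-day window below are assumptions for illustration, not part of the plugin.

```javascript
// Hypothetical minimal violation log with retention (names are assumptions).
const RETENTION_MS = 30 * 24 * 60 * 60 * 1000; // e.g. 30 days

const violationLog = [];

function trackViolation(userId, profaneWords) {
  // Record a count and a timestamp; avoid persisting the words
  // or any other message-derived content.
  violationLog.push({
    userId,
    wordCount: profaneWords.length,
    at: Date.now(),
  });
}

function pruneExpired(now = Date.now()) {
  // Drop entries older than the retention window.
  for (let i = violationLog.length - 1; i >= 0; i--) {
    if (now - violationLog[i].at > RETENTION_MS) violationLog.splice(i, 1);
  }
}
```

Running the prune step on a schedule, and restricting read access to moderators, limits how long a false positive can follow a user around.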
