OpenClaw Profanity Plugin

Review audited by ClawScan on May 1, 2026.

Overview

This is a coherent profanity-moderation guide, but it relies on an external npm package and can be configured to log or moderate users, so permissions and retention should be reviewed.

Before using this skill, verify the referenced npm/GitHub package, pin the dependency version, and configure the bot with least-privilege permissions. Use cautious moderation defaults, especially before enabling deletion or bans, and set clear privacy/retention rules for any violation logs.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: External npm dependency outside the audited artifact

What this means

Installing the referenced package would bring in code outside the provided skill artifact.

Why it was flagged

The skill instructs users to install an external npm package, while the submitted artifacts contain no package code to review. This is normal for a plugin setup, but users should verify the upstream package provenance.

Skill content
npm install openclaw-profanity
Recommendation

Install only from the intended npm package, review the linked repository/package contents, and use pinned versions or a lockfile where possible.
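The pinning recommendation above can be sketched as follows; the version number is a placeholder, not a known release, and should be replaced with the specific release you have reviewed.

```shell
# Pin an exact version (1.0.0 is a placeholder, not a verified release)
npm install --save-exact openclaw-profanity@1.0.0

# On deploy, install exactly what package-lock.json records
npm ci
```

`npm ci` fails rather than silently resolving if the lockfile and package.json disagree, which keeps the installed code identical to what was reviewed.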

Finding 2: High-impact automated moderation actions

What this means

A misconfigured bot could delete, block, or ban users based on automated profanity decisions.

Why it was flagged

The custom moderation-handler example can ban a user and block a message. This is purpose-aligned for moderation, but it is a high-impact action if connected to a live community.

Skill content
await banUser(message.userId);
return { blocked: true };
Recommendation

Use conservative defaults, clear thresholds, manual review for bans where appropriate, and least-privilege bot permissions.
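The recommendation above can be sketched as a conservative handler that warns by default and routes repeat offenders to human review instead of banning automatically. The names (`moderate`, `isProfane`, `pendingReview`), the review queue, and the three-strike threshold are illustrative assumptions, not documented plugin API.

```javascript
// Hypothetical conservative moderation handler. Assumptions: the
// reviewQueue, the threshold, and all field names are illustrative.
const reviewQueue = [];
const BAN_REVIEW_THRESHOLD = 3; // strikes before a ban is even considered

async function moderate(message, result, priorViolations) {
  if (!result.isProfane) {
    return { blocked: false };
  }
  if (priorViolations >= BAN_REVIEW_THRESHOLD) {
    // Queue for manual review rather than calling banUser directly.
    reviewQueue.push({ userId: message.userId, reason: 'repeat offender' });
    return { blocked: true, pendingReview: true };
  }
  // Low-impact default: let the message through with a warning.
  return { blocked: false, warned: true };
}
```

The key design choice is that no code path calls a ban API directly: the highest-impact outcome is a blocked message plus a review-queue entry for a human to act on.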

Finding 3: Broad delegated platform permissions

What this means

If the bot account is over-permissioned, the moderation plugin could affect more channels or users than intended.

Why it was flagged

Deleting original messages on a chat platform implies delegated bot moderation authority. This fits the stated moderation purpose, but the granted platform permissions should be bounded.

Skill content
platform: 'telegram' ... deleteOriginal: true
Recommendation

Limit the bot token/account to the specific workspaces, channels, and moderation permissions required for the intended use.
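One way to express this bounded-permission recommendation is a scoped configuration object. Only `platform` and `deleteOriginal` appear in the skill excerpt itself; keys such as `allowedChannels` and `permissions` are assumptions for this sketch and may not match the plugin's actual schema.

```javascript
// Illustrative least-privilege configuration sketch. Only platform and
// deleteOriginal come from the skill; the other keys are assumed.
const moderationConfig = {
  platform: 'telegram',
  deleteOriginal: false,               // keep deletion off until thresholds are tuned
  allowedChannels: ['moderated-chat'], // scope the bot to specific channels
  permissions: [                       // request only what moderation needs
    'read_messages',
    'delete_messages',
    // deliberately no ban/kick permission
  ],
};
```

Whatever the real option names are, the principle is the same: enumerate channels and permissions explicitly rather than granting the bot account workspace-wide authority.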

Finding 4: Retention of user violation data

What this means

Violation logs could expose user IDs, message-derived content, or false-positive moderation history if stored too broadly or retained too long.

Why it was flagged

The example tracks user identifiers and profane words for repeat-offender handling. This can create persistent moderation records containing sensitive user behavior data.

Skill content
await trackViolation(message.userId, result.profaneWords);
Recommendation

Define retention, access controls, and appeal/review procedures for moderation logs; avoid storing more message content than needed.
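The log-hygiene advice above can be sketched as a retention-bounded tracker that stores word counts rather than the profane words themselves. The 30-day window, the in-memory Map, and the function shape are assumptions for illustration, not the plugin's actual `trackViolation` implementation.

```javascript
// Sketch of retention-bounded violation logging. Assumptions: 30-day
// window, in-memory storage, and storing counts instead of content.
const RETENTION_MS = 30 * 24 * 60 * 60 * 1000;
const violationLog = new Map(); // userId -> [{ at, wordCount }]

function trackViolation(userId, profaneWords) {
  const now = Date.now();
  const entries = violationLog.get(userId) ?? [];
  // Drop entries older than the retention window before appending.
  const kept = entries.filter(e => now - e.at < RETENTION_MS);
  kept.push({ at: now, wordCount: profaneWords.length }); // counts, not words
  violationLog.set(userId, kept);
  return kept.length; // current strike count within the window
}
```

Storing only a timestamp and a count still supports repeat-offender thresholds while keeping message-derived content out of the log entirely.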