Glin Profanity

v1.0.0

Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.

Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name, description, and usage examples all describe a profanity-detection library. The SKILL.md does not request unrelated credentials, binaries, or system access — everything requested is consistent with a content-moderation library.
Instruction Scope
The instructions are example code snippets (JS/Python/React) and do not instruct the agent to read local files, environment secrets, or exfiltrate data. However, the ML-related examples (TensorFlow.js toxicity model) imply that runtime behavior may include downloading ML models or loading third-party model files; that could cause network activity and pull in additional dependencies not visible in the SKILL.md.
Install Mechanism
There is no install spec in the skill bundle (instruction-only). The README suggests installing via npm or pip (public registries), which is a common approach. Because the skill does not bundle code, you should verify the actual npm/PyPI package(s) and their maintainers before installing; third-party packages can include additional dependencies or postinstall steps.
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportional for a library that operates on text and does not require external service authentication.
Persistence & Privilege
The skill is not always-enabled, does not request persistent agent privileges, and contains no install-time code in the bundle that would modify agent configuration. Autonomy flags are default and appropriate for a user-invocable skill.
Assessment
This skill is an instruction-only description of a profanity-detection library and appears internally consistent, but before installing or using it you should: 1) verify the npm and PyPI package names and the GitHub repository owners (confirm the code matches the documentation and is from a trusted maintainer); 2) inspect package dependencies and any postinstall scripts or native extensions; 3) check whether the ML functionality downloads models at runtime or contacts external endpoints (this can raise privacy and bandwidth concerns); 4) review license and GDPR/privacy implications for sending user content to models; and 5) test the package in an isolated environment (sandbox/container) before deploying to production. If you cannot locate a legitimate package/repo matching these docs, treat it as untrusted and do not install.


Tags: content-filter · latest · moderation · profanity · python · typescript
1.8k downloads · 1 star · 1 version · Updated 1mo ago
v1.0.0
MIT-0

Glin Profanity - Content Moderation Library

Profanity detection library that catches evasion attempts like leetspeak (f4ck, sh1t), Unicode tricks (Cyrillic lookalikes), and obfuscated text.
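To illustrate how leetspeak evasion can be caught, here is a minimal, self-contained sketch of the general technique: fold common digit/symbol substitutions back to letters before a word-list lookup. This is not the library's implementation; the `blocklist` parameter is a stand-in for its bundled dictionaries.

```python
# Minimal leetspeak normalizer: maps common digit/symbol
# substitutions back to their letter equivalents.
LEET_MAP = str.maketrans({
    "4": "a", "@": "a",
    "3": "e",
    "1": "i", "!": "i",
    "0": "o",
    "5": "s", "$": "s",
    "7": "t",
})

def normalize_leet(text: str) -> str:
    """Lowercase the text and undo common leetspeak substitutions."""
    return text.lower().translate(LEET_MAP)

def is_profane_leet(text: str, blocklist: set[str]) -> bool:
    """Check each whitespace-separated token after normalization."""
    return any(tok in blocklist for tok in normalize_leet(text).split())
```

With this mapping, `sh1t` normalizes to `shit` and `@$$` to `ass`, so both hit the blocklist even though the raw strings do not. Substitutions that depend on position in a specific word (like `4` standing in for `u` in `f4ck`) need per-word patterns rather than a flat character map, which is presumably why the library exposes a `leetspeakLevel` option.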

Installation

# JavaScript/TypeScript
npm install glin-profanity

# Python
pip install glin-profanity

Quick Usage

JavaScript/TypeScript

import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("Your text here", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

result.containsProfanity  // boolean
result.profaneWords       // array of detected words
result.processedText      // censored version

// With Filter instance
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true,
  normalizeUnicode: true
});

filter.isProfane("text")           // boolean
filter.checkProfanity("text")      // full result object

Python

from glin_profanity import Filter

filter = Filter({
    "languages": ["english"],
    "replace_with": "***",
    "detect_leetspeak": True
})

filter.is_profane("text")           # True/False
filter.check_profanity("text")      # Full result dict

React Hook

import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({
    detectLeetspeak: true
  });

  return (
    <>
      <input onChange={(e) => checkText(e.target.value)} />
      {result?.containsProfanity && <p>Please keep it civil.</p>}
    </>
  );
}

Key Features

| Feature | Description |
| --- | --- |
| Leetspeak detection | f4ck, sh1t, @$$ patterns |
| Unicode normalization | Cyrillic fսck → fuck |
| 24 languages | Including Arabic, Chinese, Russian, Hindi |
| Context whitelists | Medical, gaming, technical domains |
| ML integration | Optional TensorFlow.js toxicity detection |
| Result caching | LRU cache for performance |
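Unicode homoglyph folding can be sketched in a few lines: apply NFKC normalization (which folds compatibility forms such as fullwidth letters), then map lookalike characters to their Latin equivalents. The `CONFUSABLES` table below is a tiny hand-picked subset for illustration; the full Unicode confusables data (TR39) is far larger, and the library's actual mapping is not shown here.

```python
import unicodedata

# A small subset of lookalike characters: Cyrillic letters that
# render like Latin ones. Real filters use the full Unicode
# confusables table (TR39), which covers many more scripts.
CONFUSABLES = str.maketrans({
    "а": "a", "е": "e", "о": "o", "с": "c",
    "р": "p", "х": "x", "у": "y",
})

def normalize_homoglyphs(text: str) -> str:
    """Fold compatibility forms via NFKC, then map Cyrillic lookalikes."""
    return unicodedata.normalize("NFKC", text).translate(CONFUSABLES)
```

After this pass, a word spelled with a Cyrillic с or а compares equal to its all-Latin form, so a single blocklist lookup catches both spellings.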

Configuration Options

const filter = new Filter({
  languages: ['english', 'spanish'],     // Languages to check
  detectLeetspeak: true,                 // Catch f4ck, sh1t
  leetspeakLevel: 'moderate',            // basic | moderate | aggressive
  normalizeUnicode: true,                // Catch Unicode tricks
  replaceWith: '*',                      // Replacement character
  preserveFirstLetter: false,            // f*** vs ****
  customWords: ['badword'],              // Add custom words
  ignoreWords: ['hell'],                 // Whitelist words
  cacheSize: 1000                        // LRU cache entries
});
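The `cacheSize` option above describes LRU caching of check results. The general pattern can be sketched with Python's `functools.lru_cache`, where `maxsize` plays the same role; the blocklist and check logic here are stand-ins, not the library's internals.

```python
from functools import lru_cache

# Stand-in word list; the real library ships per-language dictionaries.
BLOCKLIST = frozenset({"badword"})

@lru_cache(maxsize=1000)  # analogous to cacheSize: 1000
def is_profane_cached(text: str) -> bool:
    """Naive token check; repeated identical inputs hit the cache."""
    return any(tok in BLOCKLIST for tok in text.lower().split())
```

Checking the same string twice runs the detection once; the second call is served from the cache, which matters when the same messages are re-validated (e.g. on every render or retry).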

Context-Aware Analysis

import { analyzeContext } from 'glin-profanity';

const result = analyzeContext("The patient has a breast tumor", {
  domain: 'medical',        // medical | gaming | technical | educational
  contextWindow: 3,         // Words around match to consider
  confidenceThreshold: 0.7  // Minimum confidence to flag
});

Batch Processing

import { batchCheck } from 'glin-profanity';

const results = batchCheck([
  "Comment 1",
  "Comment 2",
  "Comment 3"
], { returnOnlyFlagged: true });
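The shape of `batchCheck` with `returnOnlyFlagged` can be mirrored in a short self-contained sketch. The result fields (`text`, `flagged`) and the naive token check are illustrative assumptions, not the library's actual return type.

```python
def batch_check(texts, blocklist, return_only_flagged=False):
    """Check a list of texts; optionally return only the flagged ones."""
    results = [
        {"text": t, "flagged": any(w in blocklist for w in t.lower().split())}
        for t in texts
    ]
    if return_only_flagged:
        return [r for r in results if r["flagged"]]
    return results
```

Filtering to flagged items on the library side keeps moderation queues small when most content is clean.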

ML-Powered Detection (Optional)

import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

await loadToxicityModel({ threshold: 0.9 });

const result = await checkToxicity("You're the worst");
// { toxic: true, categories: { toxicity: 0.92, insult: 0.87 } }
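The `threshold` option amounts to post-processing per-category model scores: only categories at or above the threshold are flagged. A language-agnostic sketch of that step, with a hypothetical `scores` dict standing in for the model output (this is not the library's API):

```python
def flag_toxic(scores, threshold=0.9):
    """Keep only categories whose score meets the threshold.

    `scores` is a hypothetical {category: probability} mapping,
    standing in for per-category model outputs.
    """
    flagged = {k: v for k, v in scores.items() if v >= threshold}
    return {"toxic": bool(flagged), "categories": flagged}
```

With a 0.9 threshold, a 0.92 toxicity score is flagged while a 0.87 insult score is not, which matches the intuition that a higher threshold trades recall for fewer false positives.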

Common Patterns

Chat/Comment Moderation

const filter = new Filter({
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

bot.on('message', (msg) => {
  if (filter.isProfane(msg.text)) {
    deleteMessage(msg);
    warnUser(msg.author);
  }
});

Content Validation Before Publish

const result = filter.checkProfanity(userContent);

if (result.containsProfanity) {
  return {
    valid: false,
    issues: result.profaneWords,
    suggestion: result.processedText  // Censored version
  };
}
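The `suggestion`/`processedText` idea, including the `preserveFirstLetter` behavior (`f***` vs `****`), can be sketched with a simple token-level censor. The blocklist and options here are stand-ins, assuming whitespace-separated tokens; the library's tokenizer is certainly more nuanced.

```python
def censor(text, blocklist, replace_with="*", preserve_first=False):
    """Replace blocked tokens with a fill character, optionally
    keeping the first letter (f*** vs ****)."""
    out = []
    for tok in text.split():
        if tok.lower() in blocklist:
            if preserve_first:
                out.append(tok[0] + replace_with * (len(tok) - 1))
            else:
                out.append(replace_with * len(tok))
        else:
            out.append(tok)
    return " ".join(out)
```

The censored string is what you would surface to the user as a suggested edit, while the flagged-word list drives the validation error itself.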
