Glin Profanity
v1.0.0

Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.
Security Scan
OpenClaw
Benign (medium confidence)

Purpose & Capability
The name, description, and usage examples all describe a profanity-detection library. The SKILL.md does not request unrelated credentials, binaries, or system access — everything requested is consistent with a content-moderation library.
Instruction Scope
The instructions are example code snippets (JS/Python/React) and do not instruct the agent to read local files, environment secrets, or exfiltrate data. However, the ML-related examples (the TensorFlow.js toxicity model) imply that runtime behavior may include downloading ML models or loading third-party model files; that could cause network activity and pull in additional dependencies not visible in the SKILL.md.
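The leetspeak and homoglyph handling the listing advertises can be approximated with a normalization pass before matching. This is an illustrative sketch of the technique, not the library's actual API; `LEET_MAP`, `normalize`, and `contains_blocked` are hypothetical names:

```python
import unicodedata

# Hypothetical leetspeak substitution table (illustrative, not the library's).
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalize(text: str) -> str:
    # NFKC folds many Unicode homoglyphs (e.g. fullwidth letters) to ASCII,
    # then lowercasing and the leet table canonicalize the rest.
    folded = unicodedata.normalize("NFKC", text).lower()
    return folded.translate(LEET_MAP)

def contains_blocked(text: str, blocklist: set[str]) -> bool:
    # Match against a blocklist after normalization, word by word.
    return any(w in blocklist for w in normalize(text).split())
```

A real filter would add fuzzy matching and per-language lists, but the normalize-then-match shape is the core of the capability being scanned here.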
Install Mechanism
There is no install spec in the skill bundle (instruction-only). The README suggests installing via npm or pip (public registries), which is a common approach. Because the skill does not bundle code, you should verify the actual npm/PyPI package(s) and their maintainers before installing; third-party packages can include additional dependencies or postinstall steps.
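A minimal sketch of that pre-install verification against the public PyPI JSON API, assuming the package name from the docs is `glin-profanity` (treat it as unverified until a human reviews the metadata):

```python
import json
from urllib.request import urlopen

# Public PyPI JSON endpoint (documented API, no auth required).
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def summarize_pypi_metadata(meta: dict) -> dict:
    """Pull out the fields worth eyeballing before `pip install`."""
    info = meta.get("info", {})
    return {
        "name": info.get("name"),
        "author": info.get("author") or info.get("maintainer"),
        "homepage": (info.get("project_urls") or {}).get("Homepage"),
        "requires": info.get("requires_dist") or [],
    }

def fetch_and_summarize(name: str) -> dict:
    # Network call; run this in an environment where outbound HTTPS is acceptable.
    with urlopen(PYPI_URL.format(name=name)) as resp:
        return summarize_pypi_metadata(json.load(resp))
```

Comparing the `homepage` and `author` fields against the GitHub repository named in the docs is the quickest way to catch a typosquatted or impostor package.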
Credentials
The skill declares no required environment variables, credentials, or config paths. That is proportional for a library that operates on text and does not require external service authentication.
Persistence & Privilege
The skill is not always-enabled, does not request persistent agent privileges, and contains no install-time code in the bundle that would modify agent configuration. Autonomy flags are default and appropriate for a user-invocable skill.
Assessment
This skill is an instruction-only description of a profanity-detection library and appears internally consistent, but before installing or using it you should:

1) verify the npm and PyPI package names and the GitHub repository owners (confirm the code matches the documentation and is from a trusted maintainer);
2) inspect package dependencies and any postinstall scripts or native extensions;
3) check whether the ML functionality downloads models at runtime or contacts external endpoints (this can raise privacy and bandwidth concerns);
4) review license and GDPR/privacy implications of sending user content to models;
5) test the package in an isolated environment (sandbox/container) before deploying to production.

If you cannot locate a legitimate package/repo matching these docs, treat it as untrusted and do not install.

Like a lobster shell, security has layers — review code before you run it.
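For point 2 of the checklist above, the install-time hooks in an npm package are declared in its package.json; this sketch flags them given a parsed manifest (the hook names follow npm's documented lifecycle scripts; the function name is ours):

```python
# npm lifecycle scripts that execute code during `npm install`.
INSTALL_HOOKS = ("preinstall", "install", "postinstall", "prepare")

def install_time_scripts(package_json: dict) -> dict:
    """Return any scripts from a parsed package.json that run at install time."""
    scripts = package_json.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}
```

An empty result does not prove the package is safe (native addons and transitive dependencies have their own hooks), but a non-empty one tells you exactly which commands to read before installing.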
Tags: content-filter · latest · moderation · profanity · python · typescript
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
