Csam Shield
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
This safety-themed skill claims to automatically report users to authorities, ban accounts, and retain evidence indefinitely, but its authority, data handling, and implementation are not adequately bounded or declared.
Do not deploy this as-is for real moderation or reporting. Before installing or using it, require legal/compliance review, verify the npm package's source code, declare and scope all credentials, add human review for high-impact actions, define retention and deletion rules, and test only with lawful, safe test data.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A false positive could block content, suspend a user, affect related accounts, and trigger official reports.
The skill instructs multiple high-impact actions to occur on detection, including account suspension, IP blocking, evidence preservation, and external reporting, without a clearly bounded approval or review workflow.
"onDetection": ["block_content", "suspend_user", "preserve_evidence", "report_ncmec", "alert_security_team", "block_ip", "flag_related_accounts"]
Require explicit human review and approval for enforcement and reporting actions, separate hash-match handling from AI-only predictions, and provide dry-run/audit modes.
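The approval workflow recommended above is not defined by the skill itself; a minimal sketch (all names hypothetical) might separate actions that can auto-run from those queued for a human reviewer, with dry-run as the default:

```javascript
// Illustrative sketch, not part of the skill: gate high-impact actions
// behind human approval; dry-run mode only produces a plan for the audit log.
const HIGH_IMPACT = new Set([
  "suspend_user", "report_ncmec", "block_ip", "flag_related_accounts",
]);

function planActions(detectedActions, { dryRun = true } = {}) {
  const autoRun = [];
  const needsApproval = [];
  for (const action of detectedActions) {
    if (HIGH_IMPACT.has(action)) {
      needsApproval.push(action); // queued for a human reviewer, never auto-run
    } else {
      autoRun.push(action); // lower-impact, reversible actions only
    }
  }
  return { dryRun, autoRun, needsApproval };
}

console.log(planActions(["block_content", "suspend_user", "report_ncmec"]));
```

In dry-run mode nothing executes; false positives then surface in an audit trail rather than as suspensions or official reports.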
Users may grant sensitive reporting credentials or moderation authority without clear scope, auditing, or least-privilege controls.
The skill expects sensitive reporting and encryption credentials, while the registry metadata declares no required environment variables, primary credential, or permission boundary.
ncmecApiKey: process.env.NCMEC_API_KEY, encryptionKey: process.env.EVIDENCE_ENCRYPTION_KEY
Declare all required credentials and permissions, use least-privileged service accounts, and document exactly which actions each credential can perform.
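One way to satisfy the declaration requirement (a sketch; the manifest shape and scope strings are assumptions, not part of the skill) is to list every required credential with its scope and fail fast when one is missing:

```javascript
// Hypothetical credential manifest: every secret the skill needs is declared
// up front with the narrowest scope it requires.
const REQUIRED_CREDENTIALS = [
  { env: "NCMEC_API_KEY", scope: "submit CyberTipline reports only" },
  { env: "EVIDENCE_ENCRYPTION_KEY", scope: "encrypt/decrypt evidence store only" },
];

// Startup check: report every missing credential instead of failing mid-action.
function checkCredentials(env = process.env) {
  const missing = REQUIRED_CREDENTIALS
    .filter(({ env: name }) => !env[name])
    .map(({ env: name }) => name);
  return { ok: missing.length === 0, missing };
}

console.log(checkCredentials({ NCMEC_API_KEY: "x" }));
// ok: false, missing: ["EVIDENCE_ENCRYPTION_KEY"]
```

An explicit manifest like this also gives reviewers a single place to audit what each credential is allowed to do.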
Sensitive user, content, and incident data could be sent to an external reporting service automatically.
The artifact describes automatic external reporting to NCMEC, but does not clearly define what data is transmitted, how reports are verified, or how false positives are contained.
"ncmec": { "endpoint": "https://report.cybertip.org/", "apiKey": "${NCMEC_API_KEY}", "automatic": true }
Document report contents and transmission controls, require review for report submission unless legally and technically verified, and log all external disclosures.
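A disclosure log of the kind recommended above could look like the following sketch (function and field names are hypothetical): reports go out only on a verified hash match or with a named human approver, and every submission records which fields were disclosed and by whose authority:

```javascript
// Illustrative disclosure log: one entry per outbound report, recording
// destination, field names (not raw content), approver, and timestamp.
const disclosureLog = [];

function submitReport(report, { approvedBy = null, verifiedHashMatch = false } = {}) {
  if (!verifiedHashMatch && !approvedBy) {
    return { submitted: false, reason: "requires human approval or verified hash match" };
  }
  disclosureLog.push({
    destination: "NCMEC CyberTipline",
    fields: Object.keys(report), // what was disclosed, without duplicating content
    approvedBy,
    at: new Date().toISOString(),
  });
  return { submitted: true };
}

console.log(submitReport({ contentId: "c1" }));                            // blocked
console.log(submitReport({ contentId: "c1" }, { approvedBy: "analyst-7" })); // logged and sent
```

Logging field names rather than raw content keeps the audit trail itself from becoming another store of sensitive material.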
Highly sensitive content and user data may be retained long-term in ways the user cannot easily audit or reverse.
The skill proposes persistent evidence storage with indefinite retention, but does not provide clear limits, deletion rules, access controls, or handling procedures for highly sensitive material.
"preserveEvidence": true, "evidenceRetention": "indefinite"
Define legally reviewed retention periods, encryption and access controls, secure deletion procedures, and strict limits on what evidence is preserved.
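A bounded retention rule might look like this sketch; the 90-day window is an illustrative assumption, not legal guidance, and the real figure must come from counsel:

```javascript
// Illustrative retention bound replacing "indefinite": evidence older than
// the reviewed window is purged on a schedule.
const RETENTION_DAYS = 90; // assumption for illustration only

function isExpired(preservedAtMs, nowMs = Date.now()) {
  const ageDays = (nowMs - preservedAtMs) / 86_400_000; // ms per day
  return ageDays > RETENTION_DAYS;
}

function purgeExpired(evidence, nowMs = Date.now()) {
  return evidence.filter((item) => !isExpired(item.preservedAtMs, nowMs));
}

const now = Date.now();
const kept = purgeExpired(
  [{ id: "a", preservedAtMs: now - 100 * 86_400_000 },
   { id: "b", preservedAtMs: now - 10 * 86_400_000 }],
  now,
);
console.log(kept.map((e) => e.id)); // only "b" survives the 90-day bound
```

A scheduled purge like this makes retention auditable and reversible in a way that "evidenceRetention": "indefinite" is not.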
One erroneous or mis-scoped detection could produce irreversible consequences across accounts, stored data, and outside organizations.
A single detection can cascade into permanent account action, all-content preservation, external reporting, and authority notification without clear containment.
"onHashMatch": ["immediate_block", "auto_report_ncmec", "permanent_ban", "preserve_all_user_content", "notify_authorities"]
Limit blast radius, use staged escalation, require independent confirmation for severe actions, and make reversibility and appeal processes explicit.
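Staged escalation could be sketched as tiers keyed to evidence strength (tier contents are assumptions): AI-only predictions never trigger more than review queuing, and the severest actions require both a hash match and independent human confirmation:

```javascript
// Illustrative escalation ladder: response severity grows with evidence
// strength, and the top tier always requires independent human confirmation.
function escalationTier({ hashMatch, aiConfidence, humanConfirmed }) {
  if (hashMatch && humanConfirmed) {
    return { tier: 3, actions: ["report", "suspend"] }; // severe, confirmed
  }
  if (hashMatch) {
    return { tier: 2, actions: ["block_content", "queue_review"] }; // contain, then confirm
  }
  if (aiConfidence > 0.9) {
    return { tier: 1, actions: ["queue_review"] }; // AI-only: review, never enforce
  }
  return { tier: 0, actions: [] };
}

console.log(escalationTier({ hashMatch: true, aiConfidence: 0.99, humanConfirmed: false }));
```

The key property is that no single automated signal can reach the irreversible tier on its own.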
Installing the referenced package would run code that was not included in this review.
The reviewed artifact is instruction-only, while the documentation points to an external npm package for the actual implementation.
npm install @raghulpasupathi/csam-shield
Verify the npm package source, maintainers, integrity, and code behavior before installation, especially because the claimed functions are high-impact.
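One concrete pre-install check can be sketched in plain Node (a sketch, not a full supply-chain audit): read the project lockfile and confirm the package is pinned to an exact registry tarball with an integrity hash before any of its code runs:

```javascript
// Illustrative lockfile check against the npm v2/v3 lockfile "packages" map:
// confirm the dependency resolves to a pinned tarball with an integrity hash.
function lockfileEntry(lock, pkgName) {
  const entry = (lock.packages || {})[`node_modules/${pkgName}`];
  if (!entry) return { found: false };
  return {
    found: true,
    resolved: entry.resolved,   // exact tarball URL to fetch and inspect
    integrity: entry.integrity, // sha512 pin that npm verifies on install
  };
}

// Minimal lockfile fragment with placeholder values:
const lock = {
  packages: {
    "node_modules/@raghulpasupathi/csam-shield": {
      resolved: "https://registry.npmjs.org/placeholder.tgz",
      integrity: "sha512-placeholder",
    },
  },
};
console.log(lockfileEntry(lock, "@raghulpasupathi/csam-shield").found); // prints true
```

Fetching the resolved tarball and reading its code before installing, or installing with `npm install --ignore-scripts`, keeps unreviewed install hooks from executing during evaluation.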
