Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Csam Shield

v1.0.0

Detects, blocks, and reports Child Sexual Abuse Material using AI-driven image, video, and behavior analysis with automatic NCMEC reporting and evidence pres...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for raghulpasupathi/csam-shield.

Prompt preview: Install & Setup
Install the skill "Csam Shield" (raghulpasupathi/csam-shield) from ClawHub.
Skill page: https://clawhub.ai/raghulpasupathi/csam-shield
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install csam-shield

ClawHub CLI


npx clawhub@latest install csam-shield
Security Scan
VirusTotal: Suspicious. View report →
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The SKILL.md describes a full CSAM detection/reporting system (PhotoDNA/PDQ integration, NCMEC reports, evidence preservation). Registry metadata claims no required env vars or install, yet the instructions reference installing @raghulpasupathi/csam-shield via npm and using process.env.NCMEC_API_KEY and process.env.EVIDENCE_ENCRYPTION_KEY. PhotoDNA/PDQ integrations normally require formal agreements and credentials; those are not declared in the registry metadata. The gap between metadata that declares nothing and instructions that require credentials and provider agreements is a significant inconsistency.

Instruction Scope
SKILL.md instructs the agent to analyze arbitrary file paths, check hashes, gather user_info/ip/device_info, preserve evidence to /secure/evidence/ and /secure/csam-hashes/, and automatically report to https://report.cybertip.org/. These operations read, store, and transmit highly sensitive PII and content, yet the skill provides no clear human-review gating, access controls, or declared required credentials. The instructions grant broad discretion (automatic reporting, indefinite retention), which increases risk.

Install Mechanism
The registry contains no install spec or code files, but SKILL.md tells the user to run npm install @raghulpasupathi/csam-shield and points to a ClawHub URL. Asking to install an external npm package to handle CSAM is high-risk unless the package source, ownership, and contents are vetted. No packaged code is included here for review and there is no verification (checksums, release host), so following the instruction would run unreviewed third-party code on your system.

Credentials
Although registry metadata lists no required env vars, the instructions and sample code explicitly reference NCMEC_API_KEY and EVIDENCE_ENCRYPTION_KEY and a configuration placeholder ${NCMEC_API_KEY}. The skill also expects access to photo/video content and to collect PII (IP, device info, user info). Requesting and using such credentials and data is proportionate to the stated purpose only if declared up front and legally authorized; here those credentials are not declared or justified in metadata, which is a mismatch and a red flag.

Persistence & Privilege
The skill calls for preserving evidence indefinitely, encrypting stored material, and writing to fixed secure paths. While retention is relevant to the purpose, the SKILL.md provides no guidance on access controls, where keys are stored, retention policy governance, or who reviews reports before automatic submission. The skill does not request 'always: true', which is good, but indefinite on-disk retention of sensitive material without controls is problematic.

What to consider before installing

Do not install or run this skill without further vetting. Specific steps to take before proceeding:

  1. Ask the publisher for the exact npm package source (repository URL), license, and published package checksum so you can review the code and verify integrity (a minimal registry-lookup sketch follows below).
  2. Require the registry metadata to declare the exact environment variables and secrets (NCMEC_API_KEY, EVIDENCE_ENCRYPTION_KEY, any database/provider credentials) before installation.
  3. Have legal/compliance review the automatic-reporting behavior (NCMEC reporting usually requires authorized processes and human review).
  4. Only test in an isolated environment that cannot access real user data; do not point it at production uploads until code and procedures are audited.
  5. Insist on a human-in-the-loop for any automatic reporting, strict access controls for evidence storage, documented retention/audit policies, and proof of legitimate PhotoDNA/PDQ/NCMEC integrations (contracts or API agreements).
  6. If you cannot obtain the package source and credential requirements, treat the skill as untrusted: installing an unvetted npm package that will handle CSAM content risks accidental exposure or malicious exfiltration of extremely sensitive data.
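
For step 1, the published tarball and its integrity hash can be looked up from the public npm registry without executing anything. The sketch below is illustrative only and assumes Node 18 or later (for the global fetch); the fields it reads (dist-tags, dist.tarball, dist.integrity, repository) are standard npm registry metadata, not part of this skill.

// Illustrative pre-install check: print the published tarball URL and
// integrity hash so the code can be downloaded and reviewed offline
// before anything is executed. Requires Node 18+ for global fetch.
const PKG = '@raghulpasupathi/csam-shield';

async function inspectPackage(name) {
  const res = await fetch(`https://registry.npmjs.org/${name.replace('/', '%2f')}`);
  if (!res.ok) throw new Error(`registry lookup failed: HTTP ${res.status}`);
  const meta = await res.json();
  const latest = meta['dist-tags'] && meta['dist-tags'].latest;
  const version = latest && meta.versions[latest];
  if (!version) throw new Error('no published versions found');
  console.log('version:   ', latest);
  console.log('tarball:   ', version.dist.tarball);
  console.log('integrity: ', version.dist.integrity || version.dist.shasum);
  console.log('repository:', (version.repository && version.repository.url) || 'NOT DECLARED');
}

inspectPackage(PKG).catch((err) => console.error(err.message));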

Like a lobster shell, security has layers — review code before you run it.

Latest: vk978r7a980wx852vc6m6ana2px81ker3
514 downloads
0 stars
1 version
Updated 14h ago
v1.0.0
MIT-0

CSAM Shield

Metadata

  • ID: csam-shield
  • Version: 1.0.0
  • Category: safety
  • Priority: CRITICAL
  • Installation: npm
  • Package: @raghulpasupathi/csam-shield

Description

CRITICAL SAFETY SYSTEM for detecting and preventing Child Sexual Abuse Material (CSAM). Uses advanced computer vision, hash matching, age estimation, and behavior analysis to identify illegal content. Includes automatic NCMEC reporting, law enforcement coordination, and evidence preservation.

⚠️ CRITICAL WARNING

This skill handles the most serious form of online abuse. Implementation requires:

  • Zero tolerance policy - immediate action on detection
  • Legal compliance - mandatory reporting to NCMEC/law enforcement
  • Evidence preservation - secure logging for legal proceedings
  • Staff protection - mental health support for reviewers
  • Privacy protection - strict handling of detected content

Features

  • Hash Database Matching: PhotoDNA, PDQ, perceptual hashing against known CSAM
  • Age Estimation: AI-powered age detection with high accuracy
  • Anatomical Detection: Identify inappropriate imagery of minors
  • Context Analysis: Distinguish legitimate from exploitative content
  • Behavioral Analysis: Detect grooming patterns and predatory behavior
  • NCMEC Integration: Automatic CyberTipline reporting
  • Evidence Preservation: Secure storage for law enforcement
  • User Flagging: Immediate account suspension and investigation
  • Network Analysis: Identify distribution rings and patterns
  • Real-time Blocking: Prevent upload/distribution instantly

Installation

Via ClawHub

https://clawhub.ai/raghulpasupathi/csam-shield

Via npm

npm install @raghulpasupathi/csam-shield

Configuration

{
  "enabled": true,
  "settings": {
    "mode": "maximum-protection",
    "zeroTolerance": true,
    "thresholds": {
      "ageEstimation": {
        "childThreshold": 13,
        "teenThreshold": 18,
        "confidence": 0.85
      },
      "hashMatch": {
        "exactMatch": 0.95,
        "nearMatch": 0.85
      },
      "anatomicalDetection": {
        "sensitivity": "maximum",
        "blockThreshold": 0.70
      },
      "contextAnalysis": {
        "enabled": true,
        "legitimateExceptions": ["family", "medical", "educational"]
      }
    },
    "databases": {
      "photoDNA": {
        "enabled": true,
        "provider": "microsoft",
        "updateFrequency": "hourly"
      },
      "pdqHash": {
        "enabled": true,
        "provider": "facebook",
        "updateFrequency": "hourly"
      },
      "ncmec": {
        "enabled": true,
        "hashList": true,
        "updateFrequency": "hourly"
      },
      "custom": {
        "enabled": true,
        "path": "/secure/csam-hashes/"
      }
    },
    "detection": {
      "imageAnalysis": true,
      "videoAnalysis": true,
      "textAnalysis": true,
      "metadataAnalysis": true,
      "networkAnalysis": true,
      "behaviorAnalysis": true
    },
    "reporting": {
      "ncmec": {
        "enabled": true,
        "endpoint": "https://report.cybertip.org/",
        "apiKey": "${NCMEC_API_KEY}",
        "automatic": true
      },
      "lawEnforcement": {
        "enabled": true,
        "contacts": ["fbi_tips", "local_police"],
        "automatic": false,
        "requiresReview": true
      },
      "preserveEvidence": true,
      "evidenceRetention": "indefinite",
      "encryptEvidence": true
    },
    "actions": {
      "onDetection": [
        "block_content",
        "suspend_user",
        "preserve_evidence",
        "report_ncmec",
        "alert_security_team",
        "block_ip",
        "flag_related_accounts"
      ],
      "onHashMatch": [
        "immediate_block",
        "auto_report_ncmec",
        "permanent_ban",
        "preserve_all_user_content",
        "notify_authorities"
      ]
    },
    "security": {
      "accessControl": "restricted",
      "auditLogging": "complete",
      "encryption": "aes-256",
      "staffProtection": true,
      "limitedExposure": true
    }
  }
}
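
The configuration above resolves secrets through placeholders such as ${NCMEC_API_KEY} even though the registry metadata declares no required environment variables. A hypothetical startup preflight (not part of the published package) would at least fail fast when those secrets are missing and keep automatic submission disabled until the integration has been reviewed:

// Hypothetical preflight, not part of the published skill: verify the secrets
// the sample configuration references and fail closed if they are missing.
const fs = require('fs');

function loadConfigSafely(configPath) {
  const required = ['NCMEC_API_KEY', 'EVIDENCE_ENCRYPTION_KEY'];
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Refusing to start: missing secrets ${missing.join(', ')}`);
  }
  const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
  // Keep automatic submission off until the reporting path has human review.
  if (config.settings && config.settings.reporting && config.settings.reporting.ncmec) {
    config.settings.reporting.ncmec.automatic = false;
  }
  return config;
}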

API / Methods

const CSAMShield = require('@raghulpasupathi/csam-shield');

// Initialize with strict security
const shield = new CSAMShield({
  mode: 'maximum-protection',
  ncmecApiKey: process.env.NCMEC_API_KEY,
  encryptionKey: process.env.EVIDENCE_ENCRYPTION_KEY
});

// ⚠️ CRITICAL: Analyze content (use with extreme caution)
const result = await shield.analyze('/path/to/content.jpg');
console.log(result);
/* Output:
{
  threat: 'CRITICAL',
  action: 'IMMEDIATE_BLOCK',
  detectionType: 'hash_match',
  confidence: 0.98,
  details: {
    hashMatch: {
      matched: true,
      database: 'photoDNA',
      matchConfidence: 0.99
    },
    ageEstimation: {
      estimatedAge: 10,
      confidence: 0.94,
      isMinor: true
    },
    anatomicalDetection: {
      inappropriate: true,
      severity: 'extreme'
    },
    context: {
      isLegitimate: false,
      category: 'exploitative'
    }
  },
  actions: {
    contentBlocked: true,
    userSuspended: true,
    evidencePreserved: true,
    ncmecReported: true,
    reportId: 'NCMEC-2026-xxxxx',
    authoritiesNotified: true
  },
  evidence: {
    caseId: 'CASE-2026-xxxxx',
    preservedData: [
      'content_hash',
      'user_info',
      'upload_metadata',
      'ip_address',
      'device_info'
    ],
    encryptedStorage: '/secure/evidence/CASE-2026-xxxxx/'
  },
  timestamp: '2026-02-20T10:30:00Z'
}
*/

// Check hash against known CSAM databases
const hashCheck = await shield.checkHash(contentHash);
console.log(hashCheck);
/* Output:
{
  isKnownCSAM: true,
  matchedDatabases: ['photoDNA', 'pdqHash', 'ncmec'],
  matchConfidence: 0.99,
  action: 'IMMEDIATE_BLOCK',
  reportRequired: true
}
*/

// Estimate age in image
const ageEstimation = await shield.estimateAge('/path/to/image.jpg');
console.log(ageEstimation);
/* Output:
{
  estimatedAge: 12,
  confidence: 0.91,
  ageRange: [10, 14],
  isMinor: true,
  certaintyLevel: 'high'
}
*/

// Analyze user behavior for grooming patterns
const behaviorAnalysis = await shield.analyzeBehavior(userId, {
  messages: userMessages,
  interactions: userInteractions,
  timeline: activityTimeline
});
console.log(behaviorAnalysis);
/* Output:
{
  isGrooming: true,
  confidence: 0.87,
  patterns: [
    'age_inquiries',
    'isolation_attempts',
    'gift_offering',
    'secrecy_requests',
    'progressive_boundary_crossing'
  ],
  riskLevel: 'extreme',
  recommendedAction: 'immediate_investigation'
}
*/

// Report to NCMEC CyberTipline
const ncmecReport = await shield.reportToNCMEC({
  content: contentDetails,
  user: userDetails,
  evidence: preservedEvidence
});
console.log(ncmecReport);
/* Output:
{
  success: true,
  reportId: 'NCMEC-2026-xxxxx',
  timestamp: '2026-02-20T10:30:00Z',
  status: 'submitted',
  followUp: 'pending_review'
}
*/

// Preserve evidence for legal proceedings
const evidence = await shield.preserveEvidence({
  contentId: 'content-123',
  userId: 'user-456',
  includeMetadata: true,
  includeRelatedContent: true,
  includeUserHistory: true
});

// Suspend user and related accounts
await shield.suspendUser(userId, {
  reason: 'CSAM_DETECTION',
  permanent: true,
  blockRelatedAccounts: true,
  preserveEvidence: true
});

// Network analysis to find related accounts
const network = await shield.analyzeNetwork(userId);
console.log(network);
/* Output:
{
  suspiciousAccounts: [
    { userId: 'user-789', riskScore: 0.92, connection: 'frequent_messages' },
    { userId: 'user-012', riskScore: 0.85, connection: 'content_sharing' }
  ],
  distributionRing: {
    detected: true,
    size: 7,
    accounts: [...]
  },
  recommendedActions: [
    'investigate_all_accounts',
    'preserve_all_evidence',
    'notify_authorities'
  ]
}
*/

// Secure hash generation (for reporting only)
const secureHash = await shield.generateSecureHash('/path/to/content.jpg');

// Update hash databases
await shield.updateHashDatabases();

// Event listeners (CRITICAL - requires immediate response)
shield.on('csam_detected', async (detection) => {
  console.error('🚨 CRITICAL: CSAM DETECTED');

  // Immediate actions
  await shield.blockContent(detection.contentId);
  await shield.suspendUser(detection.userId);
  await shield.preserveEvidence(detection);
  await shield.reportToNCMEC(detection);
  await shield.notifySecurityTeam(detection);
  await shield.alertAuthorities(detection);
});

shield.on('hash_match', async (match) => {
  console.error('🚨 CRITICAL: Known CSAM hash matched');

  // Automatic immediate actions
  await shield.executeEmergencyProtocol(match);
});

shield.on('grooming_detected', async (behavior) => {
  console.warn('⚠️ WARNING: Potential grooming behavior detected');

  // Investigation and monitoring
  await shield.flagForInvestigation(behavior.userId);
  await shield.enhanceMonitoring(behavior.userId);
});

// Secure audit logging
const auditLog = await shield.getAuditLog({
  type: 'csam_detection',
  timeRange: 'last_30_days',
  includeReports: true
});

// Staff protection - limited exposure mode
shield.enableStaffProtection({
  blurContent: true,
  limitedDetails: true,
  rotationSchedule: true,
  mentalHealthSupport: true
});

// Compliance reporting
const complianceReport = await shield.generateComplianceReport({
  period: 'monthly',
  includeStatistics: true,
  includeActions: true,
  format: 'legal'
});

Dependencies

  • @microsoft/photodna: ^2.0.0 - PhotoDNA hashing
  • pdq-hash: ^1.0.0 - Facebook PDQ hashing
  • @tensorflow/tfjs-node-gpu: ^4.0.0 - Age estimation models
  • opencv4nodejs: ^6.0.0 - Image analysis
  • ncmec-reporter: ^1.0.0 - NCMEC CyberTipline integration
  • crypto: Built-in - Evidence encryption
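
The crypto dependency and the aes-256 setting in the configuration imply that preserved evidence is encrypted at rest. The package's actual storage format is not documented, so the following is only a sketch of the general practice using Node's built-in crypto module with AES-256-GCM; the function names and key handling are illustrative.

// Illustrative evidence encryption with Node's built-in crypto (AES-256-GCM).
// Assumes EVIDENCE_ENCRYPTION_KEY is a 64-character hex string (32 bytes).
const crypto = require('crypto');

function encryptEvidence(plaintext, hexKey) {
  const key = Buffer.from(hexKey, 'hex');
  const iv = crypto.randomBytes(12); // standard GCM nonce size
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

function decryptEvidence({ iv, ciphertext, authTag }, hexKey) {
  const key = Buffer.from(hexKey, 'hex');
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(authTag); // tampered evidence fails authentication
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}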

Performance

  • Hash Matching: <10ms (database lookup)
  • Age Estimation: 100-200ms per image
  • Full Analysis: 200-500ms per image
  • Video Analysis: Real-time frame scanning
  • Accuracy:
    • Hash matching: 99.9% (known CSAM)
    • Age estimation: 92% accuracy (±2 years)
    • Context analysis: 89% accuracy
    • False positive rate: <0.01% (strict to prevent abuse)

Legal Requirements

  1. Mandatory Reporting: Report all detected CSAM to NCMEC (18 USC § 2258A)
  2. Evidence Preservation: Retain evidence for law enforcement (90+ days minimum)
  3. No Distribution: Never distribute detected CSAM, even internally
  4. User Notification: Do NOT notify user of detection (obstruction warning)
  5. Law Enforcement Cooperation: Full cooperation with investigations
  6. International Compliance: Comply with local laws (IWF, INHOPE, etc.)

Use Cases

  • Social media platforms
  • Messaging applications
  • File sharing services
  • Cloud storage providers
  • Dating applications
  • Gaming platforms with UGC
  • Forum and community sites
  • Any platform allowing user uploads

Troubleshooting

False Positives

Problem: Legitimate content flagged as CSAM

Solution:

  • Review context analysis results
  • Check for family/medical/educational context
  • Manual review by trained staff ONLY
  • Document false positive for model improvement
  • NEVER automatically ignore - always review
  • Consider legitimate use cases in detection logic
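
One way to satisfy the "document false positive" and "manual review" points above is an append-only audit record that stores hashes and decision metadata but never the content itself. The sketch below is illustrative; the audit path and record fields are assumptions, not part of the published skill.

// Illustrative only: record a reviewed false positive as metadata, not content.
// The audit path is hypothetical and must already exist with restricted access.
const fs = require('fs');

function recordFalsePositive({ contentHash, reviewerId, category, notes }) {
  const entry = {
    contentHash,                  // hash only, never the underlying media
    reviewerId,                   // trained staff member who made the call
    category,                     // e.g. 'family', 'medical', 'educational'
    notes,
    decidedAt: new Date().toISOString()
  };
  fs.appendFileSync('/secure/audit/false-positives.jsonl', JSON.stringify(entry) + '\n');
  return entry;
}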

Missing Known CSAM

Problem: Hash databases not catching known content

Solution:

  • Verify database updates are running hourly
  • Check all hash databases enabled
  • Ensure proper API keys configured
  • Test hash generation process
  • Verify network connectivity to update servers
  • Contact database providers for troubleshooting
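
A minimal sketch of the hourly refresh this checklist assumes, using the updateHashDatabases() method shown in the API section (the shield instance is assumed to be initialized as above). Logging failures makes a stalled update job visible rather than silent.

// Refresh hash databases hourly and surface failures; a stale database
// means known content goes undetected.
setInterval(async () => {
  try {
    await shield.updateHashDatabases();
    console.log(`[${new Date().toISOString()}] hash databases refreshed`);
  } catch (err) {
    console.error('hash database update failed:', err);
    // Alert the security team here rather than letting the job fail silently.
  }
}, 60 * 60 * 1000);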

NCMEC Reporting Failures

Problem: Reports not submitting to NCMEC

Solution:

  • Verify API credentials
  • Check network connectivity
  • Queue reports for retry
  • Manual submission if automatic fails
  • Contact NCMEC technical support
  • Keep local evidence regardless of submission status
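
The "queue reports for retry" step could look like the in-memory sketch below, built on the reportToNCMEC() method documented earlier. A production deployment would persist the queue so pending reports survive a restart; this version only illustrates the fail-and-retry shape.

// Illustrative retry queue: failed submissions are queued and retried later.
const pendingReports = [];

async function submitOrQueue(report) {
  try {
    return await shield.reportToNCMEC(report);
  } catch (err) {
    console.error('NCMEC submission failed, queuing for retry:', err.message);
    pendingReports.push(report);
    return null;
  }
}

async function retryPending() {
  const batch = pendingReports.splice(0, pendingReports.length);
  for (const report of batch) {
    await submitOrQueue(report); // re-queues itself on another failure
  }
}

setInterval(retryPending, 15 * 60 * 1000); // retry every 15 minutes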

Age Estimation Inaccuracy

Problem: Age estimation giving unreliable results

Solution:

  • Use as one signal, not sole determinant
  • Combine with other detection methods
  • Lower confidence threshold for safety
  • Update age estimation models regularly
  • Consider edge cases (appearing older/younger)
  • When in doubt, err on side of caution
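
The "one signal, not sole determinant" advice can be expressed as a simple decision rule that only auto-blocks on corroborated evidence and routes age-only hits to human review. The rule below is illustrative, reuses the result shapes shown in the API section, and is not the package's actual logic.

// Illustrative decision rule combining signals; thresholds and labels are
// assumptions, not the skill's documented behavior.
function decideAction({ hashCheck, ageEstimation, analysis }) {
  if (hashCheck.isKnownCSAM) return 'IMMEDIATE_BLOCK';            // corroborated by database match
  if (analysis.threat === 'CRITICAL' && ageEstimation.isMinor) return 'BLOCK_AND_REVIEW';
  if (ageEstimation.isMinor) return 'HUMAN_REVIEW';               // age estimate alone is not conclusive
  return 'ALLOW';
}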

Evidence Storage Issues

Problem: Evidence not being preserved correctly

Solution:

  • Verify encryption keys configured
  • Check storage permissions and space
  • Test evidence retrieval process
  • Implement redundant storage
  • Regular backup verification
  • Consult legal team on retention requirements
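
A hypothetical storage preflight for this checklist: confirm the evidence directory exists, is writable, is not readable by other users, and that the encryption key is set. The paths and environment variable match the sample configuration; the check itself is not part of the published skill.

// Hypothetical evidence-storage preflight; fails loudly instead of silently
// preserving evidence to a misconfigured location.
const fs = require('fs');

function checkEvidenceStorage(dir = '/secure/evidence/') {
  fs.accessSync(dir, fs.constants.W_OK); // throws if missing or not writable
  const mode = fs.statSync(dir).mode & 0o777;
  if (mode & 0o007) {
    throw new Error(`${dir} is accessible to other users (mode ${mode.toString(8)})`);
  }
  if (!process.env.EVIDENCE_ENCRYPTION_KEY) {
    throw new Error('EVIDENCE_ENCRYPTION_KEY is not set; evidence cannot be encrypted');
  }
}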

Integration Example

// ⚠️ CRITICAL SYSTEM INTEGRATION
const express = require('express');
const multer = require('multer');
const fs = require('fs'); // needed for the temp-file cleanup below
const CSAMShield = require('@raghulpasupathi/csam-shield');

const app = express();
const upload = multer({ dest: '/secure/temp/' });
const shield = new CSAMShield({
  mode: 'maximum-protection',
  ncmecApiKey: process.env.NCMEC_API_KEY
});

// Critical: Pre-upload hash check
app.post('/api/upload', upload.single('file'), async (req, res) => {
  const tempPath = req.file.path;

  try {
    // Generate hash immediately
    const contentHash = await shield.generateSecureHash(tempPath);

    // Check against known CSAM databases FIRST
    const hashCheck = await shield.checkHash(contentHash);

    if (hashCheck.isKnownCSAM) {
      // CRITICAL: Known CSAM detected
      console.error('🚨 CRITICAL: Known CSAM hash matched');

      // Preserve evidence
      await shield.preserveEvidence({
        contentHash,
        userId: req.user.id,
        ip: req.ip,
        uploadAttempt: true,
        timestamp: new Date()
      });

      // Automatic NCMEC report
      await shield.reportToNCMEC({
        type: 'known_csam_upload',
        hash: contentHash,
        user: req.user,
        ip: req.ip
      });

      // Suspend user immediately
      await shield.suspendUser(req.user.id, {
        reason: 'CSAM_UPLOAD',
        permanent: true
      });

      // Delete file securely
      await shield.secureDelete(tempPath);

      // DO NOT reveal reason to user
      return res.status(400).json({
        success: false,
        error: 'Upload failed. Please contact support.'
      });
    }

    // Perform full analysis
    const analysis = await shield.analyze(tempPath);

    if (analysis.threat === 'CRITICAL') {
      // New CSAM detected
      console.error('🚨 CRITICAL: Potential CSAM detected');

      // Execute emergency protocol
      await shield.executeEmergencyProtocol({
        content: tempPath,
        user: req.user,
        analysis: analysis
      });

      // DO NOT reveal reason to user
      return res.status(400).json({
        success: false,
        error: 'Upload failed. Please contact support.'
      });
    }

    // Content passed all checks
    const url = await uploadToStorage(tempPath);

    res.json({
      success: true,
      url: url
    });
  } catch (error) {
    console.error('CSAM Shield error:', error);

    // Fail closed - reject upload
    res.status(500).json({
      success: false,
      error: 'Upload failed. Please try again.'
    });
  } finally {
    // Always clean up temp file
    if (fs.existsSync(tempPath)) {
      await shield.secureDelete(tempPath);
    }
  }
});

// Background monitoring of existing content
async function scanExistingContent() {
  console.log('Starting periodic content scan...');

  const contentBatch = await getContentForScanning(1000);

  for (const content of contentBatch) {
    try {
      const hash = await shield.generateSecureHash(content.url);
      const check = await shield.checkHash(hash);

      if (check.isKnownCSAM) {
        console.error(`🚨 CRITICAL: Known CSAM found in existing content: ${content.id}`);

        // Execute emergency protocol
        await shield.executeEmergencyProtocol({
          contentId: content.id,
          userId: content.userId,
          discoveryMethod: 'periodic_scan'
        });
      }
    } catch (error) {
      console.error(`Error scanning content ${content.id}:`, error);
    }
  }
}

// Run hourly scans
setInterval(scanExistingContent, 60 * 60 * 1000);

// Admin dashboard (RESTRICTED ACCESS)
app.get('/admin/csam/dashboard', requireSecurityClearance, async (req, res) => {
  const stats = await shield.getStats({
    period: '30d',
    includeReports: true
  });

  res.json({
    success: true,
    stats: stats,
    warning: 'RESTRICTED: Security clearance required'
  });
});

// Compliance reporting (LEGAL TEAM ONLY)
app.get('/legal/csam/compliance-report', requireLegalAccess, async (req, res) => {
  const report = await shield.generateComplianceReport({
    period: req.query.period || 'monthly',
    format: 'legal'
  });

  res.json({
    success: true,
    report: report
  });
});

Best Practices (CRITICAL)

  1. Zero Tolerance: No exceptions, immediate action on detection
  2. Report Everything: When in doubt, report to NCMEC
  3. Preserve Evidence: Secure storage for law enforcement
  4. Staff Protection: Mental health support, limited exposure
  5. Never Distribute: Don't share detected content internally
  6. Legal Compliance: Follow all mandatory reporting laws
  7. User Privacy: Balance detection with legitimate user privacy
  8. Regular Updates: Keep hash databases current (hourly)
  9. Audit Everything: Complete logging for legal proceedings
  10. Encryption: Encrypt all evidence and sensitive data
  11. Access Control: Strict role-based access to systems
  12. Cooperation: Full cooperation with law enforcement

Emergency Contacts

  • NCMEC CyberTipline: 1-800-843-5678 / report.cybertip.org
  • FBI IC3: ic3.gov
  • Interpol: interpol.int/Crimes/Crimes-against-children
  • IWF (UK): iwf.org.uk
  • INHOPE: inhope.org

Mental Health Resources (for staff)

Working with CSAM detection is traumatic. Provide:

  • Regular counseling services
  • Rotation schedules
  • Debriefing sessions
  • Time off after exposure
  • Peer support groups
  • 24/7 crisis support
