ShieldCortex

Review

Audited by ClawScan on May 10, 2026.

Overview

ShieldCortex is purpose-aligned, but it deserves review because it can automatically run an unpinned npm command and persist/reinject conversation memories by default.

Review this skill before installing. If you use it, install a trusted pinned ShieldCortex package yourself, verify the binary path, consider disabling auto-memory by default, periodically inspect or clear ~/.shieldcortex memories, and only enable cloud sync/API keys if you are comfortable sending the relevant data to that service.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A future or compromised npm package version could run during agent lifecycle use, not just during a manual setup command.

Why it was flagged

This is an automatic, unpinned npm install/run path during first use rather than a pinned, explicit install step, which creates supply-chain exposure.

Skill content
ShieldCortex installs automatically on first use via `npx -y shieldcortex`
Recommendation

Install a trusted, pinned ShieldCortex version yourself before enabling the hook, and prefer disabling automatic npx fallback or requiring explicit approval before first execution.

What this means

The skill can run local Node/npm tooling while handling memory and scanning events.

Why it was flagged

The runtime executes local CLI commands to call the ShieldCortex MCP server. This is expected for the integration, but users should understand that lifecycle hooks can launch these commands.

Skill content
`resolvedServerCmd = "npx -y shieldcortex"; ... execFile("npx", cmdArgs, {`
Recommendation

Review the installed binary path, keep npm tooling trusted, and avoid enabling the hook in environments where agents should not launch local commands.

What this means

Sensitive or misleading conversation content could be stored and influence later agent behavior across sessions.

Why it was flagged

Conversation-derived content can be persisted automatically and later reintroduced into future agent context, creating both sensitive-data retention and memory-poisoning risk.

Skill content
Auto-memory extraction is enabled by default... Captures the current session transcript... Saves memories... Injects them into the agent's bootstrap context
Recommendation

Disable auto-memory unless you need it, review stored memories regularly, and require user approval before writing or injecting high-impact memories.

What this means

Secrets in .env files or a cloud API key may be exposed to the local scanner and, if cloud sync is enabled, to the configured service.

Why it was flagged

The skill may read local environment files containing secrets and can use an optional cloud API key; this is disclosed and aligned with security scanning/cloud sync, but it represents sensitive authority.

Skill content
$CWD/.env (env-scanner checks for leaked secrets — reads, never writes) ... SHIELDCORTEX_API_KEY: Cloud sync API key
Recommendation

Only enable cloud features when needed, use limited-scope API keys, and run scans in projects where reading .env is acceptable.
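One way to keep cloud features strictly opt-in is to gate them on both the key and an explicit flag. `SHIELDCORTEX_API_KEY` is named in this review; the `SHIELDCORTEX_CLOUD_SYNC` opt-in variable is a hypothetical addition for illustration:

```javascript
// Cloud sync is enabled only when an API key is present AND the user has
// explicitly opted in -- a key left in the environment is not enough on its own.
function cloudSyncEnabled(env) {
  // SHIELDCORTEX_CLOUD_SYNC is a hypothetical opt-in flag, not documented by the skill.
  return Boolean(env.SHIELDCORTEX_API_KEY) && env.SHIELDCORTEX_CLOUD_SYNC === "1";
}
```

Pairing the key with an explicit flag avoids accidentally syncing scan results just because a key happens to be set in the shell.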

Note · High Confidence
ASI10: Rogue Agents
What this means

The memory/scanning system may keep running on future agent events until disabled or uninstalled.

Why it was flagged

The skill establishes persistent lifecycle hooks. This is disclosed and purpose-aligned, but it means the integration continues operating after setup.

Skill content
Lifecycle event handlers... registered in `~/.claude/settings.json` during setup and can be removed at any time.
Recommendation

Know where the lifecycle hooks are registered and remove or disable them if you no longer want persistent memory/scanning behavior.
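Removing the hooks means editing `~/.claude/settings.json`. The sketch below assumes a `hooks` object keyed by event name, which may not match the file you actually see; inspect the real structure first:

```javascript
// Return a copy of the settings object with any hook entry that mentions
// "shieldcortex" removed, leaving unrelated hooks and settings intact.
// The hooks schema assumed here is an illustration, not a documented format.
function removeShieldCortexHooks(settings) {
  const out = { ...settings };
  if (out.hooks && typeof out.hooks === "object") {
    out.hooks = Object.fromEntries(
      Object.entries(out.hooks).map(([event, entries]) => [
        event,
        (entries || []).filter(
          (entry) => !JSON.stringify(entry).toLowerCase().includes("shieldcortex")
        ),
      ])
    );
  }
  return out;
}
```

Working on a copy (and writing it back only after a diff) keeps a malformed edit from breaking unrelated agent configuration.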