Sharedintellect Quorum

ReviewAudited by ClawScan on May 10, 2026.

Overview

Prompt-injection indicators were detected in the submitted artifacts (ignore-previous-instructions, system-prompt-override); human review is required before treating this skill as clean.

This skill is reasonable for its stated purpose, but treat it like a tool that can read selected files, call external LLM providers, write local reports/memory, and optionally edit artifacts. Verify the GitHub source before installation, use a virtual environment, provide only needed API keys, run it on intended non-sensitive files, and keep version-control backups before enabling fixer or thorough workflows. ClawScan detected prompt-injection indicators (ignore-previous-instructions, system-prompt-override), so this skill requires review even though the model response was benign.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Installing the skill may run code and dependencies obtained from GitHub/PyPI, so a compromised upstream source could affect the local environment.

Why it was flagged

The skill tells users or the platform to install code and Python dependencies from an external GitHub repository. This is expected for a Python CLI, but it depends on external source and package provenance.

Skill content
git clone https://github.com/SharedIntellect/quorum.git /tmp/quorum-install && cd /tmp/quorum-install/reference-implementation && pip install -r requirements.txt
Recommendation

Install in a virtual environment, verify the repository and requirements before running, and prefer pinned or packaged releases when available.

What this means

The skill can spend tokens or make requests under the configured provider account.

Why it was flagged

The skill uses AI-provider API keys to run its critics. This is purpose-aligned, and the artifacts do not show hardcoded keys or credential leakage, but provider credentials are still sensitive.

Skill content
"env":["ANTHROPIC_API_KEY","OPENAI_API_KEY"] ... export ANTHROPIC_API_KEY=sk-ant-... # or export OPENAI_API_KEY=sk-...
Recommendation

Use least-privilege/project-scoped API keys where possible, set budgets or provider limits, and avoid exposing keys in files being reviewed.

What this means

Files submitted for validation may be exposed to the configured LLM provider and may include private code, documents, credentials, or PII if the user selects them.

Why it was flagged

The skill reviews user-selected artifacts with LLM critics, implying artifact contents may be sent to configured model providers. This is central to the validation purpose, but it is a sensitive data flow.

Skill content
Run a quorum check on any file ... All depth profiles include the deterministic pre-screen ... before any LLM critic runs.
Recommendation

Run it only on files you intend to share with the configured model provider, and review or redact sensitive content before validation.

What this means

Future validation results may be influenced by previous findings stored locally, which could be misleading if the memory is polluted or not project-scoped.

Why it was flagged

The framework describes persistent learning memory that can influence future checks. This is purpose-aligned for recurring validation patterns, but persistent memory can carry stale or poisoned patterns across runs.

Skill content
Learning memory tracks recurring patterns and promotes high-frequency findings to mandatory checks ... captures new failure patterns in `known_issues.json`
Recommendation

Keep learning memory project-scoped, review `known_issues.json`, and reset or disable it when validating unrelated or untrusted artifacts.

What this means

If fixer loops are enabled, local files may be changed based on model-generated remediation suggestions.

Why it was flagged

The framework includes an optional fixer that can apply changes to reviewed artifacts. This is disclosed and related to validation, but it is more impactful than read-only review.

Skill content
Fixer Agent — Proposes and applies fixes for CRITICAL/HIGH findings (optional, 1-2 loops max)
Recommendation

Use fixer/thorough modes only in a clean git working tree or on copies, review diffs before accepting changes, and avoid enabling automatic fixes for critical production files.

What this means

A malicious file being reviewed could attempt to distract, redirect, or bias the validation agents, potentially weakening the review.

Why it was flagged

A provided scan snippet notes that user artifact text is inserted into critic prompts. For an LLM-based validator this is expected, but adversarial artifacts can try to influence critic behavior if not treated strictly as data.

Skill content
The `{artifact_text}` placeholder injects the entire contents of a user-provided file directly into critic prompts.
Recommendation

Treat reviewed artifacts as untrusted input, keep strong prompt/data boundaries, and have users verify high-impact verdicts manually.