Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sharedintellect Quorum

v0.7.3

Multi-agent validation framework — 6 independent AI critics evaluate artifacts against rubrics with evidence-grounded findings.

by Daniel (@dacervera)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign · View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (multi-agent validation) align with the included code and instructions: the repository contains a full Python reference implementation, CLI, rubrics, and prompt templates for multiple critics. Requiring python3/pip is appropriate. However, the registry metadata and SKILL.md are inconsistent about whether this is an instruction-only skill or an installable package (SKILL.md includes an install command that clones the repo and pip-installs requirements). The skill also declares both ANTHROPIC_API_KEY and OPENAI_API_KEY as required; that matches the project's multi-provider support but may be unnecessary if you only intend to use one provider.
Instruction Scope
Runtime instructions include cloning the repo and pip-installing requirements (which downloads and executes third‑party code) and running the CLI, which performs deterministic prescreens and then multiple LLM-based critics; the codebase also documents a Fixer agent that 'proposes and applies fixes' and may therefore modify files. The pipeline additionally references executing shell tools, running linters (Ruff/Bandit/DevSkim/PSScriptAnalyzer), and performing web searches. These are within the claimed purpose, but they expand the agent's authority to run local commands, perform network I/O, and modify artifacts — a material permission that should be explicitly controlled by the user.
Install Mechanism
SKILL.md's frontmatter includes an install step that runs 'git clone https://github.com/SharedIntellect/quorum.git /tmp/quorum-install && cd ... && pip install -r requirements.txt'. Cloning from GitHub is a well-known host, but pip-installing remote requirements will install arbitrary packages and may execute installation hooks. This is a standard but non-trivial install vector and should be audited (inspect requirements.txt and the repository) before running.
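Before running that install step, it helps to check whether the dependency list is even pinned, since unpinned requirements can resolve to different (and later, compromised) versions. A minimal sketch of such a check — the find_unpinned helper and the sample file contents are illustrative, not the skill's actual requirements.txt:

```python
def find_unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # A pinned requirement uses '==' (e.g. 'requests==2.31.0');
        # anything else ('pkg', 'pkg>=1.0') floats and should be reviewed.
        if "==" not in line:
            unpinned.append(line)
    return unpinned

# Illustrative contents, not Quorum's real dependency list:
sample = """\
anthropic==0.34.0
openai>=1.0        # floating lower bound
ruff
"""
print(find_unpinned(sample))  # -> ['openai>=1.0', 'ruff']
```

For stronger guarantees, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) refuses any artifact whose hash is not listed.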
Credentials
The skill requests ANTHROPIC_API_KEY and OPENAI_API_KEY. The project supports multiple model providers, so requesting both keys is explainable, but the registry lists both as required even though SKILL.md suggests you can set one provider in config. Requiring multiple high-privilege API keys by default increases exposure; prefer providing only the provider(s) you will actually use and follow least-privilege practices (use separate, scoped accounts where available).
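One least-privilege pattern is to read only the key for the provider you actually configured and leave the other unset. The env var names below come from this listing; the require_provider_key helper itself is a hypothetical sketch, not Quorum's API:

```python
import os

# Env var names as declared in the skill's runtime requirements.
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def require_provider_key(provider: str) -> str:
    """Fail fast if the key for the chosen provider is missing.

    Only the selected provider's variable is read; the other key can
    (and should) stay unset, limiting exposure if the tool misbehaves.
    """
    env_var = PROVIDER_KEYS[provider]
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it to use provider '{provider}'")
    return key
```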
Persistence & Privilege
always: false (good); the skill is user-invocable, and autonomous invocation is allowed (the default). The implementation writes run outputs (prescreen.json, verdict.json, report.md) and quorum-config.yaml, and includes an optional Fixer that can apply edits to targets. The ability to modify files and to persist a learning memory (known_issues.json) increases the risk profile — confirm and control whether automatic edits are enabled and where run artifacts are written.
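Quorum's own behaviour here should be checked in quorum-config.yaml; as a reference point, the pattern to insist on is an explicit approval gate so fixes default to a dry run. A generic sketch (the apply_fix name and approve flag are illustrative, not Quorum's API):

```python
from pathlib import Path

def apply_fix(path: Path, fixed_text: str, approve: bool = False) -> bool:
    """Write a proposed fix only when explicitly approved.

    Without approve=True this is a dry run: the proposal is announced,
    nothing is written, and the caller can diff/inspect before re-running.
    """
    if not approve:
        print(f"[dry-run] would rewrite {path} ({len(fixed_text)} chars); "
              "pass approve=True to apply")
        return False
    path.write_text(fixed_text)
    return True
```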
Scan Findings in Context
[prompt-injection:ignore-previous-instructions] unexpected: The SKILL.md / prompt files include patterns flagged as 'ignore-previous-instructions'. The repository contains many prompt templates for critics — such patterns might appear in prompt engineering artifacts, but they are the exact tokens used to override agent/system instructions and therefore worth auditing. Verify that prompts do not instruct external LLM calls to override safety or system-level controls.
[prompt-injection:system-prompt-override] unexpected: Pattern detected that could be used to attempt system-prompt overrides. This is plausible in a project that ships many agent prompts, but it increases risk: prompts passed, verbatim, to a model could attempt to change expected agent behaviour. Review prompt templates (ports/*/prompts, critics/*.md) before use.
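Auditing the prompt templates for these tokens is mechanical. A small sketch of such a sweep over markdown files — the regex covers the two phrase families the scanners flagged and is illustrative, not the scanners' actual rule set:

```python
import re
from pathlib import Path

# Illustrative patterns matching the flagged phrase families; extend as needed.
INJECTION_RE = re.compile(
    r"ignore\s+(all\s+)?previous\s+instructions|system\s+prompt\s+override",
    re.IGNORECASE,
)

def scan_for_injection(root: Path) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for every suspicious line under root."""
    hits = []
    for path in sorted(root.rglob("*.md")):
        text = path.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            if INJECTION_RE.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Run it over the directories the report names (e.g. the critics' prompt files and the ports tree), then read each hit in context to decide whether the phrase is inert documentation or text that actually reaches a model.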
What to consider before installing
Summary of what to check and how to reduce risk before installing or running this skill:

1) Audit the repo and dependencies first: do a manual git clone and inspect requirements.txt, CLI entry points, and any setup/install scripts before running pip install. Consider installing into an isolated virtualenv or disposable container/VM.
2) Review prompts and agent templates: the package includes many prompt files; search for phrases like 'ignore previous' or 'system prompt' and confirm they are used only internally and not sent to models in ways that would elevate privileges or leak secrets.
3) Limit API key exposure: provide only the provider key(s) you intend to use (Anthropic or OpenAI), and use scoped/ephemeral keys where possible. Treat these keys as sensitive — the tool will send artifact contents to those model endpoints.
4) Control automatic edits: the Fixer component can apply proposed fixes. Before running, check configuration options (or run in a 'dry-run' / --no-fixer mode) so the tool does not modify source files without explicit approval.
5) Sandbox runs when validating sensitive artifacts: the tool will run local linters, may invoke shell commands, and will call LLM APIs (network traffic). Run the tool on non-sensitive examples first and consider network-restricted testing for artifacts containing secrets.
6) Inspect prescreen/outputs: Quorum writes prescreen.json, verdict.json, report.md, and known_issues.json — review these outputs for unexpected data collection. Remove or redact secrets from artifacts before validation.
7) If you need stronger assurance: ask the maintainer for a signed release (PyPI release or GitHub release tag), or prefer installing the published PyPI package (quorum-validator) rather than cloning main, after verifying release provenance.
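The redaction step above can be done mechanically before artifacts are sent to model endpoints. A rough sketch — the patterns cover common API-key shapes only and are illustrative, not exhaustive:

```python
import re

# Rough patterns for common secret shapes (illustrative; not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),           # 'sk-' style API keys
    re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"),  # key=value assignments
]

def redact_secrets(text: str) -> str:
    """Replace likely secrets with a placeholder before sending text anywhere."""
    for pattern in SECRET_PATTERNS:
        # Keep an 'api_key=' style prefix (captured group) so context survives.
        text = pattern.sub(
            lambda m: (m.group(1) if m.lastindex else "") + "[REDACTED]",
            text,
        )
    return text
```

This is a backstop, not a guarantee: entropy-based scanners (or tools like detect-secrets) catch more shapes, and the safest option remains validating copies of artifacts with secrets removed at the source.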
golden-test-set/tests/test_score.py:197
Dynamic code execution detected.
reference-implementation/quorum/critics/code_hygiene.py:136
Dynamic code execution detected.
reference-implementation/quorum/critics/security.py:216
Dynamic code execution detected.
reference-implementation/tests/fixtures/bad/code-god-function.py:34
Dynamic code execution detected.
reference-implementation/tests/test_models.py:389
Dynamic code execution detected.
reference-implementation/tests/test_prescreen_properties.py:106
Potential obfuscated payload detected.
docs/architecture/IMPLEMENTATION.md:122
Prompt-injection style instruction pattern detected.
ports/claude-code/quorum-validation-skill/completeness-findings.md:50
Prompt-injection style instruction pattern detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Tags: agent-tools · code-review · critics · latest · multi-agent · quality · research · rubrics · testing · validation


Runtime requirements

Bins: python3, pip
Env: ANTHROPIC_API_KEY, OPENAI_API_KEY
