Hermes Agent Health Check

v1.1.2

Audit a NousResearch/hermes-agent checkout or fork for Hermes-specific runtime-contract drift, command-surface splits, memory/skill/gateway health, and agent...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for huangrichao2020/hermes-agent-health-check.

Prompt preview (Install & Setup):
Install the skill "Hermes Agent Health Check" (huangrichao2020/hermes-agent-health-check) from ClawHub.
Skill page: https://clawhub.ai/huangrichao2020/hermes-agent-health-check
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install hermes-agent-health-check

ClawHub CLI


npx clawhub@latest install hermes-agent-health-check
Security Scan
Capability signals
Crypto · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign (medium confidence)
Purpose & Capability
The name, description, README, and SKILL.md all consistently describe an architecture-and-health scanner for NousResearch/hermes-agent checkouts. The instructions (install hermescheck and run it against a repo path) are aligned with that stated purpose; nothing in the package requires unrelated credentials or binaries.
Instruction Scope
The runtime instructions are narrowly focused: install the hermescheck package and run it against a Hermes Agent checkout, producing local report files (audit_results.json, audit_report.md). The instructions do not request unrelated env vars or system-wide reads. However, running the recommended commands will cause third-party code to read the target repo contents (intended) and write report files; those reports can contain sensitive evidence (e.g., discovered secrets), so you should not run it directly against production repositories with unredacted secrets.
Install Mechanism
The skill is instruction-only (no install spec embedded), but the Quick Start tells users to 'pip install hermescheck' (PyPI) and run it. Installing and executing a PyPI package runs third-party code on your system — a normal and expected behavior for developer tools but carries standard supply-chain risk. The README points to a GitHub origin which helps verification. Risk is moderate: verify package ownership, inspect source, or run in an isolated VM/virtualenv.
Credentials
The skill declares no required env vars, binaries, or config paths, which is proportional to a static/structural code scanner. Be aware that hermescheck scanners look for patterns related to network calls, hidden LLM invocations, exec/eval, etc.; the scanner itself could be extended to make network calls or require credentials in some profiles, but nothing in SKILL.md requests unrelated secrets.
Persistence & Privilege
The skill does not request persistent presence (always:false), does not declare config paths, and is user-invocable. There is no evidence it attempts to modify other skills or system-wide agent settings. Autonomous invocation is allowed by platform default but is not combined with other red flags here.
Assessment
This skill is coherent and appears to do what it says: run the hermescheck scanner against a Hermes Agent repo. The main operational risk is installing and executing a third-party Python package from PyPI. Before running: (1) inspect the hermescheck source on its GitHub repo and/or pin a known-good release; (2) install and run it in an isolated environment (virtualenv, container, or VM); (3) run it on a copy of the repo or a sanitized snapshot if your repo contains secrets (scan output can include evidence of secrets); (4) prefer running from a local clone (python -m hermescheck ./path) instead of blindly pip-installing system-wide; and (5) if you plan to let an autonomous agent invoke this skill, restrict that agent's scope and review any generated report files before sharing externally. For higher assurance, provide the hermescheck package source for manual review or run the tool in a fully offline, sandboxed environment.

Like a lobster shell, security has layers — review code before you run it.

Tags: agent-audit · hermes-agent · latest
49 downloads · 0 stars · 1 version
Updated 2d ago · v1.1.2 · MIT-0

Hermes Agent Health Check

Audit the architecture and health of a Hermes Agent checkout, fork, or deployment support repo.

Hermes Agent has a connected runtime: agent loop, command registry, CLI, TUI, gateway, skills, memory, cron, tools, plugins, and terminal environments. hermescheck helps keep those surfaces aligned.

When to Use

  • You are preparing a Hermes Agent PR and want a repeatable architecture review
  • A Hermes fork works in CLI but not gateway, TUI, skills, cron, or plugins
  • A new slash command risks drifting across surfaces
  • A tool or environment change needs clearer capability boundaries
  • Memory, session search, or skill behavior regressed after a refactor
  • Startup paths or background jobs became hard to reason about

Quick Start

pip install hermescheck
hermescheck /path/to/hermes-agent

Produces audit_results.json and audit_report.md.
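Once the run finishes, the JSON report can be post-processed directly. A minimal sketch of summarizing findings by severity, using a hypothetical report excerpt (field names follow the Report Schema section and may differ between hermescheck releases):

```python
import json
from collections import Counter

# Hypothetical excerpt of audit_results.json; the field names
# (overall_health, findings) are taken from the Report Schema section,
# not verified against a specific hermescheck release.
report = {
    "overall_health": "medium_risk",
    "findings": [
        {"scanner": "Hardcoded Secrets", "severity": "critical", "evidence": "config.py:12"},
        {"scanner": "Token Usage Budget", "severity": "high", "evidence": "loop.py:88"},
        {"scanner": "Missing Observability", "severity": "medium", "evidence": "agent.py:5"},
    ],
}
# In a real run: report = json.load(open("audit_results.json"))

# Tally findings per severity level for a quick triage view.
counts = Counter(f["severity"] for f in report["findings"])
print("overall_health:", report["overall_health"])
for level in ("critical", "high", "medium", "low"):
    if counts[level]:
        print(f"  {level}: {counts[level]} finding(s)")
```

Treat the printed evidence paths as sensitive: they can point at secrets in your repo.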

The 12-Layer Stack

| # | Layer | What Goes Wrong |
| --- | --- | --- |
| 1 | System prompt | Conflicting instructions, instruction bloat |
| 2 | Session history | Stale context from previous turns |
| 3 | Long-term memory | Pollution across sessions |
| 4 | Distillation | Compressed artifacts re-entering as pseudo-facts |
| 5 | Active recall | Redundant re-summary layers wasting context |
| 6 | Tool selection | Wrong tool routing, model skips required tools |
| 7 | Tool execution | Hallucinated execution: claims to call but doesn't |
| 8 | Tool interpretation | Misread or ignored tool output |
| 9 | Answer shaping | Format corruption in final response |
| 10 | Platform rendering | UI/API/CLI mutates valid answers |
| 11 | Hidden repair loops | Silent fallback/retry agents running second LLM pass |
| 12 | Persistence | Expired state or cached artifacts reused as live evidence |

Audit Scanners

| # | Scanner | Severity | What It Catches |
| --- | --- | --- | --- |
| 1 | Hardcoded Secrets | critical | API keys, tokens, credentials in source code |
| 2 | Tool Enforcement Gap | high | "Must use tool X" in prompt but no code validation |
| 3 | Hidden LLM Calls | high | Secret second-pass LLM calls in fallback/repair loops |
| 4 | Unrestricted Code Execution | critical | exec(), eval(), subprocess(shell=True) without sandbox |
| 5 | Static Bug Inference | high | Code-level bug patterns inferred without runtime execution |
| 6 | Token Usage Budget | high | Large default context windows, full-history prompts, missing thrift controls |
| 7 | Memory Lifecycle Governance | medium | Memory without types, lifecycle, retrieval budgets, decay, or evidence pointers |
| 8 | RAG Pipeline Governance | medium | Retrieval without chunk, top-k, rerank, ingestion, or context budget controls |
| 9 | Self-Evolution Capability | high | Learning loops without external signals, source reading, constraint fit, safe landing, or verification |
| 10 | Loop Safety Budget | high | Tool/agent loops without max-iteration, retry budget, stuck-job, or duplicate-call controls |
| 11 | Plugin / Remote Tool Boundary | high | Executable plugins and MCP/OpenAPI tools without sandbox, schema, allowlist, or approval boundaries |
| 12 | Output Pipeline Mutation | medium | Response transformation corrupting correct answers |
| 13 | Missing Observability | medium | No tracing, logging, cost tracking, or audit trail |
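To make the scanner idea concrete, here is a sketch in the spirit of scanner #1 (Hardcoded Secrets). The regex patterns and the finding shape are illustrative assumptions, not hermescheck's actual rule set:

```python
import re

# Illustrative patterns only; a real secrets scanner uses a much larger
# and better-tuned rule set.
SECRET_PATTERNS = [
    (re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"), "api key assignment"),
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "bearer-style token literal"),
]

def scan_source(text: str, filename: str = "<memory>") -> list[dict]:
    """Return critical findings for lines matching a secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, label in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append({
                    "scanner": "Hardcoded Secrets",
                    "severity": "critical",
                    "evidence": f"{filename}:{lineno}",
                    "label": label,
                })
    return findings

sample = 'API_KEY = "abcd1234abcd1234abcd"\ntimeout = 30\n'
print(scan_source(sample, "config.py"))
```

The evidence field points at file and line, matching the "severity-ranked issues with evidence refs" contract in the report schema.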

Severity Model

| Level | Meaning |
| --- | --- |
| critical | Agent can confidently produce wrong operational behavior |
| high | Agent frequently degrades correctness or stability |
| medium | Correctness usually survives but output is fragile or wasteful |
| low | Mostly cosmetic or maintainability issues |
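One plausible way these levels roll up into the overall_health verdict is worst-finding-wins. The mapping below is an assumption for illustration, not hermescheck's documented formula:

```python
# Assumed severity ordering and health mapping; hermescheck's actual
# score formula (see maturity_score in the report schema) may differ.
SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}
HEALTH_BY_RANK = {3: "critical_risk", 2: "high_risk", 1: "medium_risk", 0: "low_risk"}

def overall_health(severities: list[str]) -> str:
    """Map a list of finding severities to a single health verdict."""
    if not severities:
        return "low_risk"
    worst = max(SEVERITY_RANK[s] for s in severities)
    return HEALTH_BY_RANK[worst]

print(overall_health(["medium", "high", "low"]))  # worst finding drives the verdict
```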

Fix Strategy

Default fix order (code-first, not prompt-first):

  1. Code-gate tool requirements — enforce in code, not just prompt text
  2. Remove or narrow hidden repair agents — make fallback explicit with contracts
  3. Reduce context duplication — same info through prompt + history + memory + distillation
  4. Tighten memory admission — user corrections > agent assertions
  5. Tighten distillation triggers — don't compress what shouldn't be compressed
  6. Reduce rendering mutation — pass-through, don't transform
  7. Convert to typed JSON envelopes — structured internal flow, not freeform prose
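Fix #1 (code-gated tool requirements) can be sketched as a wrapper that refuses to emit an answer unless the required tool actually ran, rather than trusting "you must call search" in prompt text. The class and method names here are hypothetical:

```python
class ToolGateError(RuntimeError):
    """Raised when an answer is finalized without its required tools."""

class GatedAgent:
    # Hypothetical wrapper: enforcement lives in code, not in the prompt.
    def __init__(self, required_tools: set[str]):
        self.required_tools = required_tools
        self.calls: list[str] = []

    def record_tool_call(self, name: str) -> None:
        # In a real agent loop this hook fires on every executed tool.
        self.calls.append(name)

    def finalize(self, answer: str) -> str:
        missing = self.required_tools - set(self.calls)
        if missing:
            raise ToolGateError(f"answer blocked; tools never executed: {sorted(missing)}")
        return answer

agent = GatedAgent(required_tools={"search"})
agent.record_tool_call("search")
print(agent.finalize("grounded answer"))  # passes the gate
```

This is exactly the gap scanner #2 (Tool Enforcement Gap) flags: a prompt-level "must use tool X" with no code path that fails when the tool is skipped.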

Report Schema

Reports follow a formal JSON Schema (see references/report-schema.json) with:

  • overall_health: critical_risk | high_risk | medium_risk | low_risk
  • findings: array of severity-ranked issues with evidence refs
  • maturity_score: positive signal ledger, penalty ledger, score formula, and expected recovery directions
  • ordered_fix_plan: prioritized fix steps with rationale
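A consumer can sanity-check a report's top-level shape against the fields listed above before trusting it. A full check should validate against references/report-schema.json (e.g. with the jsonschema package); this stdlib-only sketch just asserts the documented contract:

```python
# Stdlib-only structural check; the required keys and enum come from the
# Report Schema bullets above, not from the actual schema file.
REQUIRED_KEYS = {"overall_health", "findings", "maturity_score", "ordered_fix_plan"}
HEALTH_LEVELS = {"critical_risk", "high_risk", "medium_risk", "low_risk"}

def check_report(report: dict) -> list[str]:
    """Return a list of structural problems; empty means the shape holds."""
    problems = []
    missing = REQUIRED_KEYS - report.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if report.get("overall_health") not in HEALTH_LEVELS:
        problems.append("overall_health outside the documented enum")
    if not isinstance(report.get("findings"), list):
        problems.append("findings is not an array")
    return problems

report = {
    "overall_health": "low_risk",
    "findings": [],
    "maturity_score": {"score": 0.8},
    "ordered_fix_plan": [],
}
print(check_report(report))  # an empty list means the top-level shape holds
```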

Anti-Patterns to Avoid

  • ❌ Saying "the model is weak" without falsifying the wrapper first
  • ❌ Saying "memory is bad" without showing the contamination path
  • ❌ Letting a clean current state erase a dirty historical incident
  • ❌ Treating markdown prose as a trustworthy internal protocol
  • ❌ Accepting "must use tool" in prompt text when code never enforces it
