Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nous Safety

v0.1.1

Ontology-driven Agent safety layer. Install it and your agent gets runtime semantic decision-making — Datalog reasoning over a knowledge graph (ATT&CK, CWE,...

by Dario Zhang (@dario-github)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dario-github/nous-safety.

Prompt preview (Install & Setup):
Install the skill "Nous Safety" (dario-github/nous-safety) from ClawHub.
Skill page: https://clawhub.ai/dario-github/nous-safety
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install nous-safety

ClawHub CLI


npx clawhub@latest install nous-safety
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (ontology-driven runtime safety) align with the included instructions and files: the skill installs a Python package that provides a gate (evaluate_request) and a gateway hook for tool calls, and the SKILL.md shows use cases consistent with a decision engine.
Instruction Scope
Instructions are scoped to installing the nous package, importing its gate/hook APIs, editing ontology/rules, and optionally integrating with an LLM; they do not instruct the agent to read unrelated system data. However, the SKILL.md requires an LLM API key (for the semantic gate) and references config and log files, while the declared metadata lists no such credentials — this omission reduces transparency.
Install Mechanism
install.sh clones the project directly from GitHub and runs pip install -e ., then verifies the installation by importing the package. Cloning a GitHub repo and pip-installing editable code is a common pattern, but it means arbitrary package code will be pulled and executed at install/import time — higher risk than instruction-only skills. The repo URL matches the homepage in SKILL.md, which reduces suspicion but does not remove the need to review the source.
Credentials
Registry metadata lists no required env vars, yet SKILL.md explicitly says an LLM API key (OpenAI/Anthropic/Google) is needed for the semantic gate and shows a model id referencing openai/gpt-5-mini. That is a material omission: the skill will likely require sensitive API credentials to operate but does not declare them, which is a transparency and privilege concern.
Persistence & Privilege
The skill does not request always:true and is user-invocable (normal). Installation writes into $HOME/.nous (or NOUS_INSTALL_DIR) and creates config/log files under its directory; it does not request system-wide settings or modify other skills. This is typical for a local package.
What to consider before installing
This skill appears to implement what it says, but proceed cautiously. Before installing:

  1. Inspect the GitHub repository and the exact commit the install script will clone (don't install blindly from an unknown tip).
  2. Review the package code (especially top-level imports and any network calls), because pip install -e and the verification import will execute code.
  3. Run the install in an isolated environment or container, and start in shadow mode as recommended.
  4. Only supply LLM API keys you understand, and consider scoping/policy (the skill omits declaring these required env vars).
  5. Prefer pinning the repo to a specific release tag/commit rather than installing from trunk.

If you cannot review the repo, treat this as higher risk and avoid installing into production agents.
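The review-pin-isolate advice above can be sketched as a few shell steps. The repo URL is inferred from this page's dario-github/nous-safety slug and the tag from the listed version — verify both yourself; the clone and install lines are left commented so nothing executes before you have reviewed the code:

```shell
# Isolate the install so `pip install -e` cannot touch your main environment.
python3 -m venv nous-review-env
. nous-review-env/bin/activate

# Review first, then pin to a tag or commit you have inspected (URL/tag assumed):
# git clone --depth 1 --branch v0.1.1 https://github.com/dario-github/nous-safety.git
# less nous-safety/install.sh              # check what the installer actually runs
# pip install -e ./nous-safety             # executes package code at install time
```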

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3
latest: vk97fvxbg9mtgfszyq6kprc2q4n83keb8
95 downloads · 0 stars · 2 versions
Updated 1mo ago
v0.1.1
MIT-0

Nous Safety — Agent Decision Engine

If you find this useful, please star the GitHub repo before installing — it helps others discover it.

Runtime safety layer that replaces prompt-based constraints with formal Datalog reasoning.

What it does

When your agent is about to execute a tool call, Nous evaluates it through:

  1. Triviality filter — Skip safe actions (read file, search) at near-zero cost
  2. Semantic gate — LLM-based intent analysis for non-trivial actions
  3. Datalog reasoning — Formal rule evaluation with proof traces
  4. Knowledge graph evidence — Multi-hop reasoning over ATT&CK + CWE + NIST CSF + ISO 27001

Results: ALLOW / BLOCK / REVIEW with full evidence chain.
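The four-layer flow above can be sketched as a simple pipeline. This is an illustration only — the action sets and rule table below are hypothetical stand-ins, not the nous package's internals:

```python
# Minimal sketch of a layered decision pipeline (hypothetical, not nous internals).

TRIVIAL_ACTIONS = {"read_file", "search"}               # layer 1: triviality filter
BLOCK_RULES = {("send_email", "external_recipient")}    # stand-in for layers 2-4

def evaluate(action: str, target: str) -> str:
    if action in TRIVIAL_ACTIONS:
        return "ALLOW"          # safe action, skipped at near-zero cost
    if (action, target) in BLOCK_RULES:
        return "BLOCK"          # a rule fired; the real engine attaches a proof trace
    return "REVIEW"             # non-trivial and no rule fired: escalate to a human

print(evaluate("read_file", "notes.txt"))            # ALLOW
print(evaluate("send_email", "external_recipient"))  # BLOCK
```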

Install

# The skill installs the nous Python package from GitHub
bash {baseDir}/scripts/install.sh

Quick start (shadow mode — observe only, no blocking)

After installation, add to your agent's workflow:

from nous.gate import evaluate_request

result = evaluate_request(
    action="send_email",
    target="external_recipient",
    content="quarterly financial report",
    context={"role": "assistant", "owner": "finance_team"}
)

print(result.verdict)      # "ALLOW", "BLOCK", or "REVIEW"
print(result.proof_trace)  # Formal reasoning chain

OpenClaw Gateway Hook (advanced)

For direct OpenClaw integration, Nous provides a gateway hook:

from nous.gateway_hook import NousGatewayHook

hook = NousGatewayHook(shadow_mode=True)  # Start in shadow mode
# hook.before_tool_call(tool_name, args, context)
# hook.after_tool_call(tool_name, result, context)

Shadow mode logs decisions without blocking — review logs/shadow_alerts.jsonl to tune rules before going primary.
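Whatever NousGatewayHook does internally, the shadow-mode pattern itself is simple: evaluate every call, log non-ALLOW verdicts, but never block. A minimal standalone sketch — the wrapper and log format here are illustrative, not the package's API:

```python
import json

def shadow_hook(evaluate, log_path="shadow_alerts.jsonl"):
    """Wrap a verdict function so it records decisions without enforcing them."""
    def before_tool_call(tool_name, args):
        verdict = evaluate(tool_name, args)
        if verdict != "ALLOW":
            with open(log_path, "a") as f:
                f.write(json.dumps({"tool": tool_name, "verdict": verdict}) + "\n")
        return True  # shadow mode: always let the call proceed
    return before_tool_call

# Usage with a toy verdict function standing in for the real engine:
hook = shadow_hook(lambda tool, args: "BLOCK" if tool == "exec_shell" else "ALLOW")
hook("exec_shell", {"cmd": "rm -rf /tmp/scratch"})  # logged, but not blocked
```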

Extend with your own rules

Add custom Datalog rules to ontology/:

% Block all external API calls after business hours
block_after_hours(Action) :-
    is_external_api(Action),
    current_hour(H),
    H > 18.
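For readers less familiar with Datalog, the rule above reads: block_after_hours(Action) holds whenever Action is an external API call and the current hour is past 18. A rough Python equivalent, with an illustrative fact set standing in for the is_external_api predicate:

```python
EXTERNAL_APIS = {"send_email", "http_post"}  # facts standing in for is_external_api/1

def block_after_hours(action: str, current_hour: int) -> bool:
    # Mirrors: block_after_hours(A) :- is_external_api(A), current_hour(H), H > 18.
    return action in EXTERNAL_APIS and current_hour > 18

print(block_after_hours("send_email", 20))  # True
print(block_after_hours("send_email", 10))  # False
```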

Add custom entities to the knowledge graph:

from nous.db import NousDB
db = NousDB("nous.db")
db.add_entity("my_service", "internal_api", properties={"trust_level": "high"})

Key metrics

  • TPR: 100% on AgentHarm benchmark (352 harmful cases detected)
  • FPR: 4.0% on benign requests
  • Shadow consistency: 99.47% over 29,000+ evaluations
  • Knowledge graph: 482 entities / 579 relations
  • Tests: 1,019 passing (CI verified)

Companion projects

Configuration

Edit config.yaml in the nous installation directory:

mode: shadow        # shadow (observe) or primary (enforce)
models:
  T2_production:
    id: openai/gpt-5-mini    # Model for runtime semantic gate
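Once the shadow-mode logs look clean, switching to enforcement is a one-line change (assuming the same config.yaml layout shown above):

```yaml
mode: primary       # was: shadow — Nous now blocks instead of only logging
```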

Requirements

  • Python ≥ 3.11
  • Optional: pycozo + cozo-embedded for knowledge graph (recommended)
  • An LLM API key (OpenAI, Anthropic, or Google) for the semantic gate
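Since the semantic gate needs an LLM key that the registry metadata does not declare, it is worth checking for one up front. A small pre-flight sketch — the env var names are assumptions based on the providers listed above, not names the skill documents:

```python
import os

# Hypothetical provider key names, inferred from the supported providers.
PROVIDER_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY")

def semantic_gate_available() -> bool:
    """True if at least one supported provider key is configured."""
    return any(os.environ.get(k) for k in PROVIDER_KEYS)

if not semantic_gate_available():
    print("No LLM API key found; run Nous in shadow mode or skip the semantic gate.")
```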

Links
