Nvidia Model Config

v1.0.5

Add the NVIDIA provider to OpenClaw with a SecretRef apiKey (no plaintext in openclaw.json). Documents shell vs. systemd gateway env so the key actually resolves.

by Wei Li (@0xli)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 0xli/nvidia-model-config.

Prompt Preview: Install & Setup
Install the skill "Nvidia Model Config" (0xli/nvidia-model-config) from ClawHub.
Skill page: https://clawhub.ai/0xli/nvidia-model-config
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install nvidia-model-config

ClawHub CLI


npx clawhub@latest install nvidia-model-config
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the included script and models. The script inserts a providers.nvidia block, uses a SecretRef-style env id, and includes the listed model entries and NVIDIA API endpoint — all directly relevant to the stated purpose.
Instruction Scope
Instructions and the script operate only on the target openclaw.json, optional env file (e.g., ~/.config/openclaw/gateway.env), and a user systemd override directory — all in-scope. Caution: using --inline-key or running with --dry-run while using inline-key will print plaintext keys to stdout; the SKILL.md warns about inline-key but does not call out the dry-run printing risk explicitly.
Install Mechanism
No install spec; this is an instruction-only skill with a small local Python script. Nothing is downloaded or written outside user-controlled files and typical user config locations.
Credentials
The skill requests no platform credentials and does not declare required env vars. It optionally writes a local env file and a user systemd override to expose NVIDIA_API_KEY to the OpenClaw gateway — appropriate for the task. Users should avoid --inline-key for long-lived/shared configs and be mindful that dry-run will display inline keys if used.
Persistence & Privilege
always: false is set, and no autonomous-invocation settings are unusual. The script may create a user-level systemd override for openclaw-gateway (user scope) and writes a per-user env file; this is expected for configuring a gateway service and does not modify other skills or system-wide configuration.
Assessment
This skill appears coherent and implements exactly what it claims, but review and follow these safety steps before use:

  • Prefer the secure default (SecretRef) over --inline-key. Only use --inline-key for short-lived local tests, and never commit configs with inline keys.
  • Beware: running --dry-run together with --inline-key will print the config (and the inline key) to stdout. Avoid dry-run when testing inline keys, or ensure your terminal/stdout is not being captured.
  • If you use --setup-env, the script writes an env file (mode 600) to the path you supply; confirm the path and permissions. Setting 0600 is good practice.
  • The script creates a user systemd override in ~/.config/systemd/user/... for the openclaw-gateway service. Verify the service name and that you want to reload/restart that user service.
  • Back up your openclaw.json (use --backup or a manual copy) before running in non-dry-run mode.
  • Inspect the script yourself (it is included) and run it locally; it performs only local file edits and contains no network-exfiltration code. If you run it on a machine with remote logging/monitoring, be mindful of where stdout/stderr may be captured.

If you want additional assurance, request a short code review of scripts/merge_nvidia_config.py for your environment, or run it first in a sandboxed/test workspace.


latest: vk97f9w1e1664sef0symhbg5f5s84acxs
154 downloads
0 stars
6 versions
Updated 3w ago
v1.0.5
MIT-0

NVIDIA Model Config Skill

Overview

This skill packages three reusable pieces:

  1. A script (scripts/merge_nvidia_config.py) that inserts the NVIDIA provider block into any openclaw.json file and configures apiKey as a SecretRef by default.
  2. Model entries for Mixtral, Moonshot Kimi, Kimi K2.5, Nemotron Super (1M ctx), Llama 3.1 Nemotron Ultra 253B (128K ctx), and MiniMax M2.5 (204.8K ctx) — delete extras or add more from openclaw models list --provider nvidia --all.
  3. Instructions for backups, secrets, and where NVIDIA_API_KEY must be set so the gateway can resolve it (this is not only openclaw.json).

Use the skill whenever you want to replicate the NVIDIA models.providers.nvidia entry without guessing which keys or nested objects to copy.

Quick start

  1. Copy or download this skill (e.g., rsync -av skills/nvidia-model-config /path/to/other/workspace/skills/).
  2. Obtain your NVIDIA API key and keep it secret (do not commit it).
  3. Run the script from the target workspace:
python skills/nvidia-model-config/scripts/merge_nvidia_config.py \
  --config openclaw.json --key "YOUR_KEY" --setup-env ~/.config/openclaw/gateway.env --setup-systemd --backup
  • --config defaults to openclaw.json in the current directory.
  • --key provides the API key (alternatively, set NVIDIA_API_KEY in your shell).
  • --setup-env writes the key to a dedicated environment file (e.g., ~/.config/openclaw/gateway.env).
  • --setup-systemd creates a systemd user override to load the environment file for the gateway.
  • --backup saves the original file as openclaw.json.bak before overwriting.
  • By default, the script writes models.providers.nvidia.apiKey as:
    • {"source":"env","provider":"default","id":"NVIDIA_API_KEY"}
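The merge step above can be sketched as follows. This is a minimal illustration, not the script's actual code: the field names follow the layout described in this README, and the /v1 path on the base URL is an assumption.

```python
import json

def merge_nvidia_provider(config: dict) -> dict:
    """Hypothetical sketch: insert a providers.nvidia block whose apiKey
    is a SecretRef rather than a plaintext key."""
    providers = config.setdefault("models", {}).setdefault("providers", {})
    nvidia = providers.setdefault("nvidia", {})
    # Assumed endpoint path; the bundled script keeps this in sync.
    nvidia["baseUrl"] = "https://integrate.api.nvidia.com/v1"
    # SecretRef: the gateway resolves NVIDIA_API_KEY from its own environment,
    # so no secret ever lands in openclaw.json.
    nvidia["apiKey"] = {"source": "env", "provider": "default", "id": "NVIDIA_API_KEY"}
    return config

config = merge_nvidia_provider({})
print(json.dumps(config["models"]["providers"]["nvidia"]["apiKey"]))
# {"source": "env", "provider": "default", "id": "NVIDIA_API_KEY"}
```

The point of the SecretRef indirection is that openclaw.json stays safe to commit or share; only the gateway's runtime environment holds the real key.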

Manual Environment Setup

If you prefer not to use --setup-systemd, you must set your key in the runtime environment where the OpenClaw gateway runs.

Interactive shell / CLI only (e.g. testing openclaw in a terminal):

export NVIDIA_API_KEY="$YOUR_KEY"

Gateway under systemd (typical on Linux) — the service does not read ~/.bashrc. Put the key in a file the unit loads, for example:

  • File: ~/.config/openclaw/gateway.env (mode 600):
NVIDIA_API_KEY=your_key_here
  • User unit drop-in ~/.config/systemd/user/openclaw-gateway.service.d/override.conf:
[Service]
Environment=NVIDIA_API_KEY=
EnvironmentFile=-/home/YOUR_USER/.config/openclaw/gateway.env

The empty Environment=NVIDIA_API_KEY= clears any inherited value so EnvironmentFile is the single source of truth. Then:

systemctl --user daemon-reload
systemctl --user restart openclaw-gateway.service

You can also keep a personal ~/.config/openclaw/secrets.env and source it from ~/.bashrc for CLI-only use; that does not replace the gateway env above.
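Before restarting the gateway, it is worth sanity-checking the env file. A small sketch (the helper name is hypothetical; the path and 0600 mode are the ones described above):

```python
import os
import stat

def check_env_file(path: str) -> bool:
    """Return True if the env file exists, is owner-only (mode 0600),
    and defines NVIDIA_API_KEY."""
    if not os.path.isfile(path):
        return False
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode != 0o600:
        return False
    with open(path) as fh:
        return any(line.startswith("NVIDIA_API_KEY=") for line in fh)
```

Run it against ~/.config/openclaw/gateway.env before `systemctl --user restart`; a False result usually means the file is missing, world-readable, or the key line is absent.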

If you want to preview the changes before writing, add --dry-run and capture the printed JSON.
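The dry-run behaviour can be sketched like this. This is a hypothetical write path, not the script's actual code, but it shows why --dry-run plus --inline-key prints the key:

```python
import json

def write_config(config: dict, path: str, dry_run: bool = False) -> None:
    # Hypothetical sketch: with dry_run the merged JSON is printed to stdout
    # instead of written. Anything in the config, including an inline key,
    # goes to stdout too.
    text = json.dumps(config, indent=2)
    if dry_run:
        print(text)
    else:
        with open(path, "w") as fh:
            fh.write(text)
```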

What the script does

  1. Removes legacy plaintext copies of NVIDIA_API_KEY from config (env.vars.* and env.*) when present.
  2. Creates or updates the models.providers.nvidia block with bundled NVIDIA models (Nemotron Super 1M ctx, Nemotron Ultra 253B ~128K ctx, MiniMax M2.5 ~204.8K ctx, plus Mixtral/Kimi entries). NVIDIA may return 403 if your key is not entitled to a model; pick a model that matches your account and catalog.
  3. Keeps the api/baseUrl values in sync with NVIDIA’s integrate.api.nvidia.com endpoint.
  4. Supports an explicit legacy mode when needed:
NVIDIA_API_KEY="$YOUR_KEY" \
  python skills/nvidia-model-config/scripts/merge_nvidia_config.py \
  --config openclaw.json --inline-key

Use --inline-key only for short-lived local tests.
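Step 1 above (removing legacy plaintext copies) can be sketched as follows; the env.vars.* and env.* paths are the ones named in the step, and the helper itself is hypothetical:

```python
def strip_legacy_key(config: dict, key: str = "NVIDIA_API_KEY") -> dict:
    """Hypothetical sketch: drop plaintext copies of the key from
    env.<key> and env.vars.<key>, leaving other entries untouched."""
    env = config.get("env", {})
    env.pop(key, None)
    env.get("vars", {}).pop(key, None)
    return config
```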

Optional adjustments

  • Set default model with openclaw models set nvidia/<model-id> (full id is nvidia/ + provider model id, e.g. nvidia/nvidia/nemotron-3-super-120b-a12b when the provider entry id is nvidia/nemotron-3-super-120b-a12b).
  • If the target install manages agent defaults manually, add fallback entries under agents.defaults.model.fallbacks so clients can recover if the primary model fails.
  • Double-check other agents’ models lists if they need aliases.
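The full-id composition from the first bullet can be sketched as a one-liner. The doubled prefix looks like a typo but is expected, because NVIDIA's own model ids already start with a vendor segment:

```python
def full_model_id(provider_model_id: str, provider: str = "nvidia") -> str:
    # OpenClaw's full id is "<provider>/" + the provider's own model id,
    # so "nvidia/nvidia/..." is correct, not a duplication bug.
    return f"{provider}/{provider_model_id}"

print(full_model_id("nvidia/nemotron-3-super-120b-a12b"))
# nvidia/nvidia/nemotron-3-super-120b-a12b
```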

Distribution tips

  1. Bundle this skill directory and any instructions or scripts you use into a .zip/.skill file to share with teammates.
  2. In your documentation, point operators to this SKILL.md so OpenClaw can reload it and the script automatically when they ask to “add NVIDIA models.”
  3. Keep real API keys outside of Git. Use environment variables or SecretManagers and rely on the script to merge them at runtime.
