Local Inference Context
Pass
Audited by VirusTotal on May 5, 2026.
Overview
Type: OpenClaw Skill
Name: local-inference-context
Version: 1.0.0

The skill provides purely instructional guidance for an AI agent managing context constraints when using local inference backends such as Ollama or llama.cpp. It relies on standard diagnostic commands (nvidia-smi, curl) and efficient file-reading practices (sed, grep) to prevent VRAM overflows and 503 errors, with no evidence of malicious intent or data exfiltration.
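The sed/grep file-reading practices mentioned above might look like the following sketch. The file name and search patterns are illustrative, not taken from the skill itself:

```shell
# Build a sample 1000-line file to illustrate (illustrative path)
printf 'line %s\n' $(seq 1 1000) > /tmp/sample.log

# Read only lines 100-120 instead of loading the whole file into context
sed -n '100,120p' /tmp/sample.log

# Count matches first, so the agent knows the result size before printing
grep -c 'line 9' /tmp/sample.log

# Print matching lines with line numbers, capped at the first 5 hits
grep -n 'line 42$' /tmp/sample.log | head -n 5
```

The point of each step is to bound how much text enters the agent's context: a line range, a count, or a capped match list, rather than the full file.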
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent may run local diagnostic commands or query a local inference server when helping manage context pressure.
The skill suggests local shell commands and a localhost API check to measure GPU and inference-server state. This aligns with the skill's stated local-inference purpose, but it is still local tool use that users should recognize.
nvidia-smi --query-gpu=memory.used,memory.free,memory.total ... curl -s http://localhost:8081/slots | python3 -m json.tool
Use this skill only if you expect the agent to inspect local GPU/backend status, and review the suggested commands before running them in sensitive environments.
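A pre-flight check built from the commands in this finding might look like the sketch below. The port 8081 and /slots endpoint follow the skill's own example (llama.cpp's server exposes per-slot state there), the sample fallback values and the 2 GiB threshold are hypothetical:

```shell
# Query GPU memory; fall back to a sample line if no GPU/driver is present,
# so the parsing logic can still be exercised (sample values are hypothetical)
gpu_line=$(nvidia-smi --query-gpu=memory.used,memory.free,memory.total \
             --format=csv,noheader,nounits 2>/dev/null | head -n 1)
gpu_line=${gpu_line:-"9216, 3072, 12288"}   # used, free, total in MiB

# Second CSV field is free memory in MiB
free_mib=$(echo "$gpu_line" | cut -d, -f2 | tr -d ' ')

# Hypothetical threshold: warn when under 2 GiB free
if [ "$free_mib" -lt 2048 ]; then
  echo "VRAM low: ${free_mib} MiB free; consider a smaller context window"
else
  echo "VRAM ok: ${free_mib} MiB free"
fi

# Check llama.cpp server slot state; -f makes curl fail on HTTP errors (e.g. 503)
curl -sf http://localhost:8081/slots | python3 -m json.tool 2>/dev/null \
  || echo "inference server not reachable on :8081"
```

Running the check before sending a large prompt lets the agent shrink its request instead of triggering a VRAM overflow or a 503 from an overloaded server.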
Private project details or operational context could remain in the agent's memory if included in checkpoints.
The skill intentionally uses persistent memory/checkpoints to survive compaction. That is useful for the stated purpose, but it may retain local paths, configuration details, or other task information.
Write key values to memory immediately after each tool call ... Critical values: [file paths, ports, error codes, config keys]
Avoid saving secrets, tokens, full logs, credentials, or sensitive file contents to memory; keep checkpoints limited to non-sensitive summaries.
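A minimal sketch of that hygiene advice: filter secret-like keys out of session state before persisting it as a checkpoint. The file paths, key names, and the pattern list are illustrative:

```shell
# Illustrative session state, mixing operational keys with secret-like ones
cat > /tmp/session_state.txt <<'EOF'
model_path=/models/llama-3-8b.gguf
server_port=8081
last_error=503
API_TOKEN=sk-notarealtoken
db_password=notarealpassword
EOF

# Drop anything that looks like a credential before writing the checkpoint
grep -Eiv '(token|secret|password|api_key|credential)' /tmp/session_state.txt \
  > /tmp/checkpoint.txt

cat /tmp/checkpoint.txt
```

A deny-list like this is a coarse safeguard; an allow-list of known-safe keys (file paths, ports, error codes, config keys, per the skill's own guidance) is stricter when the set of operational values is known in advance.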
