Astrai Inference Router
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
Static analysis detected no suspicious patterns. This is an artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals; ClawScan does not execute the skill or run runtime probes.

Findings (4)
Finding 1: Provider API keys are forwarded wholesale.
Astrai receives provider keys for services such as Anthropic, OpenAI, and Google; mishandling or compromise could enable unauthorized usage or charges on those provider accounts. The plugin collects provider API keys from environment variables and sends the full collected key set to the Astrai gateway on intercepted requests.

Evidence: self.provider_keys = _collect_provider_keys() ... headers["X-Astrai-Provider-Keys"] = json.dumps(self.provider_keys)

Recommendation: Install only if you intentionally want to delegate provider credentials to Astrai. Use restricted or dedicated keys, set provider-side spend caps, and prefer an implementation that sends only the specific key needed, with explicit disclosure.
Finding 2: Claimed local PII stripping is not implemented.
Prompts may be sent to Astrai and downstream providers without the local PII stripping users are told to expect, exposing any sensitive prompt content. The plugin redirects the original LLM request to the Astrai gateway and appears to rely on a privacy-mode header; the supplied code shows no local redaction before the payload leaves the machine.

Evidence: request_kwargs["base_url"] = ASTRAI_BASE_URL ... headers["X-Astrai-Privacy"] = self.privacy_mode ... return payload

Recommendation: Avoid sending sensitive prompts unless the maintainer provides and verifies local redaction code and clear data-retention and provider-boundary guarantees.
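For contrast, a minimal sketch of what local redaction before dispatch could look like, assuming a chat-style payload with a `messages` list. The patterns and function names are illustrative only and are not part of the skill; a real redactor would need far broader coverage.

```python
import re

# Illustrative patterns only: a serious redactor covers many more
# PII categories (names, addresses, account numbers, ...).
_PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def redact(text):
    """Replace recognizable PII substrings with placeholders."""
    for pattern, placeholder in _PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def redact_payload(payload):
    """Strip PII from every message before the payload leaves the machine."""
    payload = dict(payload)
    payload["messages"] = [
        {**m, "content": redact(m["content"])} for m in payload["messages"]
    ]
    return payload
```

The key property is that redaction runs on the local payload before any network call, so the gateway never sees the original content; a header like X-Astrai-Privacy cannot provide that guarantee by itself.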
Finding 3: Documentation overstates privacy and credential safety.
A user could overtrust the skill's privacy and credential-safety claims and enable it for sensitive prompts or valuable provider accounts without understanding the actual data flow. The strong assurances quoted below are not matched by the supplied code, which forwards provider keys and implements no visible local PII stripping.

Claimed: "PII stripping runs locally before any data leaves your machine (enhanced/max modes)" ... "No credentials are stored by the skill — only your API key in environment variables"

Recommendation: Require the documentation to state explicitly that provider keys and prompts are sent to Astrai, and require the code to implement, or remove, the claimed local PII stripping.
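The "only the specific key needed" remediation can be sketched as resolving a single key for the provider a request actually targets, instead of serializing the whole set. The model-prefix mapping, header name, and environment-variable names here are all hypothetical.

```python
import os

# Hypothetical mapping from a requested model's name prefix to the one
# environment variable whose key that request actually needs.
MODEL_KEY_VARS = {
    "claude": "ANTHROPIC_API_KEY",
    "gpt": "OPENAI_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

def key_for_model(model, env=os.environ):
    """Return only the key required by this model's provider, or None."""
    for prefix, var in MODEL_KEY_VARS.items():
        if model.startswith(prefix):
            return env.get(var)
    return None

def build_headers(model, env=os.environ):
    key = key_for_model(model, env)
    if key is None:
        raise KeyError(f"no provider key configured for model {model!r}")
    # A single scoped key replaces the serialized full key set.
    return {"X-Astrai-Provider-Key": key}
```

Under this design, compromising one routed request exposes at most one provider account, and the narrower header is easier to disclose accurately in documentation.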
Finding 4: All model requests are rerouted through Astrai.
All model requests may go through Astrai instead of directly to the original model provider, affecting privacy, reliability, provider choice, and cost tracking. Broad rerouting of all LLM traffic is the advertised purpose and is not hidden, but it is still high-impact behavior users should notice before enabling the skill.

Evidence: "Done — all LLM calls now route through Astrai"

Recommendation: Enable only if you want Astrai to mediate all LLM calls, and monitor costs, provider usage, and privacy settings after installation.
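Monitoring provider usage after installation can start with a simple per-provider tally of routed calls and token volume. Everything below is a hypothetical sketch to illustrate the recommendation, not part of the skill.

```python
from collections import Counter

class RoutedCallLog:
    """Tally calls and token volume per provider routed through the gateway."""

    def __init__(self):
        self.calls = Counter()
        self.tokens = Counter()

    def record(self, provider, prompt_tokens, completion_tokens):
        # Record one routed request and its total token usage.
        self.calls[provider] += 1
        self.tokens[provider] += prompt_tokens + completion_tokens

    def summary(self):
        # Per-provider totals, suitable for comparing against the
        # provider's own billing dashboard.
        return {
            p: {"calls": self.calls[p], "tokens": self.tokens[p]}
            for p in self.calls
        }
```

Comparing such a local tally against each provider's billing dashboard makes it easier to spot unexpected usage on accounts whose keys have been delegated to the gateway.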
