Astrai Inference Router

AdvisoryAudited by Static analysis on Apr 30, 2026.

Overview

No suspicious patterns detected.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Astrai receives provider keys for services such as Anthropic, OpenAI, Google, and others; mishandling or compromise could enable unauthorized usage or charges on those provider accounts.

Why it was flagged

The plugin collects provider API keys from environment variables and sends the full collected key set to the Astrai gateway on intercepted requests.

Skill content
self.provider_keys = _collect_provider_keys() ... headers["X-Astrai-Provider-Keys"] = json.dumps(self.provider_keys)
Recommendation

Install only if you intentionally want to delegate provider credentials to Astrai. Use restricted or dedicated keys, provider-side spend caps, and prefer an implementation that sends only the specific key needed with explicit disclosure.

What this means

Prompts may be sent to Astrai and downstream providers without the local PII stripping that users are told to expect, exposing sensitive prompt content if included.

Why it was flagged

The plugin redirects the original LLM request to the Astrai gateway and appears to rely on a privacy-mode header; the supplied code does not show local redaction before the payload leaves the machine.

Skill content
request_kwargs["base_url"] = ASTRAI_BASE_URL ... headers["X-Astrai-Privacy"] = self.privacy_mode ... return payload
Recommendation

Avoid sending sensitive prompts unless the maintainer provides and verifies local redaction code and clear data-retention/provider-boundary guarantees.

What this means

A user could overtrust the skill’s privacy and credential-safety claims and enable it for sensitive prompts or valuable provider accounts without understanding the actual data flow.

Why it was flagged

These strong privacy and credential assurances are not matched by the supplied code, which forwards provider keys and does not implement visible local PII stripping.

Skill content
PII stripping runs locally before any data leaves your machine (enhanced/max modes) ... No credentials are stored by the skill — only your API key in environment variables
Recommendation

Require the documentation to explicitly state that provider keys and prompts are sent to Astrai, and require the code to implement or remove the claimed local PII stripping.

What this means

All model requests may go through Astrai instead of directly to the original model provider, affecting privacy, reliability, provider choice, and cost tracking.

Why it was flagged

Broad rerouting of all LLM traffic is the advertised purpose and is not hidden, but it is still a high-impact behavior users should notice before enabling the skill.

Skill content
Done — all LLM calls now route through Astrai
Recommendation

Enable only if you want Astrai to mediate all LLM calls, and monitor costs, provider usage, and privacy settings after installation.