Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Src

v1.0.0

Configure the OpenAI Codex CLI to use Vertex AI Gemini models via LiteLLM. A guide to translating Codex's strict requests into a Gemini-compatible format.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for bhrum/litellm-vertex-codex.

Prompt preview (Install & Setup):
Install the skill "Src" (bhrum/litellm-vertex-codex) from ClawHub.
Skill page: https://clawhub.ai/bhrum/litellm-vertex-codex
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install litellm-vertex-codex

ClawHub CLI


npx clawhub@latest install litellm-vertex-codex

Security Scan

Capability signals: Requires sensitive credentials. These labels describe what authority the skill may exercise; they are separate from suspicious or malicious moderation verdicts.

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability

The skill's stated purpose (configure Codex to talk to Vertex Gemini via LiteLLM) is coherent with the runtime instructions. However, the registry metadata and requirements fields claim that no env vars or credentials are required, while the SKILL.md clearly requires GCP/Vertex authentication, a GCP project ID, a running litellm instance, and OPENAI_API_KEY set in the shell. That omission is an inconsistency (unexpected undeclared requirements).

Instruction Scope

The SKILL.md's runtime instructions are focused on configuring LiteLLM and Codex and verifying the flow. They instruct creating/editing /app/config.yaml, ~/.codex/config.toml, and updating shell profiles (~/.bashrc or ~/.zshrc). Those file changes and the instruction to run a local proxy and set environment variables are within the stated task, but they do involve modifying user config files and require credentials (Vertex ADC) that are not declared in the skill metadata. The instructions also assume the user will run a locally listening service that will forward content to Vertex — which has privacy and auth implications that are not surfaced in the metadata.

Install Mechanism

This is an instruction-only skill with no install spec and no bundled binaries or downloads. That reduces installer risk: nothing is written or fetched by the skill itself. The SKILL.md does assume external tools (codex CLI, litellm) are installed by the user, but the skill does not attempt to install them.

Credentials

The SKILL.md requires environment/config-level credentials (GCP Project ID, Vertex AI auth via ADC, and the user to set OPENAI_API_KEY in their shell) but the skill metadata lists no required env vars or primary credential. Requiring cloud credentials and modifying shell config is proportionate to connecting to Vertex AI, but failing to declare them in the metadata is an incoherence and a security/visibility problem: users may not realize the skill needs access to sensitive credentials or that data will be proxied to Google.

Persistence & Privilege

The skill does not set always:true and uses the platform default of being user-invocable and allowed to run autonomously. It does instruct persistent changes to user config files (~/.codex/config.toml and shell profiles), which is expected for this setup, but it does not request elevated system privileges or attempt to modify other skills or global agent settings.

Scan Findings in Context

[no_code_files] expected: The regex scanner had nothing to analyze because the skill is instruction-only (SKILL.md). That's expected for a configuration guide, but it means the SKILL.md is the primary security surface to review.

What to consider before installing

This skill appears to do what it says (configure Codex → Gemini via LiteLLM), but the metadata omits key requirements. Before installing or following it:

  1. Back up ~/.codex/config.toml and your shell profile.
  2. Verify you have legitimate copies of the codex CLI and litellm, installed from trusted sources.
  3. Understand that you must provide GCP credentials (Application Default Credentials) and a GCP project; these credentials will be used to call Vertex AI, and your prompts will be sent to Google's APIs via the local proxy.
  4. The SKILL.md asks you to set OPENAI_API_KEY in your shell (it suggests a dummy value); avoid reusing a real OpenAI key unless you intend to.
  5. If you need higher assurance, ask the skill author to update the registry metadata to declare the required env vars (OPENAI_API_KEY, GCP_PROJECT/ADC) and to describe precisely what data is proxied to Vertex, so you can perform a risk assessment.

If you cannot verify the provenance of litellm or the guidance author, proceed cautiously.
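
For the backup step, a minimal sketch (assumes bash; use ~/.zshrc instead if you run zsh):

# Copy the files this guide will modify before changing them
cp ~/.codex/config.toml ~/.codex/config.toml.bak
cp ~/.bashrc ~/.bashrc.bak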

Like a lobster shell, security has layers — review code before you run it.

v1.0.0 · MIT-0

LiteLLM to Vertex AI Setup for Codex

This skill describes how to configure the OpenAI Codex CLI agent to communicate with Google's Vertex AI Gemini models using LiteLLM as a protocol translation proxy.

Codex requires a strict OpenAI response format and specific roles (user, assistant/model), which the native Gemini API and lightweight proxies (such as CLIProxyAPI) do not fully support. LiteLLM is needed to strip unsupported parameters and reformat requests so that Gemini accepts them.

Prerequisites

  • codex CLI installed (npm install -g @openai/codex)
  • litellm installed and running locally
  • Google Cloud Platform (GCP) Project ID with Vertex AI API enabled
  • Vertex AI authentication configured (e.g., Application Default Credentials)
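
If any prerequisites are missing, the commands below are one way to satisfy them (a sketch assuming you use npm, pip, and the gcloud CLI; your-gcp-project-id is a placeholder):

# Install the two tools this guide relies on
npm install -g @openai/codex
pip install 'litellm[proxy]'

# Enable the Vertex AI API and set up Application Default Credentials
gcloud services enable aiplatform.googleapis.com --project your-gcp-project-id
gcloud auth application-default login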

1. LiteLLM Configuration

You need to create a config.yaml for LiteLLM that drops complex parameters and sets content to simple strings, then routes a codex model alias to your Gemini Vertex endpoint.

Create or update your config.yaml (e.g., /app/config.yaml):

litellm_settings:
  drop_params: true
  set_content_to_str: true # Crucial for Codex: forces complex system prompts into simple strings

model_list:
  - model_name: gemini-3.1-pro-preview
    litellm_params:
      model: vertex_ai/gemini-3.1-pro-preview
      vertex_project: your-gcp-project-id
      vertex_location: global
      drop_params: true
      
  # Create aliases for Codex to use
  - model_name: codex
    litellm_params:
      model: vertex_ai/gemini-3.1-pro-preview
      vertex_project: your-gcp-project-id
      vertex_location: global
      drop_params: true

Run LiteLLM with this config:

litellm --config /app/config.yaml --port 4000
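
To confirm the proxy is up and the aliases are registered, you can query LiteLLM's OpenAI-compatible model-listing endpoint (a quick sanity check; the port and key match the examples in this guide):

curl http://127.0.0.1:4000/v1/models -H "Authorization: Bearer sk-litellm"

Both gemini-3.1-pro-preview and codex should appear in the returned model list.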

2. Codex Configuration

Codex stores its configuration in ~/.codex/config.toml. You must configure it to point to your local LiteLLM instance and specifically request the responses wire API, as Codex has deprecated the chat wire API.

Update ~/.codex/config.toml:

# Use the custom LiteLLM provider
model_provider = "litellm"
# The model name here MUST match the `model_name` in your LiteLLM config
model = "gemini-3.1-pro-preview" 
model_reasoning_effort = "high"

[model_providers.litellm]
name = "litellm"
# Point to your local LiteLLM instance
base_url = "http://127.0.0.1:4000/v1"
# Crucial: Codex will error out if this is set to "chat"
wire_api = "responses" 

[projects."/path/to/your/workspace"]
trust_level = "trusted"

3. Shell Environment

Codex expects OPENAI_API_KEY to be set, even when using a custom proxy that doesn't require an actual OpenAI key.

Add this to your shell profile (~/.bashrc or ~/.zshrc):

export OPENAI_API_KEY="sk-litellm"
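
Then reload the profile so the variable is available in your current session:

source ~/.bashrc   # or: source ~/.zshrc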

4. Verification

To verify the setup is working, run Codex in a temporary git repository (Codex refuses to run outside a git repo):

cd $(mktemp -d) && git init && codex exec 'hello'

If successful, Gemini will respond via the Codex CLI interface.
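
If the end-to-end test fails, it helps to isolate the proxy from Codex by calling LiteLLM's chat endpoint directly (a sketch using the alias and dummy key from this guide):

curl http://127.0.0.1:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-litellm" \
  -d '{"model": "codex", "messages": [{"role": "user", "content": "hello"}]}'

A JSON completion here combined with a failing codex exec points at the Codex side (wire_api or base_url in ~/.codex/config.toml) rather than at LiteLLM or Vertex.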

Troubleshooting

  • wire_api = "chat" is no longer supported: Ensure wire_api = "responses" is set in ~/.codex/config.toml.
  • "Please use a valid role: user, model.": Your proxy isn't correctly translating the OpenAI assistant or system roles into Gemini's expected format. Ensure you are using LiteLLM (not CLIProxyAPI) and that set_content_to_str: true is enabled in the LiteLLM config.
  • Hanging Commands / No Output: Ensure pty=true is used if calling Codex programmatically via Hermes, or that you are running it in an interactive terminal.
