v1.0.3

Google Vertex AI

Review

ClawScan verdict for this skill. Analyzed Apr 30, 2026, 2:58 PM.

Analysis

The skill matches its Google Vertex AI purpose, but it delegates broad cloud access through Membrane, installs an unpinned global CLI, and lacks clear approval and scope guardrails.

Guidance

Install only if you trust Membrane and need Vertex AI automation. Before using it, pin and verify the CLI version, authenticate with the least-privileged Google account or project possible, avoid sending secrets, and require explicit confirmation before any action that creates, cancels, deploys, deletes, or proxies requests to Vertex AI.

Findings (9)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Agent Goal Hijack
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
`clientAction.agentInstructions` (optional) — instructions for the AI agent on how to proceed programmatically.

The skill allows instructions returned from the Membrane connection flow to guide the agent programmatically, but does not state that those remote instructions must be validated against the user’s goal or treated as untrusted.

User impact: Remote connection output could steer the agent toward actions the user did not explicitly request.
Recommendation: Only follow external agent instructions when they directly support the user’s current request, and ask the user before any sensitive or state-changing action.
Tool Misuse and Exploitation
Severity: High · Confidence: High · Status: Concern
SKILL.md
`membrane action run <actionId> --connectionId=CONNECTION_ID --json`

The skill enables running Membrane actions against a Google Vertex AI connection, including listed state-changing actions such as creating or canceling tuning jobs, but does not define approval or safety limits.

User impact: A mistaken or overbroad action could modify Google Cloud AI resources, cancel work, create jobs, or incur costs.
Recommendation: Require explicit user confirmation for create, update, cancel, deploy, delete, proxy, or other cost-impacting actions, and constrain actions to a named project and location.
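The recommendation above can be sketched as a small shell gate that classifies action IDs before they reach `membrane action run`. This is illustrative only: the prefix list is an assumption based on the actions named in SKILL.md, and `run_with_confirmation` is a hypothetical helper, not part of the Membrane CLI.

```shell
# Classify a Membrane action ID as state-changing (exit 0) or not (exit 1).
# The prefix list is an assumption; extend it to match the actions you see.
is_state_changing() {
  case "$1" in
    create-*|update-*|cancel-*|deploy-*|delete-*|proxy-*) return 0 ;;
    *) return 1 ;;
  esac
}

# Only run an action after an explicit interactive confirmation.
run_with_confirmation() {
  if is_state_changing "$1"; then
    printf 'Run state-changing action "%s"? [y/N] ' "$1"
    read -r answer
    [ "$answer" = "y" ] || { echo "Aborted." >&2; return 1; }
  fi
  membrane action run "$1" --connectionId="$CONNECTION_ID" --json
}
```

Read-only actions pass straight through; anything matching a state-changing prefix requires a typed "y" first.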
Agentic Supply Chain Vulnerabilities
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
`npm install -g @membranehq/cli@latest`

The skill instructs installation of a global npm package using the floating @latest tag, so the executed CLI version can change over time and is not pinned by the skill artifacts.

User impact: The local environment may run a different CLI version than the one reviewed, and a compromised or changed package could affect the user’s machine or cloud account.
Recommendation: Install a specific reviewed version of the CLI, verify the package source, and avoid global installation where a local or sandboxed install is sufficient.
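One way to enforce the pinning advice is to refuse floating tags before installing. A minimal sketch follows; the version string is a placeholder, not a reviewed release, and `is_pinned` is a coarse pattern check rather than full semver validation.

```shell
# Accept only an exact-looking version pin, never a tag like "latest".
is_pinned() {
  case "$1" in
    [0-9]*.[0-9]*.[0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

VERSION="1.2.3"   # placeholder: substitute the version you actually reviewed
if is_pinned "$VERSION"; then
  # Print the install command rather than running it, so a human can vet it.
  echo "npm install -g @membranehq/cli@${VERSION}"
else
  echo "refusing to install floating tag: $VERSION" >&2
fi
```

Printing the final command instead of executing it keeps a human in the loop for the actual install.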
Unexpected Code Execution
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
`npx @membranehq/cli connection get <id> --wait --json`

The instructions rely on shell execution of npm/npx tooling even though the registry describes the skill as instruction-only with no install spec.

User impact: Running the skill’s setup commands can execute external package code and alter the local environment.
Recommendation: Review the CLI package before running it, prefer pinned versions, and get user approval before executing npm, npx, or global install commands.
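One hedged way to apply this is a wrapper that refuses to invoke npx until the user has explicitly opted in. The `APPROVED_NPX` variable name and the wrapper itself are assumptions for illustration, not part of the skill or the Membrane CLI.

```shell
# Refuse to execute npx unless the user has explicitly approved it.
run_npx_with_approval() {
  if [ "${APPROVED_NPX:-}" != "yes" ]; then
    echo "npx blocked: review the package first, then set APPROVED_NPX=yes" >&2
    return 1
  fi
  npx "$@"
}
```

After reviewing the package, a user would opt in explicitly, e.g. `APPROVED_NPX=yes run_npx_with_approval @membranehq/cli connection get <id> --wait --json`.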
Cascading Failures
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
Create Tuning Job | create-tuning-job | Create a new tuning job to fine-tune a Gemini model with your custom data.

The skill can initiate cloud ML jobs and lists other Vertex AI resources such as endpoints, models, datasets, and deployed models; a bad action can propagate into cost, deployment, or data impacts.

User impact: One incorrect command could start expensive jobs, affect ML workflows, or disrupt cloud resources beyond the immediate chat session.
Recommendation: Use dry-run or read-only actions where possible, confirm project/location/model names, and require explicit approval before changes that affect cloud resources.
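The project/location confirmation can be sketched as a tiny allowlist guard run before any cloud-touching action. Both values below are placeholders for a user's own sandbox project, not anything defined by the skill.

```shell
# Placeholder scope: substitute your own sandbox project and region.
ALLOWED_PROJECT="my-sandbox-project"
ALLOWED_LOCATION="us-central1"

# Exit 0 only when both the project and location match the allowlist.
in_scope() {
  [ "$1" = "$ALLOWED_PROJECT" ] && [ "$2" = "$ALLOWED_LOCATION" ]
}
```

Calling `in_scope "$project" "$location"` before each action keeps a typo'd project name from reaching production resources.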
Human-Agent Trust Exploitation
Severity: Low · Confidence: Medium · Status: Note
SKILL.md
Membrane handles authentication and credentials refresh automatically — so you can focus on the integration logic rather than auth plumbing.

The wording emphasizes convenience and may cause users to overlook the security significance of delegating and refreshing credentials through Membrane.

User impact: A user may proceed with authentication without fully considering third-party credential handling and persistence.
Recommendation: Make credential delegation, refresh behavior, and revocation steps explicit before asking the user to authenticate.
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
Severity: High · Confidence: High · Status: Concern
SKILL.md
Membrane handles authentication and credentials refresh automatically

The skill relies on delegated authentication handled by Membrane, but the artifacts do not specify OAuth scopes, project restrictions, credential lifetime, or revocation expectations.

User impact: The user may grant broad or persistent access to Google Vertex AI resources through a third-party service without clear least-privilege boundaries.
Recommendation: Use the narrowest possible Google account, project, and scopes; verify the Membrane connection details; and revoke the connection when it is no longer needed.
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Memory and Context Poisoning
Severity: Low · Confidence: Medium · Status: Note
SKILL.md
Embed Content | embed-content | Generate embeddings for text content using Vertex AI embedding models.

The skill exposes embedding functionality that processes user text through Vertex AI; this is purpose-aligned, but embeddings can encode sensitive content.

User impact: Text submitted for embeddings may reveal sensitive information through the external AI processing path.
Recommendation: Avoid embedding secrets, credentials, regulated data, or private user content unless the user has approved that data flow.
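Before passing text to embed-content, a coarse pre-flight scan for obvious credential shapes can catch accidents. The regexes below cover only a few well-known key formats (AWS access key IDs, PEM private-key headers, Google API keys) and are illustrative, not exhaustive.

```shell
# Exit 0 if the text contains an obvious credential-shaped string.
looks_sensitive() {
  printf '%s' "$1" | grep -Eq \
    'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----|AIza[0-9A-Za-z_-]{35}'
}
```

A passing check is no guarantee of safety; it only blocks the most recognizable secrets before they leave the machine.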
Insecure Inter-Agent Communication
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
send requests directly to the Google Vertex AI API through Membrane's proxy

The skill uses Membrane as a proxy/gateway between the agent and Google Vertex AI, but the artifacts do not define data handling boundaries, identity guarantees, or permission limits for proxy requests.

User impact: Prompts, inputs, outputs, and API requests may pass through Membrane and Google services with unclear data exposure boundaries.
Recommendation: Confirm the Membrane tenant and connection, review privacy and retention terms, and avoid sending sensitive data through proxy requests unless necessary.