External AI Integration

Pass. Audited by ClawScan on May 10, 2026.

Overview

The skill is internally consistent with its stated purpose of calling external AI services, but it may operate logged-in AI accounts, use a Hugging Face token, and send prompts to third-party providers.

This skill appears purpose-aligned, not malicious. Install it only if you are comfortable letting the assistant send selected prompts to external AI providers through your logged-in accounts or Hugging Face token. Avoid sending sensitive data unless you have approved that provider, and review or clear any generated memory logs if they contain private details.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1

What this means

The assistant may type prompts and click controls in logged-in AI web apps on the user’s behalf.

Why it was flagged

The skill intentionally uses browser automation to operate external AI websites. This is expected for the purpose, but browser automation should remain user-directed and scoped.

Skill content

Use Chrome Relay to automate interactions with ChatGPT, Claude, Gemini, or any other web-based LLM that requires a browser interface.

Recommendation

Use this only with AI accounts you intend the assistant to access, and avoid allowing external model content to trigger unrelated browser actions.

Finding 2

What this means

Prompts may be submitted using the user’s ChatGPT, Claude, Gemini, or similar account, potentially consuming quota or creating account history.

Why it was flagged

The skill relies on existing logged-in browser sessions for third-party AI services, meaning actions are performed under the user’s accounts.

Skill content

The target LLM website (e.g., `chatgpt.com`, `claude.ai`) already logged in (session cookies present).

Recommendation

Confirm which browser profile is attached and avoid sending private or regulated content unless you are comfortable with that provider receiving it.

Finding 3

What this means

The skill can make Hugging Face API requests using the user’s token, which may affect account usage and billing/quota.

Why it was flagged

The code retrieves a Hugging Face token from 1Password, an environment variable, or a local token file. This is expected for the Hugging Face API integration, but it is credential use.

Skill content

["op", "read", "op://Personal/HuggingFace/api_token"] ... token = os.getenv("HF_TOKEN") ... "~/.huggingface/token"

Recommendation

Use a limited-scope token where possible, keep token storage secure, and do not run the manual tests in shared terminals if token presence should stay private.

Finding 4

What this means

Any text included in the prompt may be shared with the external AI provider, and the provider’s response may influence the assistant’s answer.

Why it was flagged

The skill sends task prompts to external AI providers and brings their responses back into the assistant workflow. This is the core purpose, but it creates a third-party data boundary.

Skill content

Type the prompt into the input field and submit ... Extract the response text ... Return the response to the assistant's workflow.

Recommendation

Ask for confirmation before sending sensitive, proprietary, personal, or regulated information to external AI services, and treat returned content as untrusted until checked.

Finding 5

What this means

Failure details or context passed to the logger may persist across sessions.

Why it was flagged

The skill persistently logs failures to memory files. This behavior appears limited and disclosed, but stored context can be reused in later sessions.

Skill content

Log an external AI failure to memory/YYYY-MM-DD.md.

Recommendation

Keep failure logs free of secrets and clear related memory files if they contain sensitive task details.
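To make the persistence concrete, a dated append-only log of the kind the finding describes could look roughly like this; the helper name and entry format are assumptions, since the skill's actual logging code is not shown:

```python
import datetime
from pathlib import Path

def log_external_ai_failure(message: str, memory_dir: str = "memory") -> Path:
    """Append a failure note to memory/YYYY-MM-DD.md.
    Callers should redact secrets before passing `message`,
    because these files persist across sessions."""
    log_dir = Path(memory_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / f"{datetime.date.today():%Y-%m-%d}.md"
    with log_file.open("a", encoding="utf-8") as fh:
        fh.write(f"- external AI failure: {message}\n")
    return log_file
```

Since entries accumulate across runs, deleting a dated file removes that day's stored context, which is the cleanup the recommendation above suggests.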