External AI Integration
Pass
Audited by ClawScan on May 10, 2026.
Overview
The skill is coherent for its purpose of calling external AI services, but it may use logged-in AI accounts or Hugging Face tokens and sends prompts to third-party providers.
This skill appears purpose-aligned, not malicious. Install it only if you are comfortable letting the assistant send selected prompts to external AI providers through your logged-in accounts or Hugging Face token. Avoid sending sensitive data unless you have approved that provider, and review or clear any generated memory logs if they contain private details.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The assistant may type prompts and click controls in logged-in AI web apps on the user’s behalf.
The skill intentionally uses browser automation to operate external AI websites. This is expected for the purpose, but browser automation should remain user-directed and scoped.
Use Chrome Relay to automate interactions with ChatGPT, Claude, Gemini, or any other web‑based LLM that requires a browser interface.
Use this only with AI accounts you intend the assistant to access, and avoid allowing external model content to trigger unrelated browser actions.
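The "user-directed and scoped" advice above can be enforced with a simple domain allowlist before any browser action fires. This is a hypothetical sketch, not code from the skill; the host names are illustrative examples of AI sites a user might approve.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of AI sites the user has approved for automation.
ALLOWED_HOSTS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def is_in_scope(url: str) -> bool:
    """Return True only if the URL targets an approved AI host
    (or a subdomain of one), keeping automation scoped."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS or any(
        host.endswith("." + allowed) for allowed in ALLOWED_HOSTS
    )
```

Checking every navigation and click target against such a list limits the blast radius if external model content tries to steer the browser toward unrelated sites.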
Prompts may be submitted using the user’s ChatGPT, Claude, Gemini, or similar account, potentially consuming quota or creating account history.
The skill relies on existing logged-in browser sessions for third-party AI services, meaning actions are performed under the user’s accounts.
The target LLM website (e.g., `chatgpt.com`, `claude.ai`) must already be logged in (session cookies present).
Confirm which browser profile is attached and avoid sending private or regulated content unless you are comfortable with that provider receiving it.
The skill can make Hugging Face API requests using the user’s token, which may affect account usage and billing/quota.
The code retrieves a Hugging Face token from 1Password, an environment variable, or a local token file. This is expected for the Hugging Face API integration, but it is credential use.
["op", "read", "op://Personal/HuggingFace/api_token"] ... token = os.getenv("HF_TOKEN") ... "~/.huggingface/token"Use a limited-scope token where possible, keep token storage secure, and do not run the manual tests in shared terminals if token presence should stay private.
Any text included in the prompt may be shared with the external AI provider, and the provider’s response may influence the assistant’s answer.
The skill sends task prompts to external AI providers and brings their responses back into the assistant workflow. This is the core purpose, but it creates a third-party data boundary.
Type the prompt into the input field and submit ... Extract the response text ... Return the response to the assistant's workflow.
Ask for confirmation before sending sensitive, proprietary, personal, or regulated information to external AI services, and treat returned content as untrusted until checked.
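The confirmation step above can be front-loaded with a cheap heuristic screen before the prompt crosses the third-party boundary. This is an illustrative sketch, not the skill's logic; the patterns are examples, not an exhaustive data-loss-prevention check.

```python
import re

# Illustrative patterns for content that should trigger a confirmation
# prompt before being sent to an external AI provider.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-like number
    re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),  # credential keywords
    re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),                # HF-token-shaped string
]

def needs_confirmation(prompt: str) -> bool:
    """Return True if the prompt looks sensitive and the user should
    confirm before it is sent to an external provider."""
    return any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)
```

A screen like this only catches obvious cases; the user should still treat the external provider as a data boundary and the returned content as untrusted until checked.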
Failure details or context passed to the logger may persist across sessions.
The skill includes persistent failure logging into memory files. This appears limited and disclosed, but stored context can be reused later.
Log an external AI failure to memory/YYYY‑MM‑DD.md.
Keep failure logs free of secrets and clear related memory files if they contain sensitive task details.
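The disclosed logging behavior, a dated memory file per day, can be sketched as follows. The function name and entry format are assumptions; the `memory/YYYY-MM-DD.md` path mirrors the finding. Note the entry records only the provider and error, not the prompt, in line with the advice to keep logs free of secrets.

```python
import datetime
from pathlib import Path

def log_failure(provider: str, error: str, memory_dir: str = "memory") -> Path:
    """Append an external-AI failure entry to memory/YYYY-MM-DD.md.
    Logs provider and error only; prompt contents are deliberately omitted."""
    day = datetime.date.today().isoformat()
    log_path = Path(memory_dir) / f"{day}.md"
    log_path.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with log_path.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {provider}: {error}\n")
    return log_path
```

Because these files persist across sessions, periodically reviewing and clearing the `memory/` directory is the practical counterpart to the advice above.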
