Upstage Studio
Advisory. Audited by static analysis on May 6, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Anyone using the skill must provide an API key that can upload documents and perform actions allowed by that Upstage account.
The skill needs an Upstage API key to act in the user's Upstage account. This is expected for the integration, but the key carries sensitive account authority, and the registry metadata does not declare a required credential.
**API Key**: Always use `os.environ["UPSTAGE_API_KEY"]`. Get your key at [console.upstage.ai/api-keys](https://console.upstage.ai/api-keys)
Use a least-privileged Upstage key if available, keep it in an environment variable, do not paste it into chats or files, and revoke or rotate it if exposed.
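The key-handling guidance above can be sketched in Python. The helper name is illustrative, but the `UPSTAGE_API_KEY` variable and the console URL come from the skill's own instructions:

```python
import os

def get_upstage_api_key() -> str:
    """Read the Upstage API key from the environment, never from source or chat.

    Fails fast with a clear error instead of sending an unauthenticated request.
    """
    key = os.environ.get("UPSTAGE_API_KEY")
    if not key:
        raise RuntimeError(
            "UPSTAGE_API_KEY is not set; create one at "
            "https://console.upstage.ai/api-keys and export it."
        )
    return key
```

Keeping the lookup in one place also makes rotation easy: revoking the old key and exporting a new one requires no code change.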
If invoked carelessly, the agent could publish a workflow to the public library or remove Upstage resources from the user's account.
The API reference includes actions that can publish an agent publicly or delete account resources. These are purpose-adjacent management functions, but they are higher-impact than simply running a document job.
PUT /v2/agents/{agent_id}/visibility — Publish / Unpublish Agent ... `visibility`: `"public"` ... DELETE /v2/agents/{agent_id} — Delete Agent

Require explicit user confirmation before publish, unpublish, delete, clone-with-jobs, or other account-mutating actions; prefer private visibility unless the user clearly asks to publish.
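A minimal confirmation gate for these account-mutating actions might look like the following Python sketch. The action names come from the recommendation above, while the function and its wiring are assumptions, not the skill's actual code:

```python
# Actions that mutate the user's Upstage account and therefore need
# an explicit, affirmative user confirmation before they run.
MUTATING_ACTIONS = {"publish", "unpublish", "delete", "clone-with-jobs"}

def guard_mutation(action: str, agent_id: str, confirmed: bool) -> str:
    """Refuse publish/unpublish/delete-style calls unless the user confirmed."""
    if action in MUTATING_ACTIONS and not confirmed:
        raise PermissionError(
            f"Refusing '{action}' on agent {agent_id}: "
            "explicit user confirmation required."
        )
    return f"{action} allowed for {agent_id}"
```

Placing the gate in front of the HTTP client, rather than inside each handler, keeps a single choke point for every account-mutating endpoint.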
Sensitive document contents or extracted results may remain available through provider-side caching or retention for a period of time.
Job results can be cached and reused by the provider. That is disclosed and useful for performance, but it means document-derived data may persist beyond the immediate run.
**Caching:** Identical file combination + identical step settings → reuses previous results (7-day TTL).
Avoid uploading documents that are not allowed to leave your environment, configure expiry where possible, and delete uploaded files or jobs after use when handling sensitive material.
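Upstage does not publish its cache-key scheme; this Python sketch only illustrates the disclosed behavior (identical files plus identical step settings reuse a stored result within the 7-day TTL), and every detail of the key construction is an assumption:

```python
import hashlib
import json

def cache_key(file_hashes: list[str], step_settings: dict) -> str:
    """Illustrative cache key: the same files with the same step settings
    produce the same key, so a rerun can reuse the stored result."""
    payload = json.dumps(
        {"files": sorted(file_hashes), "settings": step_settings},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```

The practical consequence for sensitive material: changing any step setting yields a fresh key, but resubmitting the same files with the same settings may never touch the provider's parser at all and instead serve the retained result.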
Outputs from untrusted documents could be biased or manipulated by instructions embedded in the document text.
When arbitrary documents are parsed and then passed into an LLM instruction step, malicious or misleading text inside the document could influence the generated answer.
`instruct` — Free-form Instructions ... Automatically uses previous Step results as context ... `document-parse` → `instruct`: Parsed document text passed as context
Treat document contents as untrusted input, review instruct-step outputs before acting on them, and avoid connecting those outputs directly to sensitive actions without human review.
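One way to route suspicious `instruct` inputs to human review is a heuristic screen like this Python sketch. The patterns are illustrative assumptions, and a match only means "have a person look"; this is not a real defense against prompt injection:

```python
import re

# Heuristic red flags for instruction-like text embedded in a document.
# A match routes the job to human review; no match proves nothing.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def needs_review(parsed_text: str) -> bool:
    """Flag parsed document text that looks like embedded instructions."""
    lowered = parsed_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

Even with a screen like this, the safer pattern is structural: keep instruct-step outputs out of sensitive actions entirely unless a human has approved them.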
