suspicious.exposed_secret_literal
- Location
- tests/evals.json:6
- Finding
- File appears to expose a hardcoded API secret or token.
Advisory
Audited by static analysis on May 10, 2026.
Detected: suspicious.exposed_secret_literal
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the setup can modify your Auth0 tenant and create or update local credential files.
The automated setup uses the user's Auth0 account to select or create an application and appends Auth0 configuration to an environment file.
```sh
auth0 login
...
auth0 apps create --name "${PWD##*/}-flask" --type regular
...
AUTH0_CLIENT_SECRET='YOUR_CLIENT_SECRET'
```
Use a test tenant or app when possible, confirm the target env file, fill in the client secret yourself, and keep env files out of source control.
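A minimal precaution sketch for the setup step above. It assumes the setup appends to `./.env` in the project root (the actual target may differ) and that you run the skill's setup at the marked point:

```sh
# Sketch: contain the automated setup's side effects (assumes the env file is ./.env)
set -euo pipefail

# Snapshot the env file so you can diff what the setup appended
cp .env .env.before 2>/dev/null || touch .env.before

# ... run the skill's automated setup here ...

# Review exactly what was added before trusting it
diff .env.before .env || true

# Make sure the env file never reaches source control
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```

The diff step makes any appended `AUTH0_*` lines explicit instead of letting them land silently in an existing file.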
A credential-shaped Auth0 client secret is packaged in an eval fixture. It appears synthetic, but it is still a credential-shaped value in the artifact, and if the value were real it would expose an Auth0 application secret.
"Client Secret: `secret_xyz789abc123def456ghi789jkl012mno345pqr678stu901vwx234yz`"
Keep test credentials as clearly synthetic placeholders, and rotate the credential immediately if this value corresponds to a real Auth0 app.
If you run that setup path, you execute a remote installer on your machine.
The optional setup can download and execute the latest Auth0 CLI installer from GitHub rather than a pinned local dependency.
```sh
curl -sSfL https://raw.githubusercontent.com/auth0/auth0-cli/main/install.sh | sh -s -- -b /usr/local/bin
```
Prefer a trusted package manager such as Homebrew where available, or inspect and verify the installer before running it.
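If you must use the remote installer, a safer pattern is download, inspect, then execute, so the bytes you reviewed are the bytes you run. A sketch of that pattern, using a harmless local script as a stand-in for the real download (a real run would first fetch the installer with `curl -sSfL -o /tmp/install.sh <url>`):

```sh
# Sketch of the download-inspect-execute pattern (local stand-in for the download)
set -euo pipefail

# Stand-in for: curl -sSfL -o /tmp/install.sh <installer url>
printf 'echo "installer ran"\n' > /tmp/install.sh

# Record and verify a checksum so the reviewed bytes are the executed bytes
sha256sum /tmp/install.sh > /tmp/install.sh.sha256
sha256sum -c /tmp/install.sh.sha256

# Read the script (e.g. with `less /tmp/install.sh`) before this step
sh /tmp/install.sh
```

This keeps a human review step between the network and your shell, which `curl | sh` removes entirely.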
If someone runs the eval harness in a workspace containing real secrets, those secrets could be included in an LLM grading prompt.
The eval runner treats .env files as source files and includes collected source excerpts in the prompts it sends to a Claude CLI judge.
```js
".env", ...
const judgePrompt = `You are evaluating code quality...${fileSummary}`
...
$`claude ${judgeArgs}`
```
Run evals only in disposable workspaces with dummy credentials, and exclude env files before sending code to external judging tools.
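One way to enforce that advice mechanically is to copy the project into a throwaway directory with env files stripped before any eval run. A sketch using `tar --exclude` (all paths here are illustrative):

```sh
# Sketch: build a throwaway eval workspace with env files excluded (paths are illustrative)
set -euo pipefail
SRC=/tmp/demo-src
WORK=/tmp/demo-eval-ws
mkdir -p "$SRC" "$WORK"

# Illustrative source tree containing a real-looking secret
echo 'AUTH0_CLIENT_SECRET=real-secret' > "$SRC/.env"
echo 'console.log("app")'              > "$SRC/app.js"

# Copy everything except env files into the eval workspace
tar -C "$SRC" --exclude='.env' --exclude='.env.*' -cf - . | tar -xf - -C "$WORK"

ls -A "$WORK"
```

Pointing the eval runner at the copy rather than the live workspace guarantees no env-file contents can appear in the judge prompt, regardless of how the runner collects sources.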