Product Opportunity Research

v1.0.0

Use when conducting deep product opportunity research or feature prioritization. 6 specialized agents (User JTBD, Workflow, Tech Feasibility, System Integrat...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (deep product opportunity research using six specialized agents) align with the provided artifacts: SKILL.md plus six reference docs that define agent roles, phases, scoring, and outputs. There are no unexplained required binaries, environment variables, or config paths.
Instruction Scope
SKILL.md contains detailed, self‑contained instructions for running the six agents and producing deliverables. It does not instruct the agent to read unrelated system files, read environment variables, or contact external endpoints beyond the workflow described. All file references point to included reference docs.
Install Mechanism
There is no install spec and there are no code files: this is an instruction‑only skill, which minimizes disk writes and the risk of arbitrary code execution.
Credentials
The skill declares no required environment variables, credentials, or config paths. There is no disproportionate or unrelated secret access requested.
Persistence & Privilege
Flags show default behavior (always: false, model invocation allowed). Autonomous invocation is allowed by platform default (not a red flag here), and the skill does not request permanent presence or modify other skills. If you are concerned about autonomous runs, you can restrict agent permissions or disable autonomous invocation in your agent settings.
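The flags reported above live in the skill's manifest. As a minimal sketch of what a more restrictive SKILL.md frontmatter might look like, the fragment below is hypothetical: the field names beyond those reported by the scan are assumptions, not a verified platform schema.

```yaml
# Hypothetical SKILL.md frontmatter (illustrative only).
# "always: false" is the flag reported by the scan; the remaining
# field names are assumptions, not a confirmed schema.
name: product-opportunity-research
version: 1.0.0
license: MIT-0
always: false   # do not load the skill on every run
# If the platform exposes a switch for autonomous model invocation,
# disabling it here would force explicit user approval per run (assumption):
# allow-model-invocation: false
```

Whether such a switch exists depends on the host platform; the safer, verified route is the one the report suggests: restrict agent permissions or disable autonomous invocation in your agent settings.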
Assessment
This skill appears coherent and low‑risk: it is an instruction‑only multi‑agent framework with no installs, no required credentials, and internal reference files. Before installing:

1. Consider provenance: the package has no homepage or author/contact details (owner ID only), so verify that you trust the publisher.
2. Avoid feeding sensitive or proprietary data into the skill unless you trust its origin and retention policy.
3. Run initial tests on public or synthetic prompts to validate outputs and check for hallucinations or unexpected behavior.
4. If you prefer tighter control, disable autonomous invocation for agents or require explicit user approval before each run.

If you need deeper assurance, ask the publisher for a README, a license file, or a verifiable homepage and source repository.


Latest version: vk977krb5xd18dn9xab214mb3gd834kav

