Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

atypica-user-interview

v1.0.0

Run AI-simulated user interviews and focus group discussions using atypica.ai's library of human-like personas. Each persona is an AI that behaves like a rea...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for owenrao/atypica-user-interview.

Prompt Preview: Install & Setup
Install the skill "atypica-user-interview" (owenrao/atypica-user-interview) from ClawHub.
Skill page: https://clawhub.ai/owenrao/atypica-user-interview
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install atypica-user-interview

ClawHub CLI


npx clawhub@latest install atypica-user-interview
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's name/description (AI-simulated interviews) matches the included docs, API reference, and helper script which call atypica.ai endpoints. However, the registry metadata declares no required environment variables while the SKILL.md and the provided script clearly require an API token (ATYPICA_TOKEN / atypica_xxx) and an endpoint — this omission is an inconsistency that could mislead users about the skill's secret requirements.
Instruction Scope
SKILL.md instructs editing third-party client config files (e.g., Claude Desktop JSON in user AppData / Library paths) and exporting ATYPICA_TOKEN. It also provides a bash helper that will POST JSON to https://atypica.ai/mcp/universal and may write API responses to files. These steps reference and modify user-level application config and require storing an API token in environment or config files — actions outside a purely ephemeral, read-only skill scope and worth user scrutiny.
Install Mechanism
No install spec is present; the skill is instruction-heavy and ships a small helper script. There is no remote code download or extract step, and the script is plain bash that uses curl/jq. This is lower risk than fetching arbitrary binaries.
Credentials
The skill effectively requires an atypica API token (ATYPICA_TOKEN / 'atypica_xxx') but the registry lists no required env vars or primary credential. Requesting a bearer token that gives access to the service is proportionate to the functionality, but the omission in metadata and the instruction to persist the token (in env or client config JSON) are problematic: users need to know this up front and consider token scope/permissions before use.
Persistence & Privilege
always:false and the skill does not demand platform-level privileges, which is good. However, the runtime docs explicitly tell users to add atypica as an MCP server inside other client config files (e.g., Claude Desktop), which modifies another application's configuration. That cross-application config change increases persistence/privilege beyond a self-contained skill and should be done deliberately by the user.
What to consider before installing
Before installing or using this skill:

  • Expect to create an atypica.ai account and obtain an API key (format 'atypica_xxx'); the skill's metadata does not list this, but the docs and script require it.
  • Prefer creating a dedicated, least-privilege API key (or a throwaway account) rather than reusing a high-privilege token.
  • Be aware the SKILL.md suggests storing the token in an environment variable or adding it to other apps' config files (e.g., Claude Desktop JSON). Storing tokens in plaintext config files grants that app access to your atypica account — review those files and their access permissions.
  • Review atypica.ai's privacy, data retention, and sharing behavior (reports produce public share URLs and signed CDN links). Don't send sensitive data or PII in prompts unless you accept those sharing/retention properties.
  • The included scripts use curl/jq and may write API responses to disk; inspect any output files before sharing.
  • If you decide to proceed, verify the endpoint (https://atypica.ai/mcp/universal) and consider testing with limited data or an account with constrained privileges first.

If you want, I can extract the exact places the SKILL.md instructs you to edit (file paths and JSON snippets) and highlight every location the token would be stored or transmitted.

Like a lobster shell, security has layers — review code before you run it.

87 downloads · 0 stars · 1 version · Updated 1mo ago
v1.0.0
MIT-0

atypica User Interview & Discussion

Run one-on-one interviews or group discussions with AI personas that simulate real users. atypica.ai maintains a library of AI models trained to behave like specific types of real people — each with a name, background story, personality, and authentic opinions. You ask the research question, the AI finds fitting personas, plans the research, conducts the interviews, and produces a synthesized report.

No recruiting. No scheduling. Results in minutes.

What this does

  • Interviews — the AI conducts deep one-on-one conversations with 3–8 AI personas, each responding as a distinct real person would
  • Group discussions — the AI runs a focus group where personas debate and react to each other
  • Report generation — the AI synthesizes everything into a structured research report with key findings

Typical use cases:

  • "How would different age groups react to this pricing model?"
  • "Interview 5 potential customers about their pain points"
  • "Run a focus group on this product concept"
  • "What would Gen Z users think about this feature?"

Prerequisites

IMPORTANT: This skill works in two modes depending on your setup.

Option 1: MCP Server (Recommended for AI assistants)

If tools starting with atypica_universal_ are already available in your environment, you're ready. Otherwise, configure the MCP server:

Configuration parameters:

Example: Claude Desktop — edit the config file at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "atypica-universal": {
      "transport": "http",
      "url": "https://atypica.ai/mcp/universal",
      "headers": {
        "Authorization": "Bearer atypica_xxx"
      }
    }
  }
}

Restart Claude Desktop to load. For other MCP clients, the syntax may differ.

Option 2: Direct Bash Script (Works anywhere)

No MCP setup needed — just curl and jq:

export ATYPICA_TOKEN="atypica_xxx"
scripts/mcp-call.sh atypica_universal_create '{"content":"Interview users about coffee preferences"}'

See scripts/mcp-call.sh for full options (-t, -o, -f, -v, -h).


Quick Start

Here's the full flow from question to report:

// Step 1: Start a session with your research question
const session = await callTool("atypica_universal_create", {
  content: "I want to interview 5 users about their morning coffee routine and spending habits"
});
const userChatToken = session.structuredContent.token;

// Step 2: Kick off the research
await callTool("atypica_universal_send_message", {
  userChatToken,
  message: {
    role: "user",
    lastPart: { type: "text", text: "Run one-on-one interviews" }
  }
});

// Step 3: Poll until the AI finishes (interviews take 1–5 minutes)
let result;
do {
  await wait(30000); // Wait 30 seconds between polls
  result = await callTool("atypica_universal_get_messages", {
    userChatToken,
    tail: 5
  });

  // The AI may pause to ask you to confirm its research plan
  const lastMsg = result.structuredContent.messages.at(-1);
  if (lastMsg?.role === "assistant") {
    const pending = lastMsg.parts.find(p =>
      p.state === "input-available" && p.type.startsWith("tool-")
    );
    if (pending) {
      // Handle the interaction (see "Interactions" section below)
      break;
    }
  }
} while (result.structuredContent.isRunning);

// Step 4: Retrieve the final report
const reportPart = result.structuredContent.messages
  .flatMap(m => m.parts)
  .find(p => p.type === "tool-generateReport" && p.state === "output-available");

if (reportPart?.output?.reportToken) {
  const report = await callTool("atypica_universal_get_report", {
    token: reportPart.output.reportToken
  });
  console.log(report.structuredContent.title);
  console.log(report.structuredContent.shareUrl); // Public shareable link
  console.log(report.structuredContent.content);  // Full HTML report
}
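The snippets in this guide call a small `wait` helper between polls; it is not defined in the source, so here is a minimal sketch (the name `wait` comes from the doc, the implementation is an assumption):

```javascript
// Resolve after `ms` milliseconds — used to pace the polling loops above.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```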

Core Workflow

  1. Create a session with your research question
  2. Send a message instructing the type of research (interview vs. discussion)
  3. Poll get_messages — the AI runs in the background; check isRunning
  4. Handle any interactions the AI pauses for (plan confirmation, clarifying questions)
  5. Retrieve the report once complete

Understanding Personas

Personas are AI models that simulate real people. Each has:

  • A name and background story (e.g., "Emma, 28, UX designer in NYC")
  • Consistent personality traits, opinions, and communication style
  • Domain knowledge and life experience relevant to their profile

The AI automatically selects relevant personas for your topic. You can also search the library:

// Search for personas matching your target users
const results = await callTool("atypica_universal_search_personas", {
  query: "millennial parents concerned about screen time",
  limit: 10
});

// Get a persona's full profile
const persona = await callTool("atypica_universal_get_persona", {
  personaId: results.structuredContent.data[0].personaId
});
console.log(persona.structuredContent.prompt); // Full character description

Research Types

One-on-One Interviews (interviewChat)

The AI interviews each persona separately — deep, focused conversations that surface individual perspectives and nuance.

Best for: Understanding personal motivations, pain points, decision journeys, emotional reactions.

await callTool("atypica_universal_send_message", {
  userChatToken,
  message: {
    role: "user",
    lastPart: {
      type: "text",
      text: "Conduct individual interviews with 5 personas — focus on how they make purchase decisions"
    }
  }
});

Group Discussion (discussionChat)

3–8 personas discuss a topic together, reacting to each other's opinions. More dynamic — surfaces disagreements, consensus, and social dynamics.

Best for: Testing concepts, exploring group norms, understanding debates within a user segment.

await callTool("atypica_universal_send_message", {
  userChatToken,
  message: {
    role: "user",
    lastPart: {
      type: "text",
      text: "Run a focus group with 5 participants to discuss their reactions to this product concept: [describe it]"
    }
  }
});

Let the AI decide

Just describe what you want to learn — the AI will choose the right approach:

const session = await callTool("atypica_universal_create", {
  content: "I want to understand why young professionals churn from fitness apps after 30 days"
});

Interactions

The AI occasionally pauses to ask for your input before proceeding. Check getMessages for parts with state === "input-available".
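That check can be sketched as a small helper (the message shape follows the `get_messages` output used throughout this doc; the helper name is made up):

```javascript
// Find the first tool part awaiting user input in the latest assistant message.
// Returns the pending part, or null if nothing needs a reply.
function findPendingInteraction(messages) {
  const last = messages.at(-1);
  if (!last || last.role !== "assistant") return null;
  return (last.parts ?? []).find(
    (p) => p.state === "input-available" && p.type.startsWith("tool-")
  ) ?? null;
}
```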

Confirm Research Plan (confirmPanelResearchPlan)

The AI presents its plan — which personas it selected, how many interviews, what questions to focus on — and asks for your approval. You can confirm as-is or edit.

Detect:

{
  "type": "tool-confirmPanelResearchPlan",
  "state": "input-available",
  "toolCallId": "call_xyz",
  "input": {
    "question": "Why do users churn from fitness apps?",
    "plan": "# Research Plan\n...",
    "personas": [
      { "id": 1, "name": "Alex, 26, casual gym-goer" },
      { "id": 2, "name": "Maria, 31, busy mom" }
    ]
  }
}

Confirm it (or pass editedPlan / editedQuestion to adjust):

{
  "userChatToken": "...",
  "message": {
    "id": "<original messageId>",
    "role": "assistant",
    "lastPart": {
      "type": "tool-confirmPanelResearchPlan",
      "toolCallId": "call_xyz",
      "state": "output-available",
      "input": { "...copy original input..." },
      "output": {
        "confirmed": true,
        "plainText": "Confirmed — looks good, proceed"
      }
    }
  }
}
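Building that reply programmatically might look like this sketch (field names mirror the JSON above; the helper name is hypothetical):

```javascript
// Construct a confirmation reply for a pending confirmPanelResearchPlan part.
function buildPlanConfirmation(messageId, pendingPart) {
  return {
    id: messageId,
    role: "assistant",
    lastPart: {
      type: pendingPart.type,
      toolCallId: pendingPart.toolCallId,
      state: "output-available",
      input: pendingPart.input, // echo the original input back
      output: {
        confirmed: true,
        plainText: "Confirmed — looks good, proceed",
      },
    },
  };
}
```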

Answer a Question (requestInteraction)

Sometimes the AI asks a clarifying question before proceeding (e.g., "Which age group should I focus on?").

Detect:

{
  "type": "tool-requestInteraction",
  "state": "input-available",
  "toolCallId": "call_abc",
  "input": {
    "question": "Which age group should I prioritize?",
    "options": ["18-24", "25-34", "35-44"],
    "maxSelect": 1
  }
}

Submit your answer:

{
  "userChatToken": "...",
  "message": {
    "id": "<original messageId>",
    "role": "assistant",
    "lastPart": {
      "type": "tool-requestInteraction",
      "toolCallId": "call_abc",
      "state": "output-available",
      "input": { "...copy original input..." },
      "output": {
        "answer": "25-34",
        "plainText": "User selected: 25-34"
      }
    }
  }
}
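A symmetric helper can build the answer reply (again, anything beyond the JSON field names above is an assumption):

```javascript
// Construct an answer reply for a pending requestInteraction part.
function buildInteractionAnswer(messageId, pendingPart, answer) {
  return {
    id: messageId,
    role: "assistant",
    lastPart: {
      type: pendingPart.type,
      toolCallId: pendingPart.toolCallId,
      state: "output-available",
      input: pendingPart.input, // echo the original input back
      output: { answer, plainText: `User selected: ${answer}` },
    },
  };
}
```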

Monitoring Progress

After send_message, the AI works in the background. Monitor via get_messages:

Tool Call You'll See            What's Happening
searchPersonas, buildPersona    Finding the right personas
confirmPanelResearchPlan        Waiting for your plan approval
interviewChat                   Interviewing a persona (runs per persona)
discussionChat                  Running the group discussion
reasoningThinking               Analyzing and synthesizing findings
generateReport                  Writing the final report
// Example: Check progress and handle all states in a loop
async function runResearch(userChatToken) {
  while (true) {
    await wait(30000);
    const { isRunning, messages } = (
      await callTool("atypica_universal_get_messages", { userChatToken, tail: 5 })
    ).structuredContent;

    if (isRunning) continue; // Still working

    const lastMsg = messages.at(-1);
    if (!lastMsg) break;

    // Check for interactions needing your input
    const pending = lastMsg.parts?.find(p =>
      p.state === "input-available" && p.type.startsWith("tool-")
    );
    if (pending) {
      await handleInteraction(userChatToken, lastMsg.messageId, pending);
      continue;
    }

    // Check if report is ready
    const reportPart = messages.flatMap(m => m.parts)
      .find(p => p.type === "tool-generateReport" && p.state === "output-available");

    if (reportPart?.output?.reportToken) {
      return reportPart.output.reportToken; // Done!
    }

    // Stopped without completing — nudge it forward
    await callTool("atypica_universal_send_message", {
      userChatToken,
      message: { role: "user", lastPart: { type: "text", text: "Please continue" } }
    });
  }
}
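The `handleInteraction` call above is left undefined in the source. One possible sketch, which auto-confirms research plans and picks the first option for clarifying questions (in practice you would surface these to the user; `callTool` is assumed to be in scope, and the reply shapes mirror the "Interactions" section):

```javascript
// Reply to a pending interaction part so the AI can resume its research.
async function handleInteraction(userChatToken, messageId, pending) {
  const output =
    pending.type === "tool-confirmPanelResearchPlan"
      ? { confirmed: true, plainText: "Confirmed, proceed" }
      : { answer: pending.input.options?.[0], plainText: "Auto-selected first option" };

  await callTool("atypica_universal_send_message", {
    userChatToken,
    message: {
      id: messageId,
      role: "assistant",
      lastPart: {
        type: pending.type,
        toolCallId: pending.toolCallId,
        state: "output-available",
        input: pending.input, // echo the original input back
        output,
      },
    },
  });
}
```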

Getting the Report

Once generateReport completes, retrieve the full report:

const report = await callTool("atypica_universal_get_report", {
  token: reportToken
});

console.log(report.structuredContent.title);       // e.g., "Fitness App Churn: User Perspectives"
console.log(report.structuredContent.description); // 1-paragraph summary
console.log(report.structuredContent.content);     // Full HTML report
console.log(report.structuredContent.shareUrl);    // https://atypica.ai/artifacts/report/{token}/share

The shareUrl is a public link you can share directly.


Error Handling

Quota exceeded — the sendMessage response will have status: "saved_no_ai" with reason: "quota_exceeded". Top up tokens at https://atypica.ai/account/tokens.

AI failed — status: "ai_failed". The message is saved; send another message to retry.

Connection timeout — if sendMessage times out, call getMessages to check isRunning. The AI may still be working in the background.
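The status cases above can be folded into one small classifier; the status strings come from this doc, but the overall response shape is an assumption:

```javascript
// Map a sendMessage response to a coarse next action for the caller.
function classifySendResult(res) {
  if (res.status === "saved_no_ai" && res.reason === "quota_exceeded") {
    return "top-up"; // buy tokens at https://atypica.ai/account/tokens
  }
  if (res.status === "ai_failed") {
    return "retry"; // the message is saved; send another message to retry
  }
  return "ok";
}
```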


Performance

Operation                          Typical Duration
Persona search                     < 2 seconds
Research plan generation           5–15 seconds
Interview (per persona)            20–40 seconds
Group discussion (5 personas)      30–90 seconds
Report generation                  30–60 seconds
Full interview study (5 people)    2–5 minutes

Full API Reference

See references/api-reference.md for complete input/output schemas, error codes, and additional workflow examples.
