Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Interview Question Gen

v1.0.0

Generate structured WePlay activity operations interview questions from a resume and append a detailed evaluation using the interview transcript in a Feishu...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for funkeyyou/interview-question-gen.

Prompt Preview: Install & Setup
Install the skill "Interview Question Gen" (funkeyyou/interview-question-gen) from ClawHub.
Skill page: https://clawhub.ai/funkeyyou/interview-question-gen
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install interview-question-gen

ClawHub CLI


npx clawhub@latest install interview-question-gen
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description match its instructions: it converts resumes into Feishu interview-question documents and appends evaluations generated from interview transcripts. However, the SKILL.md refers to PyMuPDF and a local script (feishu_bot_doc.mjs) that are neither declared nor provided. The instructions also hard-code a Feishu folder ID and a default collaborator (ou_8b357150...), which is plausible for an internal tool but unexpected in a generic skill, and could cause documents to be saved and shared without explicit user consent.
Instruction Scope
Instructions ask the agent to read local resume files (PDF → /tmp/*.png) and to fetch a Feishu wiki URL — appropriate for the task. Concern: the doc creation step relies on an external node script (feishu_bot_doc.mjs) and agent actions like feishu_doc append/read. The behavior will write candidate data into a specific default folder and add a specific collaborator by default; that is a potential privacy/data-sharing surprise. The SKILL.md does not instruct any unrelated file/credential reads, but the unspecified node script could itself perform additional actions (not visible here).
Install Mechanism
This is an instruction-only skill with no install spec (low disk-write risk). However it references third-party libraries (PyMuPDF via import fitz) and a local node script. Because they are not bundled or declared, the agent/platform must already provide them — if not, the steps will fail or the user/operator might install ad-hoc tools. Missing dependency documentation is a practical risk and should be clarified.
Credentials
The skill declares no required environment variables or credentials, but its actions require Feishu access (feishu_doc actions / node script). The hard-coded default folder ID and collaborator imply that outputs will be stored/shared to a specific organization/person; that is a disproportionate/opaque sharing decision for a general-purpose interview generator. No other unrelated credentials are requested.
Persistence & Privilege
always:false and no install spec means the skill does not request persistent, forced inclusion or system-level privileges. It does request write access to a Feishu document (expected for the described function). There is no evidence it modifies other skills or system configuration.
What to consider before installing
Before installing or running this skill, confirm these points:

  1. Where will generated documents be stored, and who will be added as a collaborator? The SKILL.md hard-codes a Feishu folder ID and a collaborator (ou_8b357150...), so verify you want candidate data shared there.
  2. The runtime expects PyMuPDF (fitz) and a local script named feishu_bot_doc.mjs; neither is included. Ask the author or your platform operator for the exact dependencies, and inspect the feishu_bot_doc.mjs source to ensure it does not exfiltrate data or call unexpected endpoints.
  3. Ensure your Feishu integration credentials are scoped appropriately (limit write scope to the intended folder) and test the skill with non-sensitive sample data first.
  4. If you need a generic/public skill, request removal of the hard-coded folder/collaborator defaults, or ask for them to be made configurable or prompted at runtime.

If the maintainer cannot supply the missing scripts/dependencies or justify the default collaborator, treat the skill as untrusted.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975pv6bfkq4g4s51s0xzts02x834djp
220 downloads
0 stars
1 version
Updated 10h ago
v1.0.0
MIT-0

Interview Question Generator & Evaluator

Two-phase workflow for WePlay activity operations (活动运营) interviews using Feishu docs.

Phase 1: Resume → Interview Question Document

Step 1: Read the Resume

If the resume is a PDF attachment, render each page as an image (/tmp/resume_p{n}.png) using PyMuPDF and read them visually:

import fitz  # PyMuPDF installs under the import name "fitz"

doc = fitz.open("/path/to/resume.pdf")
for i, page in enumerate(doc):
    # 1.5x zoom keeps small resume text legible in the rendered page images
    page.get_pixmap(matrix=fitz.Matrix(1.5, 1.5)).save(f"/tmp/resume_p{i+1}.png")
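Since PyMuPDF is referenced but not declared as a dependency (a point the security scan above also flags), a small pre-flight check avoids failing midway through rendering. A minimal sketch, assuming a standard Python environment:

```python
import importlib.util

def have_pymupdf() -> bool:
    # PyMuPDF installs under the import name "fitz"; probe for the module
    # without importing it, so a missing dependency is reported cleanly.
    return importlib.util.find_spec("fitz") is not None

if not have_pymupdf():
    print("PyMuPDF not found; install it with: pip install pymupdf")
```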

Extract key info: work experience, skills, education, highlights.

Step 2: Read WePlay Product Context

Before generating questions, fetch the WePlay product framework doc to understand product positioning:

Step 3: Generate Interview Questions

Structure the document into these sections. See references/question-template.md for the full question template and scoring rubrics.

Document sections:

  1. 破冰与自我介绍 (icebreaker & self-introduction; 2 questions)
  2. 结合简历的深挖问题 (resume deep-dive questions; 4–6 questions, grouped by employer)
  3. 活动运营能力考察 (activity-operations skills; 4 questions: scenario planning, data, cross-team collaboration)
  4. 日语与本地化能力 (Japanese & localization ability; 3 questions, tailored to the Japanese market)
  5. WePlay 产品体验问题 (WePlay product-experience questions; 5 questions — require the candidate to pre-download WePlay)
  6. 价值观与潜力考察 (values & potential; 4 questions, including open Q&A)
  7. 日本語口頭試問 (Japanese oral exam; 6 questions, all in Japanese — no Chinese)

Tailor questions to the specific candidate's background. Reference their actual projects, metrics, and employers by name.
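The section list above can be turned into a reusable document skeleton before the per-candidate questions are filled in. A minimal sketch (the helper name and Markdown layout are illustrative, not part of the skill):

```python
# Section titles and question counts taken from the template above.
SECTIONS = [
    ("破冰与自我介绍", 2),
    ("结合简历的深挖问题", 6),  # template allows 4-6; upper bound used here
    ("活动运营能力考察", 4),
    ("日语与本地化能力", 3),
    ("WePlay 产品体验问题", 5),
    ("价值观与潜力考察", 4),
    ("日本語口頭試問", 6),
]

def build_skeleton(candidate: str, role: str) -> str:
    # Title carries the mandatory 【AI生成】 prefix from the skill's notes.
    lines = [f"# 【AI生成】{candidate} {role} 面试题集", ""]
    for i, (title, count) in enumerate(SECTIONS, 1):
        lines.append(f"## {i}. {title}")
        lines.extend(f"{n}. " for n in range(1, count + 1))  # placeholders
        lines.append("")
    return "\n".join(lines)
```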

Step 4: Create Feishu Document

Use feishu_bot_doc.mjs to create the document (the title template reads: 【AI生成】 "AI-generated" prefix, candidate name, position, then 面试题集 "interview question set"):

cat /tmp/interview_questions.md | node scripts/feishu_bot_doc.mjs create \
  --title "【AI生成】{候选人姓名} {岗位} 面试题集" \
  --stdin \
  --folder AZ3nfFtial4bHTdOFahcdcfxnub \
  --collaborator ou_8b357150cff930fca19a733461a32526

Reply with the document URL. Tell the user to send the interview transcript when ready.


Phase 2: Interview Transcript → Evaluation

Step 1: Read the Transcript

Accept the transcript as:

  • A Feishu doc link → use feishu_doc read action
  • A pasted text block → read directly
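Distinguishing the two transcript inputs can be sketched as follows. The URL pattern is an assumption about typical Feishu doc links (`.feishu.cn/docx/<token>` or `/wiki/<token>`), not something the skill specifies:

```python
import re

# Rough match for Feishu document links; real tenants may use other hosts.
FEISHU_DOC = re.compile(
    r"https?://[^\s/]*feishu[^\s/]*/(?:docx?|wiki)/([A-Za-z0-9]+)"
)

def transcript_source(text: str) -> tuple[str, str]:
    """Return ("feishu_doc", doc_token) for a link, else ("pasted_text", text)."""
    match = FEISHU_DOC.search(text.strip())
    if match:
        return ("feishu_doc", match.group(1))
    return ("pasted_text", text)
```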

Step 2: Write Evaluation

Append the evaluation to the existing interview question document (not a new doc). Use feishu_doc append action on the same doc_token.

See references/evaluation-template.md for the full evaluation structure and scoring rubrics.

Evaluation structure:

  1. 总体印象 (overall impression; 1–2 sentences, with an overall rating: 优秀/良好/中等/中等偏下/不建议录用, i.e. excellent / good / average / below average / do not hire)
  2. 各维度评价 (per-dimension evaluation) with ⭐ ratings (1–5 stars each):
    • 过往经验匹配度 (past-experience fit)
    • 活动策划思维 (event-planning thinking)
    • 数据分析能力 (data-analysis ability)
    • 产品认知与洞察 (product understanding & insight)
    • 日本市场理解 (Japanese-market understanding)
    • 表达与沟通 (expression & communication)
  3. 亮点 (highlights; bullet list)
  4. 主要风险 (key risks; bullet list)
  5. 结论 (conclusion: 录用 hire / 待定 hold / 不建议录用 do not hire, with reasoning)

Be specific: quote actual interview moments, not generic observations.
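The per-dimension ⭐ ratings can be rendered consistently with a small helper before appending the evaluation. This is illustrative only; the skill does not prescribe an implementation:

```python
def rating_line(dimension: str, score: int) -> str:
    # Clamp to the 1-5 scale used by the evaluation template,
    # then render the dimension as a bullet with filled stars.
    score = max(1, min(5, score))
    return f"• {dimension}：{'⭐' * score}（{score}/5）"
```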


Notes

  • Japanese oral-exam questions (Section 7) must be written entirely in Japanese — no Chinese text.
  • Always add the 【AI生成】 ("AI-generated") prefix to document titles.
  • Default save folder: AZ3nfFtial4bHTdOFahcdcfxnub
  • Default collaborator: ou_8b357150cff930fca19a733461a32526 (吴柏庆)
  • If the interview transcript doc was auto-generated by Feishu 智能纪要 (Smart Minutes), the bot has no write permission on it — append to the question doc instead.
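The title-prefix rule from the notes above can be enforced defensively before calling the doc-creation script. A hypothetical helper, not part of the skill:

```python
PREFIX = "【AI生成】"

def ensure_ai_prefix(title: str) -> str:
    # Idempotent: already-prefixed titles pass through unchanged.
    return title if title.startswith(PREFIX) else PREFIX + title
```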
