Interview System Designer

Audited by ClawScan on May 1, 2026.

Overview

The skill appears to be a coherent local toolkit for interview process design, but it can process sensitive hiring and demographic data that should be handled carefully.

This skill looks appropriate for designing and calibrating interview processes, but use it carefully with real hiring data. Prefer anonymized candidate and interviewer IDs, secure any generated reports, and keep human HR/legal review in the loop for employment decisions.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Running the examples will execute included scripts and create analysis outputs in user-selected paths.

Why it was flagged

The skill documents user-run local Python tools that read input files and generate output files; this is central to the stated purpose, but it is still local code execution and file handling.

Skill content
python hiring_calibrator.py --input interview_data.json --output calibration_report.json --analysis-type full
Recommendation

Run the scripts only from the intended skill directory, review input and output paths, and avoid using sensitive real data unless you are authorized to do so.
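One way to follow this recommendation is a small wrapper that resolves all paths against the skill directory and refuses anything outside it. This is a sketch: the `hiring_calibrator.py` entry point and its `--input`/`--output`/`--analysis-type` flags come from the documented command above, while the wrapper itself (`safe_path`, `run_calibrator`) is illustrative.

```python
#!/usr/bin/env python3
"""Run the calibrator only on files inside the skill directory (sketch).

The CLI flags mirror the documented command; the path-confinement logic
is an assumption about how a cautious user might invoke the tool.
"""
import subprocess
import sys
from pathlib import Path

SKILL_DIR = Path(__file__).resolve().parent  # the intended skill directory


def safe_path(name: str) -> Path:
    """Resolve a path and reject anything outside the skill directory."""
    p = (SKILL_DIR / name).resolve()
    if SKILL_DIR not in p.parents and p != SKILL_DIR:
        raise ValueError(f"refusing path outside skill directory: {p}")
    return p


def run_calibrator(input_name: str, output_name: str) -> None:
    """Invoke the documented command with confined input/output paths."""
    subprocess.run(
        [sys.executable, str(SKILL_DIR / "hiring_calibrator.py"),
         "--input", str(safe_path(input_name)),
         "--output", str(safe_path(output_name)),
         "--analysis-type", "full"],
        check=True,
        cwd=SKILL_DIR,  # run from the intended skill directory
    )
```

Reviewing `safe_path` failures before retrying keeps accidental reads of unrelated files (for example, a real HR export elsewhere on disk) from ever reaching the script.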

What this means

Calibration reports or saved inputs could expose candidate identities, protected demographic attributes, interviewer behavior, or hiring recommendations if shared or stored carelessly.

Why it was flagged

The calibration workflow is explicitly designed to process candidate scores, interviewer feedback, and demographic attributes, which are sensitive employment-related data.

Skill content
Interview results data (candidate scores, interviewer feedback, demographics)
Recommendation

Use anonymized identifiers where possible, limit access to input and output files, store reports securely, and follow company HR/legal policies for protected demographic data.
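As a sketch of the anonymization advice, identifiers can be replaced with keyed pseudonyms before the data reaches the calibrator. The field names (`candidate_id`, `interviewer_id`, `candidate_name`) are assumptions for illustration, not taken from the skill's actual schema.

```python
import hashlib
import hmac

# Secret pepper kept out of the data files (e.g. in a secrets manager);
# without it, pseudonyms cannot be reversed by hashing known identifiers.
PEPPER = b"rotate-me-and-store-in-a-secrets-manager"


def pseudonymize(value: str) -> str:
    """Stable, keyed pseudonym for an identifier (HMAC-SHA256, truncated)."""
    return hmac.new(PEPPER, value.encode(), hashlib.sha256).hexdigest()[:16]


def anonymize_record(record: dict) -> dict:
    """Replace direct identifiers in one record; field names are illustrative."""
    out = dict(record)
    for field in ("candidate_id", "interviewer_id"):
        if field in out:
            out[field] = pseudonymize(str(out[field]))
    out.pop("candidate_name", None)  # drop free-text identifiers entirely
    return out
```

Because the same pepper yields the same pseudonym across runs, the calibration workflow can still correlate a candidate's records while the raw identifiers never appear in inputs or reports.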

What this means

Users could over-rely on generated rubrics or recommendations in a high-stakes hiring process.

Why it was flagged

The generated artifacts include hiring recommendation categories, which can influence real employment decisions even though they are purpose-aligned.

Skill content
"overall_recommendation": { "options": [ "Strong Hire", "Hire", "No Hire", "Strong No Hire" ]
Recommendation

Treat outputs as decision-support materials, not final hiring decisions; validate them with trained interviewers, HR policy, and legal/compliance review.
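A gating step along these lines could enforce the decision-support framing in code. This is a sketch: the recommendation options match the skill content quoted above, but the shape of the generated report (a top-level `overall_recommendation` value) and the function itself are assumptions.

```python
import json

# The four options documented in the skill content above.
VALID_OPTIONS = {"Strong Hire", "Hire", "No Hire", "Strong No Hire"}


def load_advisory_recommendation(report_json: str) -> dict:
    """Parse a calibration report (assumed shape) and wrap its recommendation
    as advisory-only; raises if the value is outside the documented set."""
    report = json.loads(report_json)
    rec = report.get("overall_recommendation")
    if rec not in VALID_OPTIONS:
        raise ValueError(f"unexpected recommendation: {rec!r}")
    return {
        "model_recommendation": rec,
        "status": "advisory",           # decision support, not a decision
        "requires_human_review": True,  # trained interviewers + HR/legal
    }
```

Downstream tooling that only accepts records with `status == "advisory"` and `requires_human_review` set makes it structurally harder for the generated category to flow straight into an employment decision.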