Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Training Report

v1.0.0

Produce a professional training/workshop report as a .docx file. Use this skill whenever the user mentions "training report", "workshop report", "compte rend...

by Samuel Berthe (@samber)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for samber/training-report.

Prompt preview: Install & Setup
Install the skill "Training Report" (samber/training-report) from ClawHub.
Skill page: https://clawhub.ai/samber/training-report
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install training-report

ClawHub CLI

Package manager switcher

npx clawhub@latest install training-report
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description match the instructions: this is an instruction-only skill to draft training/workshop reports and produce a .docx. There are no unrelated environment variables, binaries, or installs declared. One mismatch to note: the SKILL.md's trigger guidance is very broad (many natural-language triggers like 'training') which risks activation for non-document tasks that use the word 'training' (the evals explicitly test for an ML training pipeline trap). Functionality requested (asking for session details, templates, generating .md/.docx) is proportionate to the stated purpose.
Instruction Scope
The runtime instructions are detailed and prescriptive (structured interview, step-by-step draft-and-approve flow, docx-template unpack/inject/repack). That scope is appropriate for generating reports, but the SKILL.md repeatedly instructs to 'Always use this skill' for many triggers which is too permissive. The risk: the skill's workflow could be applied in contexts where the user's intent is not to write a report (e.g., ML model training), causing unnecessary or confusing questions and potential disclosure of participant-sensitive information. The instructions ask for participant names, behaviors, and contact details (expected for reports) — these are sensitive PII and the skill does not include explicit safeguards beyond diplomatic guidance. The skill also instructs the agent to seek and use other skills (docx, humanizer) which is expected but broadens its runtime surface.
Install Mechanism
This is instruction-only with no install spec, no downloaded code, and no external package installs — lowest risk for install-time compromise.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. It does request user-supplied content (templates, participant feedback) which is reasonable for its purpose, but users should treat that content as potentially sensitive.
Persistence & Privilege
The skill is not 'always: true' and does not request elevated persistence. However, it is marked non-user-invocable (user-invocable: false) while allowing autonomous invocation by the agent (disable-model-invocation: false). Combined with the broad trigger guidance in the description, this increases the risk of unexpected autonomous activation. Autonomous invocation alone is normal, but here it amplifies the over-trigger concern.
What to consider before installing
This skill appears to be a coherent, instruction-only writer for training/workshop reports, but take three precautions before installing or enabling it:

  1. Trigger sensitivity: the skill's description and SKILL.md encourage activation on many natural-language 'training' phrases. If you also use the agent for non-document 'training' tasks (e.g., ML pipelines), the skill may misactivate and begin an interview workflow. Ask the platform owner to narrow or review the activation triggers, or to make the skill user-invocable so it runs only when explicitly requested.
  2. Data sensitivity: the interview asks for participant names, behaviors, and contact details. Only provide PII you are permitted to share. Prefer anonymized or group-level feedback where appropriate, and confirm any named negative feedback is acceptable to include before uploading or entering it.
  3. Docx dependency and file handling: the skill will try to load a separate 'docx' skill to build the .docx and may ask you to upload a template file. Confirm how your platform handles file unpacking/repacking and where output files will be written. Because this skill delegates .docx work to another skill, review that docx skill's permissions before proceeding.

If you want lower risk: keep this skill disabled for autonomous invocation and call it manually, or ask the maintainer to refine triggers (e.g., require an explicit 'write a training report' rather than any occurrence of 'training').

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📚 Clawdis
latest: vk97fqv3423qaj5af1jx3gryjm185fe56
54 downloads
0 stars
1 version
Updated 3d ago
v1.0.0
MIT-0

Training Report

Iterate the full report in Markdown first. Generate the .docx last, once, when the content is final. The .md is the canonical artifact; the .docx is a terminal derivative.

Discipline-agnostic: coding workshop, leadership seminar, safety training, onboarding, creative workshop — all apply equally.

Voice mode: this conversation may be conducted by voice. Transcription can introduce homophones, missing punctuation, or ambiguous proper nouns (names, company names, tool names). If any answer is unclear after transcription, ask a short clarifying question before moving on — do not guess.

Reference files

Load these files at the steps indicated. Do not load them all upfront.

  • references/tone-of-voice.md → Step 1 (after language + audience confirmed)
  • references/markdown-draft.md → Step 5 (before writing the draft)
  • references/docx-generation.md → Step 6 (before generating the .docx)

Step 0 — Check dependencies

Before asking the user anything, verify skill availability.

docx skill (required for Step 6)

  • If found: note it; load it at Step 6
  • If not found: warn the user — it is required to generate the final Word document and can be installed from Anthropic's official skill library. Offer to proceed with the Markdown draft in the meantime.

Humanizer skill (recommended)

  • After Step 1, look for a humanizer skill matching the chosen language
  • If found: load it and apply it during the humanization pass in Step 5
  • If not found: tell the user once, then fall back to inline humanization rules (Step 5b). Suggest installing a humanizer skill for the chosen language.

Step 1 — Language & audience

Ask:

  1. "In what language should I write the report? (French / English / other)"
  2. "Who is the primary reader? (executive / HR / direct management / external client / internal archive)"

Then load references/tone-of-voice.md and apply its guidance throughout.

Step 2 — Template

Ask:

"Do you have a Word (.docx) template for this report? (company header/footer, logo, branded fonts, color scheme)"

  • Yes → ask them to upload it; use it as the base at Step 6 (unpack/inject/repack)
  • No → proceed with a clean document; ask for a brand color before defaulting to blue #2E75B6

Step 3 — Interview

Conduct a structured interview in batches. Wait for answers before moving on. Before asking a question, extract anything the user has already told you earlier in the conversation so you do not re-ask it.

Batch A — Session metadata

  • Trainer name and role
  • Date, location, company/team name
  • Duration
  • Total number of participants
  • Confirm or refine: who is the document for?

Batch B — Session context

  • Stated goal of the training
  • Subject, topic, tool, or material used as practical support
  • Any rules or constraints set at the start
  • Materials, accounts, licenses, or equipment provided to participants

Batch C — Starting levels

  • Distribution of familiarity across the group (any beginners? any experts?)
  • Notable outliers at either end

Batch D — Session walkthrough

Walk through the session step by step. For each step:

  • Objective
  • What participants actually did
  • Materials, tools, or exercises involved
  • First exposure to this concept or not
  • How it landed; any difficulties

Probe until complete: "What happened next?", "Did anything go differently than planned?", "Were there any pivots?"

Batch E — Deliverables

Ask: "Did participants produce anything during the session?"

Probe for:

  • Documents, files, diagrams, prototypes, or any output created during exercises
  • Collaborative work produced as a group
  • Individual work produced autonomously
  • Anything left incomplete or started but not finished

These may appear in the Annexes and/or be referenced in the Session Walkthrough.

Batch F — General observations

  • Overall energy and engagement of the group
  • Any incidents, surprises, or notable moments
  • Schedule: did it hold, or were sections cut/extended?
  • Logistical issues (room, materials, setup)

General Observations is optional. If the trainer has nothing notable to add beyond the walkthrough, skip this section entirely.

Batch G — Individual feedback

Ask: "Do you have specific observations for any individual participant?"

For each named participant, extract:

  • Role or background
  • Starting level
  • Behavior/engagement (positive and negative)
  • Notable evolution, breakthrough, or resistance
  • How they ended the session

Individual Feedback is optional. Only write it if the trainer explicitly provides meaningful observations. Do not prompt for feedback on every participant.

Be diplomatic. Describe behaviors, not character. Name problems factually; do not editorialize. When writing for an external client about a team you don't know, consider whether naming individuals is appropriate at all.

See references/tone-of-voice.md — Diplomatic framing section.

Batch H — Recommendations & next steps

Ask: "What would you recommend to the direction/client to build on this session?"

Probe for:

  • Resources and access to provide (licenses, books, platforms, communities)
  • Practices to anchor in daily work
  • What to pace carefully — basics before advanced material
  • Follow-up sessions (refresher, coaching, Q&A after a few weeks)
  • Assessment and validation (quiz, practical challenge, peer review, checklist)
  • Knowledge-sharing rituals (Slack/Teams channel, recurring meeting, Loom demos, buddy system, monthly show-and-tell)
  • Management involvement (protect practice time, 1:1 check-ins, celebrate wins)
  • External resources (books, courses, certifications) for self-driven participants
  • Specific warnings or caveats for management

Batch I — Annexes

Ask: "Do you have any annexes to attach to the report?"

Annexes can include:

  • Photos from the session
  • Satisfaction survey results (NPS, ratings, verbatim comments)
  • Slides or handouts distributed during the session
  • Work produced by participants (exercises, prototypes, documents, diagrams)
  • Reference documents used during the session
  • Any other supporting material

For each annex:

  • Image → attempt auto-embed at Step 6
  • File (PDF, slides, spreadsheet) → reference in the Annexes section; do not embed
  • Survey data → synthesize in Step 4, then include as a dedicated section in the doc
  • Participant deliverable → reference in the relevant Walkthrough step AND in Annexes
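The routing rules above amount to a small dispatch table. A minimal sketch (the type names and action strings are illustrative assumptions, not part of the skill's spec):

```python
# Illustrative annex routing table; the keys and action strings are assumed.
ANNEX_ACTIONS = {
    "image": "attempt auto-embed at Step 6",
    "file": "reference in the Annexes section; do not embed",
    "survey": "synthesize in Step 4, then add a dedicated section",
    "deliverable": "reference in the Walkthrough step AND in Annexes",
}

def route_annex(annex_type: str) -> str:
    """Return the handling rule for an annex type, defaulting to a plain reference."""
    return ANNEX_ACTIONS.get(annex_type, "reference in the Annexes section")
```

The default branch mirrors the skill's safest fallback: when in doubt, reference the annex rather than embed it.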

Batch J — Closing & contact

Ask: "May I include a closing note thanking the team for the invitation, and your contact details for future collaboration? (email + phone)"

If yes: collect name, email, phone. The closing is written in the document language, personal in tone, brief. See references/markdown-draft.md — Closing paragraph section.

Step 4 — Feedback synthesis (if survey data provided)

Produce a synthesis in the conversation before drafting:

  • Overall score / NPS
  • Rating distribution
  • Top 3 positive themes
  • Top 3 areas for improvement
  • Any outlier responses

Ask the user to confirm before it enters the document.
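Assuming the survey uses the standard 0-10 NPS scale (promoters score 9-10, detractors 0-6, NPS = %promoters minus %detractors), the numeric part of the synthesis can be sketched as:

```python
from collections import Counter

def synthesize_nps(ratings: list[int]) -> dict:
    """Summarize 0-10 survey ratings: NPS score, distribution, and counts."""
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    nps = round(100 * (promoters - detractors) / total)
    return {
        "nps": nps,
        "distribution": dict(sorted(Counter(ratings).items())),
        "promoters": promoters,
        "detractors": detractors,
    }
```

The themes and outlier responses still require reading the verbatim comments; only the score and distribution are mechanical.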

Step 4b — Confirm outline

Here's what I'll draft:
1. Context
2. Starting Levels
3. Session Walkthrough (N steps)
4. General Observations         [optional — include if trainer provided content]
5. Participant Satisfaction     [only if survey data provided]
6. Individual Feedback          [optional — include if trainer provided feedback]
7. Recommendations & Next Steps
8. Annexes                      [only if annexes provided]
[Closing + contact]

Language: [language] | Audience: [target] | Template: [yes/no]

Ask: "Anything to adjust before I start the draft?"

Step 5 — Markdown draft

Load references/markdown-draft.md before writing. It contains the full section-by-section writing guide, Markdown limitations, HTML table workarounds, and closing paragraph guidance.

Humanization pass

Before presenting the draft, apply the humanizer skill (loaded in Step 0). If no humanizer skill is available, apply these rules inline:

  • Cut all AI throat-clearing openers and sentence starters
  • Cut adjective doublets — pick the more precise word
  • Replace passive voice with active wherever natural
  • Replace vague praise or criticism with specific behaviors or facts
  • Short sentences over long ones
  • Adapt to the document language (see references/tone-of-voice.md)

Do not present an un-humanized draft.
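The first inline rule (cutting throat-clearing openers) can be mechanized with a simple substitution pass. A minimal sketch, assuming a hand-maintained phrase list (the phrases below are examples, not the skill's actual list):

```python
import re

# Hypothetical opener list; extend per language (see tone-of-voice.md).
THROAT_CLEARING = [
    r"It(?:'s| is) worth noting that\s*",
    r"It should be noted that\s*",
    r"Needless to say,?\s*",
]

def strip_openers(text: str) -> str:
    """Remove common throat-clearing phrases, then re-capitalize sentence starts."""
    for pattern in THROAT_CLEARING:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    # Removal can leave a lowercase letter at a sentence boundary; fix it.
    return re.sub(r"(^|[.!?]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)
```

The remaining rules (doublets, passive voice, vague praise) need judgment, which is why the humanizer skill is preferred over a mechanical pass.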

Iteration loop

Present the draft inline in the conversation. Let the user lead. Update the .md file for every change. One canonical file, no versions. Only move to Step 6 when the user explicitly confirms the content is final.

Step 6 — Final .docx generation

Load references/docx-generation.md and the docx skill before starting.

This step runs once. It is terminal: if the user requests changes after the .docx is generated, update the .md and regenerate from scratch.

Deliver both files. If the environment supports inline file delivery (e.g. present_files on Claude.ai), use it. Otherwise, print the absolute paths to both files.
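A .docx file is a zip archive, so the template unpack/inject/repack flow delegated to the docx skill can be sketched with the standard library. This is an illustration of the mechanism only, not the docx skill's actual implementation; real templates also need styles, relationships, and XML escaping handled properly:

```python
import zipfile

def inject_into_template(template: str, output: str,
                         replacements: dict[str, str]) -> None:
    """Rebuild a .docx, replacing placeholder strings in word/document.xml."""
    with zipfile.ZipFile(template) as zin, \
         zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "word/document.xml":
                xml = data.decode("utf-8")
                for placeholder, value in replacements.items():
                    xml = xml.replace(placeholder, value)
                data = xml.encode("utf-8")
            # Every other part (styles, headers, media) is copied verbatim,
            # which is what preserves the template's branding.
            zout.writestr(item, data)
```

Copying all non-document parts unchanged is the point of the template approach: logo, fonts, and color scheme survive because only the body XML is touched.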

Pitfalls

  • Don't fabricate details — only document what the trainer explicitly provided
  • Don't editorialize in General Observations — factual only
  • Don't write Individual Feedback unless explicitly provided — and stay diplomatic
  • Don't pad recommendations — 6 sharp ones beat 12 vague ones
  • Always include a Pacing recommendation in the Next Steps
  • This skill is not developer-specific — adapt vocabulary to the discipline
  • Never generate the .docx mid-conversation — Markdown is the draft stage
  • Never skip an annex or image — embed, reference, or placeholder
