Work Application

v1.0.10

CV generation, job scraping, offer analysis, and application tracking. Use when: (1) generating or adapting a CV/resume for a job offer, (2) scraping job off...

by Romain (@romain-grosos)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description match what the package does: HTML CV rendering, job scraping (optional via Playwright), analysis, and candidature tracking. Required resources (no env vars, optional Playwright for network scraping, optional Nextcloud integration delegated to another skill) are appropriate for those features.
Instruction Scope
SKILL.md explicitly documents commands, file paths (~/.openclaw/...), and a restrictive config.json that disables destructive/network capabilities by default. Instructions only reference expected artifacts (profiles, job markdown, reports); they do not instruct reading unrelated system files or exfiltrating data.
Install Mechanism
This is an instruction/code bundle with no install spec. The optional dependency (Playwright and a stealth plugin) is documented and used only for scraping/PDF export. No downloads from untrusted URLs or obscure install mechanisms appear in the metadata.
Credentials
The skill declares no required environment variables or credentials. Nextcloud integration is explicitly delegated to a separate skill (credentials not handled here). The requested permissions (writes to its own ~/.openclaw paths) are proportional to its tracking and storage functionality.
Persistence & Privilege
The skill is not forced-always; it runs when invoked. It stores data under its own config/data paths and provides a readonly_mode kill-switch and fine-grained allow_* flags. It does not modify other skills or global agent settings.
Assessment
This skill appears to do what it says. Before enabling features:

  • Keep allow_scrape=false unless you want network scraping: enabling it installs/uses Playwright, makes HTTP requests to job sites (Glassdoor, Indeed, etc.), and may be subject to site blocking or TOS issues.
  • Keep allow_write=false / readonly_mode=true if you don't want the skill to modify your master profile files.
  • Nextcloud use is optional and credentials are managed by a separate nextcloud skill; install that only if you trust it.
  • Review scripts/setup.py and scripts/init.py when first installing to confirm interactive prompts and file locations.
  • Run Playwright-related features in an environment you control (e.g., a VM or container) if you are concerned about automated browser network activity.

Overall the package is internally consistent, but only enable network/write capabilities if you trust the source and understand the consequences.


Latest: vk97fsg3t0fbz3yq78psa3ew2ns82sjbr


Runtime requirements

Clawdis

SKILL.md

Work Application Skill

Job search automation with CV generation, scraping, analysis, and tracking. 4 HTML CV templates (classic, modern-sidebar, two-column, creative), job scraping across 5 French platforms (Free-Work, WTTJ, Apec, HelloWork, LeHibou), keyword-based scoring, deep page analysis, full multi-dimension report (skills/company/location/salary with market data), and candidature tracking via markdown table. Stdlib only (except Playwright for scraping/analysis). Supports local and Nextcloud storage.

Profiles: ~/.openclaw/data/work-application/ · Config: ~/.openclaw/config/work-application/config.json

Trigger phrases

Load this skill when the user says anything like:

  • "generate my CV", "create a resume", "render my CV"
  • "adapt my CV for this job offer", "tailor my resume"
  • "scrape job offers", "find jobs", "search for devops positions"
  • "analyze job offers", "rank these jobs", "score job matches"
  • "deep analyze these jobs", "scrape and analyze job pages"
  • "analyze this job offer", "generate a report for this URL", "report on this job"
  • "track this application", "log candidature", "update application status"
  • "list my applications", "show candidatures"
  • "validate my CV", "check CV formatting"
  • "show my profile", "what's in my master profile"

Quick Start

python3 scripts/work_application.py profile show
python3 scripts/work_application.py render --template classic --output cv.html
python3 scripts/work_application.py report "https://example.com/job-offer"
python3 scripts/work_application.py track list

Setup

python3 scripts/setup.py       # interactive: profile + permissions + scraper config
python3 scripts/init.py        # validate configuration

config.json - behavior restrictions (all destructive capabilities disabled by default):

| Key | Default | Effect |
|---|---|---|
| allow_write | false | allow modifying master profile |
| allow_export | true | allow generating HTML/PDF |
| allow_scrape | false | allow running job scraper + network requests (requires Playwright) |
| allow_tracking | true | allow logging candidatures |
| default_template | "classic" | CV template when not specified |
| default_color | "#2563eb" | accent color |
| default_lang | "fr" | language (fr/en) |
| report_mode | "analysis" | report output: analysis or cv+analysis |
| readonly_mode | false | override: block all writes regardless of above |
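The defaults above correspond to a config.json along these lines (a sketch of the documented keys; the file's exact layout may differ):

```json
{
  "allow_write": false,
  "allow_export": true,
  "allow_scrape": false,
  "allow_tracking": true,
  "default_template": "classic",
  "default_color": "#2563eb",
  "default_lang": "fr",
  "report_mode": "analysis",
  "readonly_mode": false
}
```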

Storage & data

| Path | Written by | Purpose |
|---|---|---|
| ~/.openclaw/config/work-application/config.json | setup.py | Permissions and defaults. No secrets. |
| ~/.openclaw/data/work-application/profile-master.json | setup.py | Complete master profile (all experiences, skills, etc.) |
| ~/.openclaw/data/work-application/profile.json | Agent | Adapted profile for current job offer |
| ~/.openclaw/data/work-application/candidatures.md | Agent | Application tracking table |
| ~/.openclaw/data/work-application/jobs/jobs-found.md | Scraper | Raw scraped job offers |
| ~/.openclaw/data/work-application/jobs/jobs-ranked.md | Analyzer | Scored and ranked offers |
| ~/.openclaw/data/work-application/jobs/jobs-selected.md | Analyzer | Top selection (CDI + Freelance) |
| ~/.openclaw/data/work-application/market-data.json | Report | Market salary references (auto-generated, editable) |
| ~/.openclaw/data/work-application/reports/report-*.md | Report | Full analysis reports (markdown) |
| ~/.openclaw/data/work-application/reports/report-*.json | Report | Full analysis reports (JSON) |

Nextcloud storage (optional): when storage.backend = "nextcloud" is set in config, files are stored on the user's Nextcloud instance instead of locally. Authentication is delegated entirely to the openclaw-skill-nextcloud skill - this skill never handles Nextcloud credentials. The nextcloud skill must be installed separately.
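The backend switch can be pictured as a runtime import, roughly as in this sketch (`get_storage`, `LocalStorage`, and the `openclaw_skill_nextcloud` import path are illustrative names, not the skill's actual ones):

```python
from pathlib import Path


class LocalStorage:
    """Minimal stand-in for the local backend (illustrative only)."""

    def __init__(self, root):
        self.root = Path(root).expanduser()


def get_storage(config):
    """Pick a storage backend from config; default is local files."""
    backend = config.get("storage", {}).get("backend", "local")
    if backend == "nextcloud":
        # Delegated: imported only when selected, so the nextcloud skill
        # stays an optional dependency and credentials never live here.
        from openclaw_skill_nextcloud import NextcloudClient  # hypothetical path
        return NextcloudClient()
    return LocalStorage("~/.openclaw/data/work-application/")
```

Importing the client lazily keeps the skill fully functional without the nextcloud skill installed.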

Cleanup: python3 scripts/setup.py --cleanup

Uninstall: rm -rf ~/.openclaw/data/work-application/ ~/.openclaw/config/work-application/

Security model

Capability isolation

All capabilities are disabled or restricted by default. The agent cannot perform actions until explicitly enabled in config.json:

| Capability | Default | What it gates |
|---|---|---|
| allow_write | false | Modify master profile |
| allow_export | true | Generate HTML/PDF output |
| allow_scrape | false | Run Playwright browser, make network requests |
| allow_tracking | true | Append to candidature log |
| readonly_mode | false | Master kill-switch: blocks all writes |
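The kill-switch semantics amount to a simple gate in which readonly_mode wins over every allow_* flag; a minimal sketch (the function name is illustrative):

```python
def write_allowed(config, capability):
    """Return True only if `capability` (e.g. 'allow_write') is enabled
    and the global readonly_mode kill-switch is off."""
    if config.get("readonly_mode", False):
        return False  # kill-switch blocks all writes regardless of other flags
    return bool(config.get(capability, False))
```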

Credential isolation

This skill stores no secrets and requires no environment variables. config.json contains only behavioral flags and defaults - never credentials.

  • Local storage (default): reads/writes files under ~/.openclaw/data/work-application/. No authentication needed.
  • Nextcloud storage (optional): delegates all authentication to the openclaw-skill-nextcloud skill, which manages its own credentials (NC_URL, NC_USER, NC_APP_KEY in ~/.openclaw/secrets/nc_creds). This skill never reads, stores, or handles Nextcloud credentials directly - it imports NextcloudClient from the nextcloud skill at runtime. The nextcloud skill must be installed and configured separately (clawhub install nextcloud-files).
  • Scraping (optional, allow_scrape=true): Playwright runs without authentication. Job pages are fetched as anonymous HTTP requests. Company review scraping (Glassdoor/Indeed) also runs unauthenticated - no login or API key is used.

Path traversal protection

All storage operations (local and Nextcloud) validate filenames through _validate_name():

  • Rejects absolute paths (/etc/passwd, C:\...)
  • Rejects traversal sequences (../, ..\\, bare ..)
  • Rejects null bytes (\x00)
  • Normalizes slashes and collapses redundant separators
  • LocalStorage additionally resolves the final path and verifies it remains inside the storage root directory
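A minimal sketch of such a check (not the skill's exact `_validate_name()` implementation):

```python
import posixpath
import re


def validate_name(name):
    """Reject traversal attempts and normalize separators (sketch)."""
    if "\x00" in name:
        raise ValueError("null byte in filename")
    if name.startswith("/") or re.match(r"^[A-Za-z]:", name):
        raise ValueError("absolute paths are not allowed")
    # Normalize backslashes, then collapse redundant separators and '.' parts.
    normalized = posixpath.normpath(name.replace("\\", "/"))
    if normalized == ".." or normalized.startswith("../"):
        raise ValueError("path traversal is not allowed")
    return normalized
```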

HTML output safety

All profile fields are passed through html.escape() before HTML rendering. No raw user content is inserted into templates. This prevents XSS if the generated CV is served or shared.
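For example, escaping a field before templating renders any markup in user data inert; a sketch (not the skill's actual template code):

```python
import html


def render_header(profile):
    """Build a CV header fragment; every profile field is escaped first."""
    name = html.escape(profile.get("name", ""))
    title = html.escape(profile.get("title", ""))
    return f"<header><h1>{name}</h1><p>{title}</p></header>"
```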

Network boundaries

  • No network calls by default - CV generation, analysis, tracking are fully offline
  • Network calls only happen when allow_scrape=true, exclusively via Playwright (headless Chromium)
  • Domains contacted when scraping: free-work.com, welcometothejungle.com, apec.fr, hellowork.com, lehibou.com (job platforms), plus the specific job offer URL provided by the user
  • Domains contacted by report (optional, allow_scrape=true): glassdoor.fr, fr.indeed.com (company reviews - unauthenticated, anonymous)
  • Nextcloud storage (optional): contacts the user's own Nextcloud instance via the nextcloud skill - no third-party server
  • URLs built from user data (company names for review lookup) are properly URL-encoded
  • No background network activity, no telemetry, no phone-home

File output safety

  • Reports and CVs are written only to the configured storage directory (~/.openclaw/data/work-application/ or Nextcloud remote path)
  • Filenames derived from user data (company names) are strictly sanitized to ASCII alphanumeric + hyphens
  • Subdirectories (reports/, jobs/) are created inside the storage root only
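The filename rule described above (ASCII alphanumerics plus hyphens) can be sketched as follows (the function name and fallback value are assumptions):

```python
import re


def sanitize_filename(company):
    """Reduce a company name to ASCII alphanumerics and hyphens (sketch)."""
    ascii_only = company.encode("ascii", "ignore").decode()
    slug = re.sub(r"[^A-Za-z0-9]+", "-", ascii_only).strip("-").lower()
    return slug or "unnamed"
```

Note that non-ASCII characters are dropped rather than transliterated, so accented names lose letters in the slug.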

Module usage

from scripts._profile import load_master_profile, load_adapted_profile, save_adapted_profile
from scripts._cv_renderer import render_cv
from scripts._validators import validate_profile
from scripts._tracker import log_application, list_applications, update_status
from scripts._report import generate_report, format_report_markdown, save_report

CLI reference

# Profile
python3 scripts/work_application.py profile show
python3 scripts/work_application.py profile validate

# CV Rendering
python3 scripts/work_application.py render --template classic --output cv.html
python3 scripts/work_application.py render --template modern-sidebar --color "#e63946" --lang en

# Job Scraping
python3 scripts/work_application.py scrape
python3 scripts/work_application.py scrape --platforms free-work,wttj

# Job Analysis
python3 scripts/work_application.py analyze

# Deep Analysis (scrape job pages for detailed matching)
python3 scripts/work_application.py deep-analyze
python3 scripts/work_application.py deep-analyze --max 10

# Report (full multi-dimension analysis of a single offer)
python3 scripts/work_application.py report "https://example.com/job-offer"

# Application Tracking
python3 scripts/work_application.py track list
python3 scripts/work_application.py track list --status en_attente
python3 scripts/work_application.py track add "Thales" "DevOps Engineer" --location Paris --salary "55-65k"
python3 scripts/work_application.py track update "Thales" entretien

# Config
python3 scripts/work_application.py config

Templates

Adapt CV for a job offer

from scripts._profile import load_master_profile, save_adapted_profile
from scripts._cv_renderer import render_cv
from scripts._validators import validate_profile

# 1. Load master profile
master = load_master_profile()
# 2. Agent adapts profile for the job (select relevant skills, rewrite summary, etc.)
adapted = adapt_for_job(master, job_description)  # agent logic
# 3. Validate
report = validate_profile(adapted)
if not report["valid"]:
    print("Errors:", report["errors"])
# 4. Save and render
save_adapted_profile(adapted)
html = render_cv(adapted, template="classic", color="#2563eb")

Scrape → Analyze → Track

# 1. Scrape jobs
import asyncio
from scripts._scraper import JobScraper, filter_jobs, deduplicate
scraper = JobScraper(config)  # config loaded from config.json
jobs = asyncio.run(scraper.scrape_all())
jobs = deduplicate(filter_jobs(jobs, config["scraper"]["filters"]))

# 2. Analyze
from scripts._analyzer import rank_jobs, select_top
ranked = rank_jobs(jobs, master_profile)
selected = select_top(ranked)

# 3. Track top matches
from scripts._tracker import log_application
for job in selected["cdi"][:5]:
    log_application(job["company"], job["title"], location=job.get("location",""))
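rank_jobs above is described as keyword-based scoring; a minimal sketch of the idea (field names and the scoring rule are assumptions, not the analyzer's actual logic):

```python
def score_job(job, profile_keywords):
    """Count profile keywords appearing in the offer's title and description."""
    text = (job.get("title", "") + " " + job.get("description", "")).lower()
    return sum(1 for kw in profile_keywords if kw.lower() in text)


def rank_jobs_sketch(jobs, profile_keywords):
    """Sort offers by descending keyword score."""
    return sorted(jobs, key=lambda j: score_job(j, profile_keywords), reverse=True)
```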

Quick candidature update

from scripts._tracker import update_status, list_applications
update_status("Thales", "entretien")
active = list_applications(status="en_attente")
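Since candidatures.md is a plain markdown table, log_application amounts to appending a row; a minimal sketch (the column layout is an assumption, not the skill's exact schema):

```python
from datetime import date
from pathlib import Path


def append_candidature(path, company, title, status="en_attente"):
    """Append one markdown table row, creating the header on first use (sketch)."""
    path = Path(path)
    if not path.exists():
        path.write_text("| Date | Company | Title | Status |\n|---|---|---|---|\n")
    with path.open("a") as f:
        f.write(f"| {date.today().isoformat()} | {company} | {title} | {status} |\n")
```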

CV Templates

| Template | Description | Best for |
|---|---|---|
| classic | Single-column, ATS-optimized | Most applications, ATS systems |
| modern-sidebar | Sidebar (35%) + main (65%) | Tech companies, startups |
| two-column | Two-column grid (38/62) | Creative roles, design-aware |
| creative | Timeline, gradient header | Personal branding, portfolios |

Status icons

| Status | Icon | Meaning |
|---|---|---|
| en_attente |  | Waiting for response |
| entretien | 📞 | Interview scheduled |
| negociation | 🤝 | In negotiation |
| offre |  | Offer received |
| refus |  | Rejected |
| desistement | 🚫 | Withdrawn |

Scraper platforms

| Platform | URL | Types | Notes |
|---|---|---|---|
| Free-Work | free-work.com | Freelance | TJM parsing, remote detection |
| WTTJ | welcometothejungle.com | CDI | Salary ranges, company pages |
| Apec | apec.fr | CDI/Cadre | Executive positions |
| HelloWork | hellowork.com | CDI | Broad coverage |
| LeHibou | lehibou.com | Freelance | IT freelance missions |

Ideas

  • Set allow_scrape: false + allow_export: true for a CV-only mode
  • Use scraper with allow_tracking: true to auto-log the best matches
  • Adapt CV per job offer, validate, render, then log the application
  • Use readonly_mode: true for a safe demo mode

Notes

  • Playwright dependency: Only needed for scraping. CV generation is stdlib only.
  • Profile structure: Master profile contains ALL data. Adapted profile is a filtered subset for a specific job.
  • Validators: Port of the JavaScript validators - same limits and thresholds.
  • i18n: CV rendering supports French and English section headings.
  • Print-ready: Generated HTML includes @media print rules and A4 page breaks.

Combine with

| Skill | Workflow |
|---|---|
| ghost | Generate CV → publish as Ghost page |
| nextcloud | Save rendered CV to Nextcloud |
| gmail | Send CV as email attachment |
| veille | Monitor job market trends → adapt search queries |

API reference

See references/api.md for CLI command details and profile schema.

Troubleshooting

See references/troubleshooting.md for common errors and fixes.

Files

19 total
