Install

`openclaw skills install git-worktree-setup`

Use when the user explicitly asks to "generate / update a git worktree auto-setup script for this repo." Note this skill is NOT triggered when a new worktree is created — the script and hook it produces are. The workflow is audit-the-repo-first: propose a draft plan, ask the user only the questions you can't infer, then write the tailored setup-worktree script plus the matching agent-tool hook config (Claude Code SessionStart / WorktreeCreate, Codex/Cursor manual + git hook, Gemini CLI, etc.). Also used to update an existing script as the project structure evolves.

Produces: a tailored `scripts/setup-worktree.sh` (or similar) for the current repo, the matching agent-tool hook config that auto-invokes it, and a manual entry point as a safety net.

Is NOT: the thing that runs every time a new worktree is created — that's the hook. This skill only runs when the user explicitly asks for it, e.g.:
"`git worktree add` requires me to manually install / copy `.env` — automate it"

Core working style: audit the repo yourself → put a concrete draft on the table → ask only the questions you couldn't infer → land it.
Don't dump 7 questions on the user upfront — that's annoying. Read the code, read the configs, infer everything inferable, walk in with a draft proposal, and let the user adjust.
This is dynamic inference, not checking boxes off a static list. The table below gives examples of common signals — you must expand the investigation based on what you actually find. Any tool, any stack, any project-specific convention is fair game. Read unfamiliar config files; grep for unknown CLI names; if the README / CONTRIBUTING / Makefile / justfile / Taskfile mentions "setup" / "bootstrap" / "install" steps, read them — they often hold the repo's own definition of "what a fresh machine needs."
Starting signals (examples, not exhaustive):
| What to look at | What to infer |
|---|---|
| `package.json` (root + `workspaces` field) | npm/pnpm/yarn? monorepo? which workspace globs? |
| `pnpm-workspace.yaml` / `lerna.json` / `nx.json` / `turbo.json` | confirms monorepo tooling |
| `pyproject.toml` / `Pipfile` / `uv.lock` / `requirements.txt` | Python? poetry / uv / pip? share `.venv`? |
| `Cargo.toml` (with `workspace` section) | Rust? share `target/`? |
| `go.mod` | Go? usually nothing to share |
| `Gemfile` / `mix.exs` / `composer.json` / `pubspec.yaml` etc. | other ecosystems — infer their deps dirs analogously |
| `.gitignore` | hunt ignored entries: `node_modules` / `.venv` / `.env` / `dist` / `*.state` etc. — these are the share/copy candidates; don't skip unfamiliar ignores either, they're often project-specific cache |
| `.env.example` / `.dev.vars.example` / `apps/*/.dev.vars.example` / `config/*.example` | hints at which secret / config files need Copy |
| `docker-compose*.yml` / `compose.yml` / `Dockerfile.dev` | stateful services list (pg/redis/mysql) + volume paths + host port bindings |
| `wrangler.toml` / `fly.toml` / `serverless.yml` / `terraform/*` etc. | various IaC / platform configs often imply a local state directory |
| Existence of `.claude/` / `.cursor/` / `.codex/` / `.aider*` / `.opencode/` etc. | infers the user's current agent tooling |
| `Makefile` / `justfile` / `Taskfile.yml` / `bin/setup` / `script/bootstrap` | project's own setup entry point — its install / link / copy steps are gold for worktree bootstrap |
| "Getting started" / "Local dev" sections in `README.md` / `CONTRIBUTING.md` / `docs/setup*.md` | the human-language "what a fresh machine needs" list |
| `.github/workflows/*.yml` / `.gitlab-ci.yml` etc. | how CI sets up the env ≈ what local probably needs |
| `scripts/setup-worktree.sh` / `bin/worktree-*` etc. | already exists? → decide new-build vs update mode |
| Whether main repo's `node_modules/`, `apps/*/node_modules/`, `.venv/` etc. actually exist on disk | validates the inference + tells you what's currently linkable |
| Any unfamiliar top-level directory | e.g. references/, vendor/, third_party/, fixtures/ — could be deliberate shared resources; ls inside before asking (symlinks? large files? test data?) |
Expand actively: each of the above can lead to further investigation. Reading `package.json`, you spot husky → check whether `.husky/` should be shared. You spot playwright → check whether the browser cache (`~/.cache/ms-playwright`) should be shared. Reading `pyproject.toml`, you find `[tool.uv]` → check where uv puts its cache. Don't skip something just because it isn't in the table.
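A minimal sketch of what the opening audit pass might look like as shell one-liners — the file names are common defaults, not an exhaustive checklist, and `audit_repo` is an illustrative name:

```shell
#!/usr/bin/env bash
# Quick-audit sketch: one cheap probe per signal class, run from the repo root.

audit_repo() {
  # Ecosystem / package-manager signals via manifests and lockfiles
  for f in package.json pnpm-lock.yaml yarn.lock package-lock.json \
           pyproject.toml uv.lock Cargo.toml go.mod; do
    [ -e "$f" ] && echo "manifest: $f"
  done

  # Share/copy candidates usually hide in .gitignore
  if [ -f .gitignore ]; then
    grep -nE 'node_modules|venv|\.env|dist|cache|state' .gitignore || true
  fi

  # The repo's own setup entry points (read these, don't just list them)
  for f in Makefile justfile Taskfile.yml bin/setup script/bootstrap; do
    [ -e "$f" ] && echo "setup entry: $f"
  done

  # Which agent tooling is already in use?
  ls -d .claude .cursor .codex .opencode 2>/dev/null || true
  return 0
}

audit_repo
```

Each hit is a lead to follow, not a conclusion — a matched `.gitignore` line still needs the share/copy/skip decision below.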
By the end of the audit, aggregate your findings into a resource-list proposal with the three tiers laid out. Give a concrete recommendation for everything you can infer; mark "needs confirmation" for what you can't. Example:
> I went through the repo: looks like an npm-workspaces monorepo (`apps/api`, `apps/web`, `packages/*`), Cloudflare Workers + Vite. `.claude/settings.json` exists, so Claude Code. Here's what I'd configure for worktree auto-init:
>
> - Share (symlink): `node_modules`, `apps/*/node_modules`, `packages/*/node_modules`
> - Copy: `apps/api/.dev.vars` (saw `.dev.vars.example`)
> - Hook: `.claude/settings.json` adds `SessionStart` calling `bash $(...)/scripts/setup-worktree.sh`
> - Manual entry point: `bash scripts/setup-worktree.sh` always works
>
> A few I need to confirm:
>
> - `.wrangler/state` (Cloudflare local D1/R2 state): share across worktrees? Share = all worktrees see the same local DB; isolated = each has its own. Depends on whether you'll run `dev` in multiple worktrees concurrently.
> - Besides Claude Code, do you / your team also use Codex / Cursor / Gemini CLI? Want hooks for those too?
> - Didn't see custom reference-repo / build-output sharing needs — confirm none?
>
> Look right?
After the user answers, only ask the necessary follow-ups based on what they said. Common follow-ups:
- If there's stateful data (`pgdata`, SQLite WAL), do you want concurrent isolation (Copy) or sharing (Share)? Default recommendation is Copy to prevent corruption.
- If their tool has no hook mechanism, confirm falling back to a `post-checkout` git hook.

Don't open another round of questions — decide what you can decide.
Then land it:

- write the `setup-worktree.sh` template into `<repo>/scripts/`
- install the hook config (`hook-config.json`) into `.claude/settings.json` (or whatever the tool's config location is; multiple tools = multiple hooks)
- verify `dev` / `test` end-to-end

When updating an existing script: use Edit to diff-edit the resource declarations block. Don't touch the helper functions unless they're genuinely outdated.
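The script's core helpers might be sketched as below — `link_resource` is the name the pitfalls section uses, `copy_resource` and the `$MAIN`/`$WORKTREE` variables are assumptions (resolve `$MAIN` via `git rev-parse --git-common-dir`):

```shell
#!/usr/bin/env bash
# Idempotent Share and Copy tiers: safe to re-run in an already-set-up worktree.

link_resource() {  # Share tier: symlink from the main checkout, skip if present
  local rel="$1" src="$MAIN/$1" dst="$WORKTREE/$1"
  [ -e "$src" ] || { echo "skip (missing in main): $rel"; return 0; }
  [ -e "$dst" ] && { echo "skip (exists): $rel"; return 0; }   # re-run safe
  mkdir -p "$(dirname "$dst")"
  ln -s "$src" "$dst" && echo "linked: $rel"
}

copy_resource() {  # Copy tier: copy once, later re-runs skip
  local rel="$1" src="$MAIN/$1" dst="$WORKTREE/$1"
  [ -e "$src" ] || return 0
  [ -e "$dst" ] && return 0
  mkdir -p "$(dirname "$dst")"
  cp -R "$src" "$dst" && echo "copied: $rel"
}
```

The `[ -e "$dst" ]` guard is what makes the manual entry point safe to run at any time, not just on worktree creation.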
| Tier | Examples | Why | How |
|---|---|---|---|
| Share (symlink) | node_modules, package-manager caches, stateful DB shared across worktrees | saves disk + install time; multiple worktrees see same data | ln -s $MAIN/<path> $WORKTREE/<path> |
| Copy | secrets / env files, signing keys, stateful files used concurrently | each worktree may diverge; mustn't break if main is deleted; mustn't get torn under concurrent writes | cp -R once (re-run skips) |
| Generate | dev port, COMPOSE_PROJECT_NAME, local socket | must differ per-worktree | hash(branch) % range for ports; clean_branch_name for container names |
Stateful data: under concurrent use it must be Copy (Postgres and SQLite WAL are single-writer); for sequential use it can be Share. This must go in Step 2's "needs confirmation" list — don't decide it for the user.
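The Generate tier from the table can be sketched like this — `clean_branch_name` and `branch_port` are illustrative helper names, and the 3000–3999 port band is an arbitrary choice:

```shell
#!/usr/bin/env bash
# Generate tier sketch: derive stable per-worktree values from the branch name
# so two concurrently running worktrees never collide.

clean_branch_name() {  # compose-safe: lowercase, non-alphanumerics -> '-'
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '-' | sed 's/-*$//'
}

branch_port() {  # hash(branch) % range, offset into a free port band
  local h
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo $((3000 + h % 1000))
}

branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo demo/branch)
echo "COMPOSE_PROJECT_NAME=$(clean_branch_name "$branch")"
echo "PORT=$(branch_port "$branch")"
```

Deriving both values from the branch name makes re-runs deterministic: the same worktree always gets the same port and container namespace.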
| Agent tool | Trigger mechanism |
|---|---|
| Claude Code | SessionStart hook (most portable) / WorktreeCreate hook (only fires for claude --worktree, strict stdout contract: print only path, progress to /dev/tty) |
| Codex | No equivalent hook mechanism currently — manual script + post-checkout git hook |
| Cursor | Same as above |
| Gemini CLI | Not sure — ask the user for docs |
| Aider / others | Not sure — ask the user for docs |
| Multi-tool / no agent tool | Universal fallback: post-checkout git hook + manual bash scripts/setup-worktree.sh |
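The universal-fallback row might be installed like this; the demo targets a throwaway repo so it runs anywhere, and the one-line setup script is a stand-in (real repos may also need `core.hooksPath` handling):

```shell
#!/usr/bin/env bash
# post-checkout fires after `git worktree add` as well as branch switches;
# its third argument is 1 for a branch checkout, 0 for a file checkout.
set -euo pipefail
repo=$(mktemp -d)
cd "$repo" && git init -q

mkdir -p scripts
printf 'echo "worktree setup ran in $PWD"\n' > scripts/setup-worktree.sh

cat > .git/hooks/post-checkout <<'EOF'
#!/usr/bin/env sh
# $1 = previous HEAD, $2 = new HEAD, $3 = 1 when switching branches
[ "$3" = "1" ] || exit 0
exec bash "$(git rev-parse --show-toplevel)/scripts/setup-worktree.sh"
EOF
chmod +x .git/hooks/post-checkout
echo "hook installed at $repo/.git/hooks/post-checkout"
```

Because the hook only delegates to `scripts/setup-worktree.sh`, the manual entry point and every agent-tool hook share one code path.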
Key principles:

- The manual entry point always works: `bash scripts/setup-worktree.sh`. Any hook breaking or tool-switching shouldn't leave you stuck.

| Resource | Default tier | Note |
|---|---|---|
| Root `node_modules/` | Share | npm workspaces hoist target |
| `apps/*/node_modules/`, `packages/*/node_modules/` | Share — don't skip | bundlers walk only one parent up from a workspace; subpath exports like `zod/v3` only exist in the workspace's local install |
| `.venv/`, `venv/` | Share (semi-readonly) | Python virtualenv shebangs are absolute-path-baked; same machine OK |
| Build cache (`.next/`, `target/`, `dist/`) | Skip or Share | usually rebuilds fast enough |
| `.env*`, `.dev.vars` | Copy | secrets can't be shared |
| `.wrangler/state`, `pgdata/`, `redis-data/` | Depends on concurrency — list as needs-confirmation | share/copy depends on concurrent use |
| Dev port | Generate | hash branch |
| `COMPOSE_PROJECT_NAME` | Generate | clean branch name |
| Custom (reference-repo symlinks, build artifacts, etc.) | Find during audit | if unsure, list as needs-confirmation in Step 2 |
Common pitfalls:

- Linking only the root `node_modules` and missing the workspaces: symptom `Could not read from file: .../zod/v4`. Loop both `apps/*` and `packages/*`.
- `git worktree add` polluting stdout (under the WorktreeCreate hook): redirect with `>/dev/null 2>&1`, send progress to `/dev/tty`.
- Sharing stateful data that a concurrent `dev` then corrupts: Step 2 must list it as needs-confirmation.
- Re-runs clobbering existing resources: `link_resource` handles this with `[ -e "$target" ]` before linking.
- Resolving the main repo path from inside a worktree: use `git rev-parse --git-common-dir`.
- `.env`: must be Copy (symlink edits leak across worktrees).

Deliverables: `setup-worktree.sh`, `hook-config.json`, `recipes.md`.

Acceptance checklist:

- the script survives a fresh `git worktree add` and exits 0
- `npm run dev` (or stack equivalent) actually starts, no "module not found"
- under WorktreeCreate: stdout contains only the worktree path
- the manual entry point (`bash scripts/setup-worktree.sh`) also runs
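The first two checklist items can be exercised with a smoke test along these lines — the throwaway repo and the minimal embedded setup script are stand-ins for the real generated one:

```shell
#!/usr/bin/env bash
# Smoke-test sketch: fresh worktree, run the setup script, check the link.
set -euo pipefail
main=$(mktemp -d)
cd "$main" && git init -q
mkdir -p scripts node_modules

cat > scripts/setup-worktree.sh <<'EOF'
#!/usr/bin/env bash
# resolve the main checkout from inside any linked worktree
main="$(cd "$(git rev-parse --git-common-dir)/.." && pwd)"
[ -e node_modules ] || ln -s "$main/node_modules" node_modules
EOF

git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm init

# checklist: survives a fresh `git worktree add` and exits 0
git worktree add "$main-wt" -b wt-smoke >/dev/null 2>&1
cd "$main-wt"
bash scripts/setup-worktree.sh && echo "setup exits 0"

# checklist proxy: the shared install is visible from the new worktree
[ -e node_modules ] && echo "node_modules linked"
```

Running `dev` itself and checking the WorktreeCreate stdout contract still need the real stack; this only automates the structural half of the checklist.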