## Install

    openclaw skills install client-onboarding-agent

Client onboarding and business diagnostic framework for AI agent deployments. This isn't a sales process; it's a diagnostic process. You're figuring out what's broken, what can be automated, and what the constraints are before you promise anything.
Every client engagement starts with four rounds of structured discovery. Each round has a specific purpose and produces a specific artifact. Do not skip rounds. Do not combine rounds.
## Round 1: Pain Points

Purpose: Understand what hurts and what they're already using.
Duration: 30-60 minutes
Questions to ask:
What you're listening for:
Artifact: Pain Point Map
## Pain Point Map — [Client Name]
Date: [Date]
### Critical Pain Points (daily impact)
1. [Pain point]: Currently handled by [who] using [tool]. Takes [time].
2. [Pain point]: Currently handled by [who] using [tool]. Takes [time].
### Significant Pain Points (weekly impact)
1. [Pain point]: Currently handled by [who] using [tool]. Takes [time].
### Chronic Pain Points (ongoing frustration)
1. [Pain point]: No current solution / workaround is [description].
### Current Tool Stack
- [Tool 1]: Used for [purpose]. Satisfaction: [1-5]
- [Tool 2]: Used for [purpose]. Satisfaction: [1-5]
- [Tool 3]: Used for [purpose]. Satisfaction: [1-5]
## Round 2: Workflow Mapping

Purpose: Map how information actually flows through the business. Not the org chart: the real flow.
Duration: 45-90 minutes
Questions to ask:
What you're mapping:
Artifact: Workflow Diagram
[Trigger] → [Step 1: who/tool] → [Decision?] → [Step 2: who/tool] → [Output]
↓
[Alternative path]
Create one diagram per major workflow. Mark each step with:
## Round 3: Constraints

Purpose: Identify what will block or limit the deployment. This is where most onboardings fail: people skip constraint analysis and then hit walls during implementation.
Duration: 30-60 minutes
The 6 Constraint Categories (probe each with its own set of questions):

1. Technical
2. Financial
3. Regulatory
4. Organizational
5. Data
6. Timeline
Artifact: Constraint Matrix
## Constraint Matrix — [Client Name]
| Category | Constraint | Severity | Mitigation |
|----------------|-----------------------------------|----------|-------------------------------|
| Technical | Legacy payroll system, no API | High | Manual bridge or CSV export |
| Financial | $X/month max budget | Medium | Prioritize highest-ROI agents |
| Regulatory | HIPAA applies to patient data | High | On-premise only, no cloud API |
| Organizational | Owner travels 2 weeks/month | Medium | Async onboarding + mobile |
| Data | 3 years of data in spreadsheets | Low | One-time import project |
| Timeline | Tax season starts in 8 weeks | High | Deploy accounting agent first |
## Round 4: Solution Design

Purpose: Based on Rounds 1-3, design the actual deployment plan and prioritize what gets built first.
Duration: 60-90 minutes (may include follow-up)
Process:

1. Match pain points to automatable workflows
2. Prioritize by score
3. Design the deployment plan
4. Set completion contracts for each phase (see below)
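The scoring step above can be sketched as a small prioritizer. The weighting formula (impact × feasibility ÷ effort) and the 1-5 scales are illustrative assumptions; the document doesn't prescribe a specific rubric.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A pain point matched to an automatable workflow (step 1 above)."""
    name: str
    impact: int       # 1-5: severity from the Round 1 Pain Point Map
    feasibility: int  # 1-5: inverse of Round 3 constraint severity
    effort: int       # 1-5: estimated build effort

    @property
    def score(self) -> float:
        # Assumed formula: high impact and feasibility raise priority,
        # high effort lowers it.
        return self.impact * self.feasibility / self.effort

def prioritize(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates into Phases 1..N, highest score first."""
    return sorted(candidates, key=lambda c: c.score, reverse=True)

plan = prioritize([
    Candidate("Invoice processing", impact=5, feasibility=4, effort=2),
    Candidate("Email triage", impact=3, feasibility=5, effort=2),
])
# plan[0] becomes Phase 1 in the Deployment Plan artifact
```

Whatever rubric you use, write the scores down: the Deployment Plan artifact should be defensible from the Round 1-3 artifacts, not from gut feel.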
Artifact: Deployment Plan
## Deployment Plan — [Client Name]
### Phase 1 (Weeks 1-2): [Name of automation]
- Pain point addressed: [from Round 1]
- Workflow automated: [from Round 2]
- Constraints mitigated: [from Round 3]
- Completion contract: [see below]
- Expected impact: [specific, measurable]
### Phase 2 (Weeks 3-4): [Name of automation]
[Same structure]
### Phase 3 (Weeks 5-6): [Name of automation]
[Same structure]
## Completion Contracts

Every deliverable gets a completion contract. No ambiguity. No "it's mostly done." Done is binary.
## Completion Contract: [Deliverable Name]
### Done Criteria (ALL must be true)
1. [Specific, observable criterion]
2. [Specific, observable criterion]
3. [Specific, observable criterion]
### Observable Evidence
- [ ] [What you can see/verify to confirm criterion 1]
- [ ] [What you can see/verify to confirm criterion 2]
- [ ] [What you can see/verify to confirm criterion 3]
### Staged Approval
- Stage 1: Internal verification (we confirm it works)
- Stage 2: Client demo (client sees it work)
- Stage 3: Client independent use (client uses it without help)
### Timeout Bounds
- Expected completion: [date]
- Hard deadline: [date]
- If not complete by hard deadline: [what happens — usually rescope]
## Completion Contract: Automated Invoice Processing
### Done Criteria
1. Agent can read incoming invoices from email attachments (PDF, image)
2. Agent correctly extracts vendor, amount, date, and line items with >95% accuracy
3. Agent creates corresponding entry in QuickBooks with correct categorization
4. Agent flags anomalies (unusual amounts, new vendors) for human review
### Observable Evidence
- [ ] Process 20 test invoices with known correct values; >19 match
- [ ] New vendor triggers human review notification (test with 3 new vendors)
- [ ] QuickBooks entries match invoice data exactly (spot-check 10)
- [ ] Agent handles unreadable invoices gracefully (flags, doesn't guess)
### Staged Approval
- Stage 1: We process 50 historical invoices, verify accuracy
- Stage 2: Client watches live processing of 5 real invoices
- Stage 3: Client runs independently for 5 business days, reports issues
### Timeout Bounds
- Expected: 10 business days from deployment start
- Hard deadline: 15 business days
- If missed: Rescope to manual-assist mode (agent prepares, human confirms)
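The Observable Evidence checks lend themselves to a small verification harness. This is a sketch under assumptions: extraction results are plain dicts, and the field names mirror Done Criterion 2; it is not the deployed agent's API.

```python
REQUIRED_FIELDS = ("vendor", "amount", "date")

def matches(extracted: dict, expected: dict) -> bool:
    """An invoice counts as correct only if every required field is exact."""
    return all(extracted.get(f) == expected[f] for f in REQUIRED_FIELDS)

def accuracy(results: list[tuple[dict, dict]]) -> float:
    """Fraction of (extracted, expected) pairs that match exactly."""
    if not results:
        return 0.0
    return sum(matches(got, want) for got, want in results) / len(results)

def passes_contract(results, threshold=0.95, min_sample=20) -> bool:
    """Done Criterion 2: strictly >95% accuracy on at least 20 test invoices."""
    return len(results) >= min_sample and accuracy(results) > threshold

# Note: at a sample of 20, 19/20 is exactly 95% and fails a strict ">95%"
# bar; that is consistent with the ">19 match" wording in the checklist.
```

Running a harness like this during Stage 1 (internal verification) gives you the evidence before the client demo, rather than during it.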
## Automation Tiers

Not all automations are created equal. Some are safe to let the agent run unsupervised. Some should never run without a human in the loop. Use this tiering system for every automation.
| Tier | Risk Level | Supervision | Promotion Timeline | Example |
|---|---|---|---|---|
| Low | Low risk, easily reversible | Self-promote after 3 days of clean operation | 3 days | Email sorting, report generation, data lookups |
| Medium | Moderate risk, some consequences | Human approves each action for 2 weeks, then auto with audit log | 2 weeks | Invoice processing, appointment scheduling, client communications |
| High | High risk, significant consequences | Human approves for minimum 2 weeks, never fully unsupervised | 2 weeks minimum, always monitored | Financial transactions, legal documents, compliance filings |
| Restricted | Critical risk, irreversible consequences | Always draft-only, human executes | Never promotes | Tax filings, wire transfers, contract signing, regulatory submissions |
Low tier promotion (3 days):

- Days 1-3: Agent performs task, human reviews every output
- Day 4: If zero errors, agent runs autonomously with daily summary
- If any errors: reset the counter, fix the issue, restart the 3-day window

Medium tier promotion (2 weeks):

- Week 1: Agent prepares action, human approves before execution
- Week 2: Same, with audit log review at the end of each day
- Week 3+: Agent executes autonomously, human reviews audit log daily
- If any error at any stage: drop back to full approval mode

High tier (never fully autonomous):

- Weeks 1-2: Agent prepares, human approves every action
- Week 3+: Agent prepares, human approves every action
- Always: Human spot-checks are mandatory, not optional; their frequency can decrease but never reach zero

Restricted tier (always draft-only):

- Always: Agent prepares draft/recommendation
- Always: Human reviews, modifies if needed, and executes
- The agent never has credentials or access to execute directly
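The four promotion schedules reduce to a small state machine: clean supervised days accumulate toward promotion, any error resets to full approval, and High/Restricted tiers never promote. The class below is an illustrative sketch (class and method names are assumptions, not part of the SOP):

```python
class TierTracker:
    """Tracks supervised operation and decides when an automation may promote."""

    # Clean days required before autonomous operation; High and Restricted
    # tiers are absent because they never run fully unsupervised.
    PROMOTION_DAYS = {"low": 3, "medium": 14}

    def __init__(self, tier: str):
        self.tier = tier.lower()
        self.clean_days = 0
        self.autonomous = False

    def record_day(self, errors: int) -> None:
        """Log one day of operation; any error drops back to full approval."""
        if errors > 0:
            self.clean_days = 0
            self.autonomous = False
            return
        self.clean_days += 1
        required = self.PROMOTION_DAYS.get(self.tier)
        if required is not None and self.clean_days >= required:
            # Promotion still implies daily summaries (Low) or audit log
            # review (Medium), not zero oversight.
            self.autonomous = True
```

A Low-tier tracker promotes after three clean days and demotes on the first error; a Restricted-tier tracker never promotes no matter how long it runs cleanly.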
During Round 4 (Solution Design), assign a tier to every automation:
| Automation | Tier | Rationale |
|-----------|------|-----------|
| Email triage | Low | Easily reversible, low consequences |
| Invoice entry | Medium | Financial data, but correctable |
| Client billing | High | Direct financial impact on client |
| Tax filing | Restricted | Regulatory, irreversible, penalties |
## Compounding Value

Don't sell what the agent does on Day 1. Sell where the client will be after 6 weeks of compounding agent learning.
Day 1: The agent knows nothing about the client. It follows templates. It asks for approval on everything. It's slower than doing it yourself.
Week 2: The agent knows the client's preferences. It suggests before being asked. Approval rate is 80%+ on first try. It catches things humans miss.
Week 4: The agent handles routine tasks autonomously. It only escalates edge cases. The client has forgotten what it was like to do those tasks manually.
Week 6: The agent has built a memory of the business. It anticipates seasonal patterns. It cross-references data across systems. It's doing things the client never thought to automate because it sees patterns they can't.
Use this table in client conversations:
| Dimension | Day 1 | Week 6 |
|---|---|---|
| Knowledge | Template only | Deep client-specific memory |
| Speed | Slower than manual | 10-100x faster than manual |
| Accuracy | 80% (needs review) | 95%+ (exceeds human) |
| Autonomy | Everything needs approval | Routine tasks run independently |
| Scope | 1-2 narrow tasks | Expanding to adjacent workflows |
| Value | "Interesting experiment" | "Can't imagine going back" |
"I want to be honest with you — on Day 1, this agent is going to feel like a new employee who needs training. It'll be slower and it'll ask a lot of questions. That's normal. But unlike a human employee, this agent never forgets what it learns, it works 24/7, and every week it gets faster and more accurate. By Week 6, most clients tell us they can't imagine going back. That's what we're building toward."
## Model Staggering

Clients often ask: "Why not just use the most powerful AI for everything?"
Different tasks need different levels of AI capability:
| Task Complexity | Model Tier |
|---|---|
| Data lookups, formatting | Fast/cheap model (Haiku-class) |
| Email drafting, summaries | Mid-tier model (Sonnet-class) |
| Strategic analysis, complex reasoning | Top-tier model (Opus-class) |
"Think of it like staffing. You wouldn't hire a senior partner to file paperwork, and you wouldn't ask an intern to negotiate a contract. We use the right level of AI for each task — fast and cheap for routine work, powerful and thoughtful for complex decisions. This keeps your API costs manageable while making sure the important stuff gets the best thinking."
Without staggering: all tasks use Opus → ~$X/month in API costs
With staggering: 80% Haiku, 15% Sonnet, 5% Opus → roughly ~$X/5 per month
Same quality for complex tasks, with ~80% overall cost reduction
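The routing and the cost arithmetic can be sketched as follows. The relative per-task costs (1x/5x/25x) and the task-type labels are illustrative assumptions, not published pricing:

```python
ROUTES = {
    "lookup": "haiku-class",     # data lookups, formatting
    "draft": "sonnet-class",     # email drafting, summaries
    "analysis": "opus-class",    # strategic analysis, complex reasoning
}

# Assumed relative per-task costs; real pricing varies by tokens and model.
RELATIVE_COST = {"haiku-class": 1.0, "sonnet-class": 5.0, "opus-class": 25.0}

def route(task_type: str) -> str:
    """Pick the model tier for a task; default to the mid-tier when unsure."""
    return ROUTES.get(task_type, "sonnet-class")

def blended_cost(mix: dict[str, float]) -> float:
    """Average relative cost per task for a given traffic mix."""
    return sum(share * RELATIVE_COST[model] for model, share in mix.items())

all_opus = blended_cost({"opus-class": 1.0})
staggered = blended_cost(
    {"haiku-class": 0.80, "sonnet-class": 0.15, "opus-class": 0.05}
)
# staggered / all_opus comes out well under 0.2 with these assumed costs;
# the exact ratio depends on the real cost spread between tiers.
```

The defaulting choice matters: an unrecognized task type falls back to the mid-tier model rather than the cheapest, trading a little cost for safety.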
## Onboarding Checklist

Pre-deployment:
□ Round 1: Pain Points (Day -14)
□ Round 2: Workflow Mapping (Day -10)
□ Round 3: Constraints (Day -7)
□ Round 4: Solution Design (Day -5)
□ Agreement signed, hardware sourced (Day -3)
Deployment:
□ Layer 1-4 deployment (Day 0-1)
□ Layer 5: Day-1 onboarding (Day 2)
Post-deployment:
□ Daily check-in (Week 1)
□ Tier promotion reviews (Day 3, Week 2)
□ Twice-weekly check-in (Weeks 2-4)
□ Week-6 review and expansion planning
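The Day-offset checklist converts mechanically into calendar dates once Day 0 (deployment start) is fixed. A minimal sketch, with milestone names taken from the checklist above and an arbitrary example start date:

```python
from datetime import date, timedelta

# Day offsets relative to deployment start (Day 0), from the checklist above.
MILESTONES = {
    "Round 1: Pain Points": -14,
    "Round 2: Workflow Mapping": -10,
    "Round 3: Constraints": -7,
    "Round 4: Solution Design": -5,
    "Agreement signed, hardware sourced": -3,
    "Layer 1-4 deployment": 0,
    "Day-1 onboarding": 2,
}

def schedule(day_zero: date) -> dict[str, date]:
    """Map each milestone to a calendar date for a given deployment start."""
    return {name: day_zero + timedelta(days=off)
            for name, off in MILESTONES.items()}

plan = schedule(date(2025, 3, 3))
# e.g. plan["Round 1: Pain Points"] -> 2025-02-17, two weeks before Day 0
```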