Install
openclaw skills install agent-see

Convert any website, SaaS product, or API into a live, discoverable, agent-executable integration. This skill encodes the complete Agent-See workflow — from conversion through deployment, publication, backend connection, and ongoing maintenance.
Operating principle: Never stop at artifact generation. After every step, present the go-live status dashboard and proactively ask what's needed next. The conversion is the starting point, not the finish line.
Before running any agent-see command:
Check that agent-see is installed:

agent-see --version 2>/dev/null

If it is not, install it with pip:

pip install git+https://github.com/Danielfoojunwei/Convert-any-SaaS-application-into-an-Agentic-interface.git --break-system-packages

Or with uv:
uv python install 3.11
uv venv --python 3.11 .venv
source .venv/bin/activate
uv pip install git+https://github.com/Danielfoojunwei/Convert-any-SaaS-application-into-an-Agentic-interface.git
playwright install chromium
Transform a website URL, SaaS product URL, or OpenAPI specification into a grounded agent bundle.
| User provides | Source type | Example |
|---|---|---|
| A website URL | Website | https://example.com |
| A SaaS product URL | SaaS | https://app.example.com |
| A local file path ending in .json, .yaml, or .yml | OpenAPI spec | ./openapi.json |
If the source type is ambiguous, ask the user to clarify.
Set the output directory. Default to ./agent-output unless the user specifies otherwise.
agent-see convert <source> --output <output-dir> --verbose
Read and summarize the key outputs:
- agent_card.json — confirm identity and discovery metadata
- AGENTS.md — verify the agent/operator guidance is accurate
- openapi.yaml — check the API contract was captured correctly
- skills/ — enumerate the business action wrappers generated
- OPERATIONAL_READINESS.md — review execution boundaries

Present a structured summary:
If conversion fails:
playwright install chromium

After conversion succeeds, immediately:
Run verification: agent-see verify <output-dir>/proof/proof.json
Present the go-live status dashboard showing what's done and what remains:
| Step | Status | What's needed |
|---|---|---|
| Conversion | ✅ Done | — |
| Verification | ⏳ Running | — |
| Launch layer | ❌ Not started | Business name, domain, contact info |
| Runtime deployment | ❌ Not started | Hosting platform choice |
| Discovery publishing | ❌ Not started | Website access |
| Backend connection | ❌ Not started | Real API/database details |
| Maintenance | ❌ Not started | Schedule preferences |
Ask: "Should we continue with generating the launch layer, or do you want to deploy the server first?"
Core bundle files:
| Artifact | Description |
|---|---|
| mcp_server/ | Callable tool surface for agents. Contains server.py, deployment configs (Dockerfile, docker-compose.yml, fly.toml, railway.json, render.yaml), and runtime metadata (route_map.json, tool_metadata.json, runtime_state.json). |
| openapi.yaml | Machine-readable API contract. All discovered endpoints, request/response schemas, auth requirements, rate limits. |
| agent_card.json | Identity and discovery metadata. Agent name, description, capabilities, supported protocols, trust signals. |
| AGENTS.md | Human- and agent-readable guidance. What the integration does, how to use it, operational boundaries, caveats. |
| OPERATIONAL_READINESS.md | Execution boundaries. Auth requirements, state-changing operations, rate limits, known limitations. |
| skills/*.md | Individual business action wrappers (e.g., list_products.md, add_to_cart.md). |
| skills/workflows/*.md | Composite workflow files chaining multiple skills (e.g., purchase_flow.md). |
| proof/ | Grounding evidence: screenshots, DOM snapshots, API response samples, cross-validation reports. |
| capability_graph.json | Structured graph of capabilities and their relationships. |
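Before moving on, it can help to confirm the bundle is complete. A minimal pre-flight sketch, assuming the artifact names in the table above match your output exactly:

```python
import os

# Core artifacts from the table above; adjust names if your bundle differs.
REQUIRED_ARTIFACTS = [
    "mcp_server",
    "openapi.yaml",
    "agent_card.json",
    "AGENTS.md",
    "OPERATIONAL_READINESS.md",
    "skills",
    "proof",
    "capability_graph.json",
]

def missing_artifacts(output_dir: str) -> list[str]:
    """Return the names of required artifacts absent from output_dir."""
    return [
        name for name in REQUIRED_ARTIFACTS
        if not os.path.exists(os.path.join(output_dir, name))
    ]
```

An empty result means the bundle looks structurally complete; anything listed should trigger a re-run of the conversion.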
Assess conversion quality across coverage, fidelity, and hallucination metrics.
agent-see verify <output-dir>/proof/proof.json
| Metric | High | Medium | Low |
|---|---|---|---|
| Coverage | >80% — most actions captured | 50-80% — significant gaps | <50% — re-run required |
| Fidelity | Faithfully represents source | Simplified/approximated | Significant deviations |
| Hallucination | None detected | Weak grounding evidence | Fabricated capabilities — must remove |
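The coverage thresholds above can be expressed as a small helper. This is a sketch; the exact boundary handling (inclusive vs. exclusive cutoffs) is an assumption, not the CLI's own logic:

```python
def coverage_band(coverage_pct: float) -> str:
    """Bucket a coverage percentage per the thresholds above.

    Boundary handling (> 80 high, >= 50 medium) is assumed.
    """
    if coverage_pct > 80:
        return "high"
    if coverage_pct >= 50:
        return "medium"
    return "low"  # re-run required
```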
Present a structured summary:
| Issue type | Action |
|---|---|
| Low coverage | Re-run conversion with adjusted scope or better access |
| Low fidelity | Re-run with verbose mode |
| Hallucinations | Remove fabricated entries from skills/ and update agent_card.json |
| Missing proof | Re-run conversion to regenerate grounding evidence |
Generate the public discovery and trust layer from an existing grounded agent bundle.
Confirm a grounded agent bundle exists:
ls <output-dir>/agent_card.json
Check for or generate a launch intake file:
agent-see launch init ./launch-intake.json \
--name "<business name>" \
--domain "https://<domain>" \
--business-type <type> \
--summary "<description>" \
--contact-email "<email>" \
--support-url "https://<domain>/support" \
--agent-see-output-dir <output-dir> \
--verbose
Key fields to confirm with the user: business name and URL, public page locations, trust signals, contact information.
agent-see launch sync ./launch-intake.json --verbose
agent-see launch check <launch-output> --bundle <output-dir>
Read and summarize:
| Artifact | What to check |
|---|---|
| launch/llms.txt | Accurately describes public pages |
| launch/agents.md | Truthful agent access instructions |
| launch/reference_layer/ | Supporting usage, limitation, trust, policy pages |
| launch/launch_report.md | Readiness warnings |
| launch/surface_alignment.json | Public claims match runtime capabilities |
| launch/update_register.md | Maintenance plan |
| Artifact | Purpose |
|---|---|
| llms.txt | Model-facing guide at website root. Tells LLMs which pages are most important. Follows the llms.txt convention. |
| agents.md | Canonical "how to use this integration" document. Actions, connection details, boundaries, contacts. |
| Reference layer | Usage guide, limitations, trust signals, policy page. |
| Launch report | Internal readiness assessment. Checks artifacts generated, claims supported. |
| Surface alignment JSON | Machine-readable comparison: each claim tagged aligned, partial, or misaligned. |
| Update register | Maintenance plan: trigger conditions, commands, expected outputs. |
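For illustration, a minimal llms.txt following the convention (H1 title, blockquote summary, H2 sections of links) might look like this — all names and URLs below are hypothetical:

```
# Example Business

> One-line summary of what the business offers and who it serves.

## Key pages

- [Products](https://example.com/products): full catalog with prices
- [Agents](https://example.com/agents): how agents connect and act
- [Support](https://example.com/support): contact and escalation paths
```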
After launch artifacts are generated, immediately:
Present the updated go-live status dashboard
Ask: "Launch layer is ready. These files need to be published on your website. Should we:"
Recommend this order: deploy runtime → publish discovery (with real endpoint URL) → connect real data
Wrap a grounded agent bundle as a distributable plugin for a target harness.
Confirm bundle exists:
ls <output-dir>/agent_card.json
Default to Claude workspace format. Supported targets:
| Harness | Description | Artifact mix |
|---|---|---|
| Claude | Claude workspaces / Cowork plugins | MCP runtime or OpenAPI, AGENTS guidance, plugin guide |
| Manus | Manus-style agents | MCP runtime, AGENTS guidance, skills, readiness outputs |
| OpenClaw | OpenClaw-like orchestrators | Runtime metadata, agent card, route map, connector guide |
| Generic | Any other agent system | OpenAPI, AGENTS guidance, plugin manifest, starter kit |
agent-see plugin sync <output-dir> --launch-output <launch-output> --verbose
| Artifact | Purpose |
|---|---|
| plugins/plugin_manifest.json | Machine-readable inventory of the grounded bundle |
| plugins/PLUGIN_GUIDE.md | Step-by-step usage for the target harness |
| plugins/connectors/ | Harness-specific connection guides (claude_workspace.md, manus.md, openclaw.md, generic.md) |
| plugins/starter_kit/ | Templates: plugin_template.md, skill_template.md, connector_template.md |
Claude Workspaces / Cowork:
plugin-name/
├── .claude-plugin/
│ └── plugin.json
├── skills/
│ └── skill-name/
│ └── SKILL.md
├── .mcp.json (if runtime endpoint exists)
└── README.md
MCP integration in .mcp.json:
{
"mcpServers": {
"agent-see-runtime": {
"command": "python",
"args": ["${CLAUDE_PLUGIN_ROOT}/mcp_server/server.py"],
"env": { "API_BASE_URL": "${API_BASE_URL}" }
}
}
}
Manus-style agents: MCP endpoint URL + tool_metadata.json + skills + AGENTS.md
OpenClaw-like orchestrators: agent_card.json registered with discovery service + route_map.json + capability_graph.json + thin protocol connector
Generic harnesses: OpenAPI spec + AGENTS.md guidance + plugin manifest + starter kit templates
Proactive end-to-end orchestrator that guides a business owner from a completed conversion all the way to a live, discoverable, agent-executable integration.
The conversion is the starting point, not the finish line. A converted business needs to become easy for agents and LLMs to retrieve, understand, trust, and act on. This means operating two layers simultaneously: the human-facing website and the machine-facing agent integration surface.
Read the bundle directory and present a status dashboard:
| Step | Status | What's needed |
|---|---|---|
| Conversion | ✅/❌ | Source URL or OpenAPI spec |
| Verification | ✅/❌ | Run agent-see verify |
| Launch layer | ✅/❌ | Business name, domain, contact info |
| Runtime deployment | ✅/❌ | Hosting platform choice |
| Discovery publishing | ✅/❌ | Website access or hosting details |
| Backend connection | ✅/❌ | Real database/API credentials |
| Structured data | ✅/❌ | Website template access |
| Maintenance loop | ✅/❌ | Schedule preferences |
Ask: "Which of these do you want to tackle next, or should we go through them in order?"
For each extracted capability, suggest a corresponding URL:
| Capability | Suggested URL | User intent it answers |
|---|---|---|
| list_products | /products or /menu | "What do you sell?" |
| get_product_details | /products/{slug} | "Tell me about this item" |
| add_to_cart | /cart | "I want to buy this" |
| submit_checkout | /checkout | "I'm ready to pay" |
| get_order_status | /orders/{id} | "Where's my order?" |
Ask: "Do these URLs exist on your site? Which ones need to be created or updated?"
For each canonical URL, the top of the page must immediately answer: who the offer is for, what action can be completed, what inputs are required, what constraints exist, and what the next step is.
Ask: "Do you want help rewriting any of these pages? I can generate answer-first content based on your converted capabilities."
Trigger Skill 6: Deploy Runtime. Ask:
Trigger Skill 8: Publish Discovery. Ask:
Help the user create a public /agents page from launch-output/agents.md. Ask:
Generate JSON-LD snippets based on business type:
- Organization for the homepage
- Product for product pages
- FAQPage for FAQ
- BreadcrumbList for navigation

Ask about Search Console verification, Google Business Profile, and contact detail consistency across web properties.
Trigger Skill 7: Connect Backend. Ask:
Trigger Skill 9: Maintain. Ask:
Run agent-see launch check for surface alignment.

Present a final go-live report.
After ANY skill completes (convert, verify, launch, package), immediately present this status dashboard and offer to continue with the next incomplete step. Do not stop at artifact generation.
Deploy the generated MCP server so agents can call it over the network.
Confirm mcp_server/server.py exists in the bundle output.
Ask the user for every required setting before deploying.
Mandatory:
| Setting | Environment variable | What to ask |
|---|---|---|
| Target API URL | TARGET_URL | "What's the base URL of your real API that this server should proxy to?" |
| Port | PORT | Default 8000 unless specified |
Authentication (ask which applies):
| Method | Variables | What to ask |
|---|---|---|
| Bearer token | AUTH_TOKEN | "Does your API use a bearer token?" |
| Custom header | AUTH_HEADER_NAME, AUTH_HEADER_VALUE | "Custom auth header name and value?" |
| None | — | "Is the API publicly accessible?" |
Operational limits:
| Setting | Variable | Default |
|---|---|---|
| Request timeout | REQUEST_TIMEOUT_MS | 30000 |
| Max retries | MAX_RETRIES | 3 |
| Session TTL | SESSION_TTL_SECONDS | 3600 |
| Max sessions | MAX_SESSIONS | 100 |
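The operational limits above can be loaded from the environment with the documented defaults as fallbacks. A sketch (key names are taken from the table; the generated server's actual loading code may differ):

```python
import os

def runtime_limits() -> dict:
    """Read operational limits from the environment, falling back to
    the documented defaults."""
    return {
        "request_timeout_ms": int(os.environ.get("REQUEST_TIMEOUT_MS", "30000")),
        "max_retries": int(os.environ.get("MAX_RETRIES", "3")),
        "session_ttl_seconds": int(os.environ.get("SESSION_TTL_SECONDS", "3600")),
        "max_sessions": int(os.environ.get("MAX_SESSIONS", "100")),
    }
```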
Docker (Local or VPS):
cd <output-dir>/mcp_server
cp .env.example .env
# User fills in .env values
docker-compose up -d
Fly.io:
cd <output-dir>/mcp_server
fly launch --no-deploy
fly secrets set TARGET_URL=<value> AUTH_TOKEN=<value>
fly deploy
Railway:
cd <output-dir>/mcp_server
railway init
railway up
Render: Push mcp_server/ to a Git repo and connect via Render dashboard.
curl <deployed-url>/health
curl <deployed-url>/tools

Call list_products and verify real data is returned.

After deployment, record the live URL. Ask: "Deployment is live at <url>. Should we continue with publishing discovery files?"
Configuration:
- TARGET_URL set and correct
- Request timeout configured (REQUEST_TIMEOUT_MS)
- Retry limit configured (MAX_RETRIES)
- Session TTL configured (SESSION_TTL_SECONDS)
- Session cap configured (MAX_SESSIONS)

Runtime Safety:
- /health endpoint responding

Execution Resilience:
Session Management:
Approval Governance:
Known Limitations:
Wire the generated MCP server to real data sources.
Read route_map.json and tool_metadata.json to understand what endpoints exist.
Ask the user:
"Where does your real product/service data live?"
"Is there API documentation or an OpenAPI spec for your backend?"
"What authentication does your backend require?"
"Do you have a staging/test environment, or should we work against production?"
Existing REST API:
Update TARGET_URL to point to the real API
Map each MCP tool to corresponding real endpoint:
| MCP Tool | Generated route | Real API endpoint |
|---|---|---|
| list_products | POST /tools/list_products | GET /api/products |
| add_to_cart | POST /tools/add_to_cart | POST /api/cart/items |
Adjust request/response transformations if schemas differ
Set authentication headers
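The tool-to-endpoint mapping above can be kept as a small translation table the proxy consults. A sketch with hypothetical tool and path names — the real table comes from route_map.json and your backend:

```python
# Hypothetical override table: MCP tool name -> (method, path) on the
# real API. Tools without an override keep the generated route.
ROUTE_OVERRIDES = {
    "list_products": ("GET", "/api/products"),
    "add_to_cart": ("POST", "/api/cart/items"),
}

def resolve_route(tool_name: str) -> tuple[str, str]:
    """Return the (method, path) the proxy should call for a tool,
    defaulting to the generated POST /tools/<name> route."""
    return ROUTE_OVERRIDES.get(tool_name, ("POST", f"/tools/{tool_name}"))
```

Keeping the overrides in data rather than code makes it easy to re-run the conversion without losing the backend mapping.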
Database Direct:
Modify server.py to query the database directly.

E-commerce Platform (Shopify, WooCommerce, Square):
Spreadsheet or Static File:
For each capability:
After all tools connected, run the complete workflow end-to-end.
Place generated discovery and trust files on the business's actual web surface.
Ask before proceeding:
"Do you have access to update robots.txt and sitemap.xml?"

1. robots.txt
Ask: "Do you want AI search crawlers to find your site?"
User-agent: *
Allow: /
User-agent: OAI-SearchBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: Applebot
Allow: /
Sitemap: https://<domain>/sitemap.xml
Ask: "Allow AI training crawlers too, or only search/user-directed access?"
2. sitemap.xml
Generate XML sitemap with all canonical task URLs and accurate lastmod dates.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://<domain>/</loc>
<lastmod>YYYY-MM-DD</lastmod>
<changefreq>weekly</changefreq>
</url>
</urlset>
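Generating the sitemap can be scripted from the canonical task URLs. A minimal sketch that renders (loc, lastmod) pairs into the XML shape above:

```python
from xml.sax.saxutils import escape

def render_sitemap(entries: list[tuple[str, str]]) -> str:
    """Render (loc, lastmod) pairs as a sitemap.xml document."""
    body = "\n".join(
        "  <url>\n"
        f"    <loc>{escape(loc)}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n"
        "  </url>"
        for loc, lastmod in entries
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{body}\n"
        "</urlset>"
    )
```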
3. llms.txt
Update the generated llms.txt with actual deployed runtime endpoint and real domain URLs. Place at https://<domain>/llms.txt.
4. /agents Page
Customize launch-output/agents.md with real endpoint URLs. Ask: "HTML page, markdown, or CMS-pasteable content?"
5. Reference Layer Pages
coverage.md, limitations.md, pricing_eligibility.md, support_escalation.md, change_policy.md
Ask: "Separate pages or combined into one reference page?"
6. Structured Data (JSON-LD)
Generate snippets based on business type:
Homepage — Organization:
{
"@context": "https://schema.org",
"@type": "Organization",
"name": "<business name>",
"url": "https://<domain>",
"logo": "https://<domain>/logo.png",
"contactPoint": {
"@type": "ContactPoint",
"email": "<support email>",
"contactType": "customer service"
}
}
Product pages — Product:
{
"@context": "https://schema.org",
"@type": "Product",
"name": "<name>",
"description": "<description>",
"offers": {
"@type": "Offer",
"price": "<price>",
"priceCurrency": "<currency>",
"availability": "https://schema.org/InStock"
}
}
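If many product pages need markup, the snippet above can be generated from product data. A sketch (field names are assumptions; map them to your catalog's actual schema):

```python
import json

def product_jsonld(name: str, description: str, price: str, currency: str) -> str:
    """Serialize a schema.org Product snippet matching the shape above."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }, indent=2)
```

Embed the output in a `<script type="application/ld+json">` tag on each product page.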
7. IndexNow Setup
Ask: "Want search engines notified automatically when content changes?" If yes, generate key file and provide submission URL format.
| Platform | How to publish |
|---|---|
| Netlify/Vercel | Add files to public/ or static/, deploy normally |
| WordPress | File manager plugin or FTP for root files; WP admin for pages; structured data via plugin |
| Shopify | Theme editor for JSON-LD in theme.liquid; pages for /agents content |
| Custom server | Place in static/public directory; add routes for /agents and reference pages |
curl -I https://<domain>/robots.txt
curl -I https://<domain>/sitemap.xml
curl -I https://<domain>/llms.txt
curl -I https://<domain>/agents
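The four spot checks above can be scripted. A sketch that just constructs the discovery URLs (pair it with any HTTP client to check status codes):

```python
# The four discovery surfaces checked above.
DISCOVERY_PATHS = ["/robots.txt", "/sitemap.xml", "/llms.txt", "/agents"]

def discovery_urls(domain: str) -> list[str]:
    """Build the URLs to spot-check after publishing."""
    base = domain.rstrip("/")
    return [base + path for path in DISCOVERY_PATHS]
```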
Keep the agent-facing surface aligned with the real business.
Ask: "What changed in your business?"
| Change type | What needs refreshing |
|---|---|
| Products/menu added or removed | Re-run conversion, update sitemap, product pages, IndexNow |
| Prices changed | Update product pages, schema markup, sitemap lastmod, IndexNow |
| Policies changed | Update policy pages, reference layer, sitemap lastmod |
| Workflows changed | Re-run conversion, redeploy runtime, update /agents page |
| Auth or access changed | Re-run conversion, update MCP server config, update /agents page |
| Contact/support info changed | Update Organization markup, Business Profile, support pages |
| New capability added | Re-run conversion, refresh all downstream layers |
| Branding or domain change | Update all URLs in llms.txt, sitemap, agents page, structured data |
1. Re-run Conversion (if business logic changed):
agent-see convert <source> --output <output-dir> --verbose
agent-see verify <output-dir>/proof/proof.json
2. Refresh Launch Layer:
agent-see launch sync <launch-intake.json> --bundle <output-dir> --output <launch-output>
agent-see launch check <launch-output> --bundle <output-dir>
3. Refresh Plugin Layer:
agent-see plugin sync <output-dir> --launch-output <launch-output>
4. Redeploy Runtime (if server code changed)
5. Update Published Discovery Files:
6. Verify Alignment:
agent-see launch check

| Cadence | What to review |
|---|---|
| Weekly | Broken links, stale prices, runtime uptime, support details, primary CTAs |
| Monthly | Search Console signals, sitemap freshness, robots.txt, schema validity |
| After every material change | Re-run Agent-See, redeploy runtime, update discovery files, IndexNow |
| Quarterly | Reassess customer prompts, add task pages, review competitor gaps, expand reference pages |
Signs the agent surface has drifted:
If drift detected, run the full re-sync protocol immediately.
A business becomes strong in the prompt economy when it wins four decisions inside a model pipeline: whether the business is retrieved, whether it is selected, whether it is trusted, and whether it can be executed immediately.
| Layer | Goal | Business owner must do | Agent-See provides |
|---|---|---|---|
| Discovery | Get retrieved | Publish crawlable, text-rich, task-shaped pages and discovery files | Runtime artifacts and machine-usable operational surface |
| Selection | Get recommended | Make use cases, fit, constraints, pricing, policies explicit | Structured description of actions and workflows |
| Trust | Get cited as safe | Maintain entity data, support info, policy pages, visible consistency | Clear workflow boundaries, auth notes, approval-sensitive actions |
| Execution | Let agents act | Deploy the runtime and expose clear connection guidance | MCP/OpenAPI/runtime outputs and harness-facing artifacts |
| Maintenance | Stay fresh | Update pages, feeds, schema, sitemaps, re-run conversion | Regeneration path for the executable surface |
Every high-value page must immediately answer: who the offer is for, what action can be completed, what inputs are required, what constraints exist, and what the next step is.
| File | Location | Purpose |
|---|---|---|
| robots.txt | Website root | Control crawler access intentionally |
| sitemap.xml | Website root | Complete URL inventory with accurate lastmod |
| llms.txt | Website root | Curated guide for models to find highest-value pages |
| /agents page | Public docs or site | Connection instructions for agents |
| Page type | Schema type | Why |
|---|---|---|
| Homepage | Organization | Official identity, logo, contacts |
| Product pages | Product | Price, availability, ratings, shipping |
| FAQ | FAQPage | Direct answers to recurring objections |
| Navigation | BreadcrumbList | Page hierarchy and topical relationships |
Do not invent capabilities in the launch or plugin layer. Extract the real business surface first, then wrap it with thin public guidance and thin harness-specific packaging.