Install
openclaw skills install nova-act-usability

AI-orchestrated usability testing with digital twin personas powered by Amazon Nova Act. The agent generates personas, runs tests to collect raw data, interprets responses to determine goal achievement, and produces the final report.
This skill requires an Amazon Nova Act API key.
| Requirement | Details |
|---|---|
| API Key | Nova Act API key from AWS Console |
| Config Location | ~/.openclaw/config/nova-act.json |
| Format | {"apiKey": "your-nova-act-api-key-here"} |
| Dependencies | pip3 install nova-act pydantic playwright |
| Browser | playwright install chromium (~300MB download) |
What this skill accesses:

- `~/.openclaw/config/nova-act.json` (your API key)
- `./nova_act_logs/` (trace files with screenshots)
- `./test_results_adaptive.json`
- `./nova_act_usability_report.html`

What trace files contain:

Recommendations:
Agent-Driven Interpretation: The script no longer interprets responses. YOU (the agent) must:

- Read each step's `raw_response`
- Set `goal_achieved` and `overall_success`

No hardcoded regex. No extra API calls. The agent doing the work is already running.
When a user asks to test a website, YOU (the AI agent) must complete ALL 4 phases:
| Phase | What Happens | Who Does It |
|---|---|---|
| 1. Setup | Generate personas, run test script | Agent + Script |
| 2. Collect | Script captures raw Nova Act responses | Script |
| 3. Interpret | Read JSON, determine goal_achieved for each step | Agent |
| 4. Report | Generate HTML report with interpreted results | Agent |
⚠️ The script does NOT interpret responses or generate the final report. You must do phases 3-4.
You're already an AI (Claude) - use your intelligence to generate contextual personas!
import subprocess
import os
import sys
import json
import tempfile
# Step 1: Check dependencies
try:
    import nova_act
    print("✅ Dependencies ready")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("    pip3 install nova-act pydantic playwright")
    print("    playwright install chromium")
    sys.exit(1)
# Step 2: Verify Nova Act API key
config_file = os.path.expanduser("~/.openclaw/config/nova-act.json")
with open(config_file, 'r') as f:
    config = json.load(f)

# Reject both a missing key and the placeholder value
if not config.get('apiKey') or config.get('apiKey') == 'your-nova-act-api-key-here':
    print(f"⚠️ Please add your Nova Act API key to {config_file}")
    sys.exit(1)
# Step 3: YOU (the AI agent) generate personas
# Example for https://www.pgatour.com/ (golf tournament site)
website_url = "https://www.pgatour.com/"
personas = [
    {
        "name": "Marcus Chen",
        "archetype": "tournament_follower",
        "age": 42,
        "tech_proficiency": "high",
        "description": "Avid golf fan who follows multiple tours and tracks player stats",
        "goals": [
            "Check current tournament leaderboard",
            "View recent tournament results",
            "Track favorite player performance"
        ]
    },
    {
        "name": "Dorothy Williams",
        "archetype": "casual_viewer",
        "age": 68,
        "tech_proficiency": "low",
        "description": "Occasional golf viewer who watches major tournaments",
        "goals": [
            "Find when the next tournament is",
            "See who won recently",
            "Understand how to watch online"
        ]
    }
]
# Step 4: Save personas and run test
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    json.dump(personas, f, indent=2)
    personas_file = f.name
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
test_script = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")
# Run with AI-generated personas
subprocess.run([sys.executable, test_script, website_url, personas_file])
# Clean up temp file
os.unlink(personas_file)
Persona Template:

{
  "name": "FirstName LastName",
  "archetype": "descriptive_identifier",
  "age": 30,
  "tech_proficiency": "low|medium|high",
  "description": "One sentence about who they are",
  "goals": [
    "First goal relevant to this website",
    "Second goal relevant to this website",
    "Third goal relevant to this website"
  ]
}
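The required fields above can be checked with a small stdlib-only validator. This is a hypothetical helper for illustration, not something shipped with the skill:

```python
REQUIRED_FIELDS = {"name", "archetype", "age", "tech_proficiency",
                   "description", "goals"}

def validate_persona(p: dict) -> list[str]:
    """Return a list of problems with a persona dict (empty list = valid)."""
    # Report any documented field that is absent
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - p.keys())]
    # tech_proficiency is constrained to the three documented levels
    if p.get("tech_proficiency") not in ("low", "medium", "high"):
        errors.append("tech_proficiency must be low, medium, or high")
    # goals must be a non-empty list of strings
    goals = p.get("goals")
    if not isinstance(goals, list) or not goals:
        errors.append("goals must be a non-empty list")
    return errors
```

Running this over each generated persona before saving the temp file catches malformed entries before the test script ever starts.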
If user specifies a persona description, pass it as a string:
# User: "Test PGA Tour site as a golf enthusiast"
website_url = "https://www.pgatour.com/"
user_persona = "golf enthusiast who follows tournaments closely"
subprocess.run([sys.executable, test_script, website_url, user_persona])
# Script will parse this and create personas automatically
Let the script guess personas based on basic category keywords:
# Generic, less contextual personas
subprocess.run([sys.executable, test_script, website_url])
✅ Advantages:
❌ What to avoid:
Analyze the website:

- `.gov` → citizens, `.edu` → students/faculty

Create diverse personas:

Generate realistic goals:

Examples by industry:
Users can trigger this skill by saying:
The AI will automatically:
NEW in this version: The skill now tests complete user journeys, not just information-finding!
E-Commerce:
Flight/Hotel Booking:
Social Media:
Account Signup:
Form Submission:
The skill will NEVER:
The skill will ALWAYS:
You (the AI agent) must analyze test results! The script collects raw responses but does NOT interpret them.
The script returns raw Nova Act responses like:
"No" - Is there a pricing link?"I don't see any documentation" - Is there docs?"Amazon Nova Act" - What is the headline?You must determine if each response means the goal was achieved:
| Response | Goal Achieved? |
|---|---|
"No" | ❌ NOT achieved |
"I don't see..." | ❌ NOT achieved |
"Not found" | ❌ NOT achieved |
"Yes, I found..." | ✅ Achieved |
"Amazon Nova Act" (content) | ✅ Achieved |
"The pricing is $29/mo" | ✅ Achieved |
After the test script runs, read the JSON results. Each step contains:
{
  "step_name": "check_nav_for_pricing",
  "prompt": "Is there a pricing link in the navigation?",
  "expected_outcome": "Find pricing in navigation",
  "raw_response": "No",
  "api_success": true,
  "needs_agent_analysis": true,
  "attempts": [
    {
      "prompt": "Is there a pricing link in the navigation?",
      "response": "No",
      "approach": "original"
    }
  ]
}
Key fields you analyze:

- `raw_response`: The actual Nova Act response - YOU determine what it means
- `api_success`: Did the API call work? (script handles this)
- `needs_agent_analysis`: Always true - your cue to interpret
- `attempts`: All attempts made (script tries up to 3 alternative approaches)

For each step, determine:

- `goal_achieved`: Did the response indicate success or failure?
- `friction_level`: How hard was it? (attempts.length > 1 = friction)
- `observations`: UX insights from the response

Analysis example:
Step 1: "Is there a pricing link?"
→ Response: "No" (1 attempt)
→ Goal achieved: NO (explicit negative)
→ Friction: HIGH (not discoverable)
Step 2: "What is the headline?"
→ Response: "Amazon Nova Act" (1 attempt)
→ Goal achieved: YES (actual content)
→ Friction: LOW (immediately visible)
Step 3: "Find documentation"
→ Response: "I found a docs link in the footer" (3 attempts)
→ Goal achieved: YES (found eventually)
→ Friction: MEDIUM (required multiple approaches)
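The attempt-count heuristic in the examples above can be expressed as a small helper. This is one possible mapping, offered as a starting point; as the interpreting agent you can always override it with judgment:

```python
def friction_level(goal_achieved: bool, num_attempts: int) -> str:
    """Map a step outcome to a friction label, per the analysis examples."""
    if not goal_achieved:
        return "HIGH"      # goal never reached: not discoverable
    if num_attempts > 1:
        return "MEDIUM"    # reached, but needed alternative approaches
    return "LOW"           # reached on the first attempt

print(friction_level(False, 1))  # HIGH
print(friction_level(True, 3))   # MEDIUM
print(friction_level(True, 1))   # LOW
```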
The response_interpreter.py provides helpers if you want structured prompts:
from scripts.response_interpreter import (
    format_for_agent_analysis,
    create_agent_prompt_for_interpretation,
    create_agent_prompt_for_alternative
)

# Format all results for analysis
formatted = format_for_agent_analysis(results)

# Get interpretation prompt for one step
prompt = create_agent_prompt_for_interpretation(step_result)

# Get retry prompt when goal not achieved
retry_prompt = create_agent_prompt_for_alternative(
    original_prompt="Is there a pricing link?",
    failed_response="No",
    attempt_number=2
)
The script does NOT generate the final report automatically. You (the agent) must:

1. Read `test_results_adaptive.json` with raw data
2. Set `goal_achieved: true/false` based on `raw_response`
3. Set `overall_success: true/false` on each test

Step-by-step code for the agent to execute:
import json
import os
import sys
# Add skill scripts to path
sys.path.insert(0, os.path.expanduser("~/.openclaw/skills/nova-act-usability/scripts"))
from enhanced_report_generator import generate_enhanced_report
# 1. Read raw results
with open('test_results_adaptive.json', 'r') as f:
    results = json.load(f)

# 2. YOU (the agent) interpret each step
for test in results:
    goals_achieved = 0
    for step in test.get('steps', []):
        raw = step.get('raw_response', '')
        # AGENT INTERPRETS: does this response indicate the goal was achieved?
        # You decide based on the response content and expected outcome.
        # Example interpretations:
        #   "No" → goal_achieved = False
        #   "Leaderboard, News, Schedule, Players" → goal_achieved = True (content found)
        #   "Yes" → goal_achieved = True
        #   "I don't see any..." → goal_achieved = False
        step['goal_achieved'] = ...  # YOU set this based on your interpretation
        if step['goal_achieved']:
            goals_achieved += 1

    # 3. Set overall success (e.g., >= 50% of goals achieved)
    total = len(test.get('steps', []))
    test['goals_achieved'] = goals_achieved
    test['overall_success'] = (goals_achieved / total >= 0.5) if total > 0 else False

# 4. Save interpreted results
with open('test_results_adaptive.json', 'w') as f:
    json.dump(results, f, indent=2)
# 5. Generate report with interpreted data
page_analysis = {
    'title': '...',  # From your earlier analysis
    'purpose': '...',
    'navigation': [...]
}
all_traces = []
for r in results:
    all_traces.extend(r.get('trace_files', []))
report_path = generate_enhanced_report(page_analysis, results, all_traces)
print(f"Report: {report_path}")
Why the agent must interpret:
Nova Act is a browser automation tool, NOT a reasoning engine.
The Claude agent (you) does all reasoning about:
Nova Act just:
# DON'T ask Nova Act to think about personas
nova.act("As a beginner user, can you easily find the documentation?")
nova.act("Would a business professional find the pricing clear?")
nova.act("Is this task accomplishable for someone with low technical skills?")
# DO: keep prompts to simple browser actions
nova.act("Click the Documentation link in the navigation")
nova.act("Find and click a link containing 'Pricing'")
nova.act_get("What text is displayed in the main heading?")
nova.act_get("List the navigation menu items visible on this page")
You (the AI) are the orchestrator. This skill provides:
- `references/nova-act-cookbook.md` - Best practices, workflow patterns, and safety guidelines (automatically loaded at test start)
- `scripts/run_adaptive_test.py` - Main execution script with workflow detection
- `scripts/dynamic_exploration.py` - Generates workflow-appropriate test strategies
- `scripts/nova_session.py` - Nova Act wrapper
- `scripts/enhanced_report_generator.py` - Auto-generated HTML reports

Execution Flow:
Before running ANY test, check if dependencies are installed:
# Check if nova-act is installed
python3 -c "import nova_act" 2>/dev/null
if [ $? -ne 0 ]; then
    echo "Dependencies not installed. Please run:"
    echo "    pip3 install nova-act pydantic playwright"
    echo "    playwright install chromium"
    exit 1
fi
# Check API key
if ! grep -q '"apiKey":.*[^"]' ~/.openclaw/config/nova-act.json; then
    echo "⚠️ Please add your Nova Act API key to ~/.openclaw/config/nova-act.json"
    exit 1
fi
Or use Python to check:
import sys
# Check if nova-act is installed
try:
    import nova_act
    print("✅ Dependencies already installed")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("    pip3 install nova-act pydantic playwright")
    print("    playwright install chromium")
    sys.exit(1)
When a user asks for usability testing:
# Find the skill directory
SKILL_DIR=~/.openclaw/skills/nova-act-usability
# Run the adaptive test script
python3 "$SKILL_DIR/scripts/run_adaptive_test.py" "https://example.com"
# This will:
# - Create nova_act_logs/ in current directory
# - Create test_results_adaptive.json in current directory
# - Create nova_act_usability_report.html in current directory
# - Provide 60-second status updates during test
Recommended timeout: 30 minutes (1800 seconds)
Full usability tests with 3 personas × 3 goals = 9 tests can take 10-20+ minutes depending on:
Graceful shutdown: If the test is interrupted (timeout, SIGTERM, SIGINT), it will save partial results to `test_results_adaptive.json`.

For shorter tests: Use fewer personas or goals:
# Quick test with 1 persona
personas = [{"name": "Test User", "archetype": "casual", ...}]
pip3 install nova-act pydantic playwright && playwright install chromium

When user requests usability testing:
import subprocess
import os
# Get skill directory
skill_dir = os.path.expanduser("~/.openclaw/skills/nova-act-usability")
if not os.path.exists(skill_dir):
    # Try workspace location
    skill_dir = os.path.join(os.getcwd(), "nova-act-usability")
script_path = os.path.join(skill_dir, "scripts", "run_adaptive_test.py")
# Run test
result = subprocess.run(
    ["python3", script_path, "https://example.com"],
    env={**os.environ, "NOVA_ACT_SKIP_PLAYWRIGHT_INSTALL": "1"},
    capture_output=True,
    text=True
)
print(result.stdout)
The adaptive test script (run_adaptive_test.py) handles:
For each persona + task combination:
from scripts.nova_session import nova_session
from nova_act import BOOL_SCHEMA
import time
observations = []

with nova_session(website_url) as nova:
    start_time = time.time()

    # Initial navigation
    observations.append({
        "step": "navigate",
        "action": f"Loaded {website_url}",
        "success": True,
        "notes": "Initial page load"
    })

    # Execute task step-by-step (AI-orchestrated).
    # Break into small act() calls based on cookbook guidance.
    # Example: "Find pricing information" task

    # Step 1: Look for pricing link
    nova.act("Look for a link or button for pricing, plans, or subscription")
    found = nova.act_get(
        "Is there a visible pricing or plans link?",
        schema=BOOL_SCHEMA
    )
    observations.append({
        "step": "find_pricing_link",
        "action": "Search for pricing navigation",
        "success": found.parsed_response,
        "notes": "Easy to find" if found.parsed_response else "Not immediately visible - UX friction"
    })

    if found.parsed_response:
        # Step 2: Navigate to pricing
        nova.act("Click on the pricing or plans link")

        # Step 3: Analyze pricing page
        is_clear = nova.act_get(
            "Is the pricing information clearly displayed with prices and features?",
            schema=BOOL_SCHEMA
        )
        observations.append({
            "step": "view_pricing",
            "action": "Accessed pricing page",
            "success": is_clear.parsed_response,
            "notes": "Clear pricing display" if is_clear.parsed_response else "Pricing unclear or confusing"
        })
    else:
        # Alternative path - try search
        nova.act("Look for a search function")
        # ... continue orchestrating

    duration = time.time() - start_time

# Document overall task result
task_success = all(obs["success"] for obs in observations if obs["success"] is not None)
results.append({
    "persona": persona_name,
    "task": task_description,
    "success": task_success,
    "duration": duration,
    "observations": observations,
    "friction_points": [obs for obs in observations if not obs.get("success")]
})
After all tests:
import json
from scripts.enhanced_report_generator import generate_enhanced_report
# Save results
with open("test_results_adaptive.json", "w") as f:
json.dump(results, f, indent=2)
# Generate HTML report
report_path = generate_enhanced_report(
    page_analysis=page_analysis,
    results=test_results
)
print(f"Report: {report_path}")
The AI should decide how to break down each task based on:
Low-tech persona example:
# More explicit, step-by-step
nova.act("Look for a button labeled 'Contact' or 'Contact Us'")
nova.act("Click on the Contact button")
result = nova.act_get("Is there a phone number or email address visible?")
High-tech persona example:
# Test efficiency features
nova.act("Look for keyboard shortcuts or quick access features")
nova.act("Try to use search (Ctrl+K or Cmd+K)")
After EVERY act() call, analyze:
Document friction immediately in observations.
Adapt act() prompts to persona characteristics:
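For example, a hypothetical helper (not part of the skill; the agent normally phrases prompts directly) that rewords the same base action per tech proficiency:

```python
def adapt_prompt(base_action: str, tech_proficiency: str) -> str:
    """Reword a browser action to match a persona's tech level."""
    if tech_proficiency == "low":
        # Spell out exactly what to look for, one element at a time
        return (f"Slowly {base_action}. Look for large, "
                "clearly labeled buttons or links.")
    if tech_proficiency == "high":
        # Allow shortcuts and power-user paths
        return f"{base_action}, using search or keyboard shortcuts if faster."
    return base_action  # medium: use the base phrasing as-is


print(adapt_prompt("find the contact page", "low"))
```

The adapted string is then passed straight to `nova.act(...)`, as in the low-tech and high-tech persona examples above.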
`references/nova-act-cookbook.md` - MUST READ before starting any test. Contains best practices for:

`references/persona-examples.md` - Template personas with detailed profiles:

`scripts/nova_session.py` - Thin wrapper providing the Nova Act session primitive:
with nova_session(url, headless=True, logs_dir="./logs") as nova:
    nova.act("action")
    result = nova.act_get("query", schema=Schema)
`scripts/enhanced_report_generator.py` - Compiles observations into an HTML usability report with trace file links.

`assets/report-template.html` - Professional HTML template for usability reports.
This skill requires dependencies that must be installed before use.
ALWAYS check if dependencies are installed before running tests:
# Quick dependency check
try:
    import nova_act
    print("✅ Dependencies installed")
except ImportError:
    print("📦 Dependencies not installed. Please run:")
    print("    pip3 install nova-act pydantic playwright")
    print("    playwright install chromium")
    print("")
    print("This will take 2-3 minutes to download browsers (~300MB)")
Step 1: Install Python packages
pip3 install nova-act pydantic playwright
Step 2: Install Playwright browser
playwright install chromium
Step 3: Configure API key
mkdir -p ~/.openclaw/config
echo '{"apiKey": "your-key-here"}' > ~/.openclaw/config/nova-act.json
Replace `your-key-here` with your actual Nova Act API key.

User request: "Test example.com for elderly users"
AI orchestration:
- `references/nova-act-cookbook.md`
- `references/persona-examples.md`

The AI decides every step. The skill just provides tools and guidance.
nova-act-usability/
├── SKILL.md # This file
├── README.md # User documentation
├── skill.json # Skill manifest
├── scripts/
│ ├── run_adaptive_test.py # Main orchestrator (accepts URL arg)
│ ├── nova_session.py # Session wrapper
│ ├── enhanced_report_generator.py # HTML report generator
│ └── trace_finder.py # Extract trace file paths
├── references/
│ ├── nova-act-cookbook.md # Best practices
│ └── persona-examples.md # Template personas
└── assets/
└── report-template.html # HTML template
When you run a test, these files are created in your current working directory:
./
├── nova_act_logs/ # Nova Act trace files
│ ├── act_<id>_output.html # Session recordings
│ └── ...
├── test_results_adaptive.json # Raw test results
└── nova_act_usability_report.html # Final report
All paths are relative - works from any installation location!