Install
openclaw skills install curriculum-generator

Intelligent educational curriculum generation system with strict step enforcement and human escalation policies.

When the user includes "debug mode" or "show searches" in their curriculum request:
Enable verbose output:
Example debug output:
[DEBUG] Executing neo-ddg-search("Python basics tutorial for beginners")
[DEBUG] Search returned 10 results
[DEBUG] Extracting URLs...
[DEBUG] Found: https://www.youtube.com/watch?v=rfscVS0vtbw
[DEBUG] Found: https://www.freecodecamp.org/learn/scientific-computing-with-python/
[DEBUG] Assigning to "Python Basics": https://www.youtube.com/watch?v=rfscVS0vtbw
This skill requires the following other skills to be installed:
clawhub install neobotjan2026/neo-ddg-search

At the start of curriculum generation, verify that neo-ddg-search is available:
IF neo-ddg-search skill NOT found:
🚨 DEPENDENCY MISSING
The curriculum generator requires the neo-ddg-search skill for finding educational resources.
Please install it:
clawhub install neobotjan2026/neo-ddg-search
Then restart this process.
⚠️ GENERATION CANNOT PROCEED without search capability
STOP
Before starting resource research, perform a test search:
Test: neo-ddg-search("Python tutorial test")
IF successful:
✅ Search tool operational
Proceeding with resource research...
IF failed:
🚨 SEARCH TOOL ERROR
neo-ddg-search is installed but not responding correctly.
Error: {error_details}
Please check:
• neo-ddg-search skill is properly installed
• Internet connection is available
• No firewall blocking DuckDuckGo
⚠️ Cannot proceed with resource research
ESCALATE
This skill helps generate customized educational curricula for PODs (Points of Delivery) through a structured, step-enforced process with mandatory human escalation when needed.
~/.openclaw/skills/curriculum-generator/memory/
~/.openclaw/skills/curriculum-generator/outputs/
~/.openclaw/skills/curriculum-generator/templates/

This skill activates when the user:
You MUST ask a human whenever you are forced to guess, infer, or trade off risk. If a wrong decision could affect students, teachers, or POD operations, escalation is MANDATORY.
You MUST stop and escalate to human if ANY of these occur:
A. Missing or Ambiguous Inputs
B. Teacher Capability Risk
C. Operational Infeasibility
D. High-Risk Curriculum Changes
E. Contradictory Stakeholder Signals
When escalating, use this EXACT format:
🚨 HUMAN INPUT REQUIRED
Reason: [specific trigger]
Impact if Unresolved: [clear consequence]
Options (if any):
1. [option 1]
2. [option 2]
Awaiting Decision From: [POD Leader / Curriculum Owner]
Before anything else, determine:
If unclear, STOP and ask user to confirm. Do NOT proceed without classification.
Collect ALL of the following using the structured form:
Section 0: Request Metadata
⚠️ If Scenario Type not selected → HARD STOP
Section 1: Target Audience Profile (MANDATORY)
⚠️ If age/grade missing → HARD STOP and escalate
Section 2: POD & Infrastructure Details (MANDATORY)
⚠️ If lab hours or computer count missing → HARD STOP and escalate
Section 3: Teacher Capability & Availability (MANDATORY)
⚠️ Any "No" in capability assessment → Potential escalation
Simulate structured stakeholder inputs based on provided data:
Then perform Teacher Capability Assessment:
Evaluate curriculum on these dimensions:
Then perform Operational Feasibility Check:
End with: Status: Draft Assessment – Pending Human Review
Explicitly define:
Generate:
Lab Planning (Mandatory):
END OF STEP B2 - MANDATORY ACTION BEFORE PROCEEDING:
BEFORE moving to Step B3, execute this command sequence:
1. Review the curriculum structure you just created
2. Identify ALL topics that will appear in the final output
3. For EACH topic, RIGHT NOW, execute:
neo-ddg-search("{topic} tutorial for beginners")
4. Extract the first valid educational URL from results
5. Store it in a resource_map dictionary:
resource_map["{topic}"] = "https://..."
6. Verify resource_map has entries for ALL topics
7. Only then proceed to Step B3
Example:
Topic: "Python Lists"
Execute: neo-ddg-search("Python Lists tutorial for beginners")
Result: Found https://www.youtube.com/watch?v=W8KRzm-HUcc
Store: resource_map["Python Lists"] = "https://www.youtube.com/watch?v=W8KRzm-HUcc"
DO NOT SKIP THIS. DO NOT PROCEED WITHOUT COMPLETING THIS.
## **Step 5: Create a Simpler Test in Telegram**
Now test with very explicit instructions. In Telegram, send:
Create a tiny test curriculum:
IMPORTANT INSTRUCTIONS:
Start now.
MANDATORY: End of Step B2 Resource Collection
Before proceeding to Step B3, you MUST complete this:
STOP HERE.
Before moving to Step B3, execute this sequence:
1. List all topics you just created: [topic1, topic2, topic3, ...]
2. Create an empty dictionary: resource_links = {}
3. FOR EACH topic:
a. Execute: result = neo-ddg-search("{topic} tutorial for beginners")
b. Look at the result text
c. Find all text that starts with "https://"
d. Extract the complete URL
e. Store: resource_links[topic] = that_url
f. Print: "Stored for {topic}: {that_url}"
4. Verify resource_links has ALL topics
5. Print the complete resource_links dictionary
6. ONLY THEN proceed to Step B3
Example for "HTML Basics":
Execute: neo-ddg-search("HTML Basics tutorial for beginners")
Result contains: "...https://www.youtube.com/watch?v=pQN-pnXPaVg..."
Extract: "https://www.youtube.com/watch?v=pQN-pnXPaVg"
Store: resource_links["HTML Basics"] = "https://www.youtube.com/watch?v=pQN-pnXPaVg"
Print: "Stored for HTML Basics: https://www.youtube.com/watch?v=pQN-pnXPaVg"
DO THIS FOR EVERY SINGLE TOPIC BEFORE MOVING ON.
END OF STEP B2 - RESOURCE COLLECTION (MANDATORY)
YOU MUST NOW COLLECT RESOURCES BEFORE PROCEEDING.
Execute this EXACT sequence:
1. Create empty dictionary: resource_links = {}
2. List all topics from Step B2
3. For EACH topic, execute:
bash_tool: python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py "{topic} tutorial for beginners" 5
4. From the output, find all text starting with "https://"
5. Take the first URL that contains "youtube.com", or if none, the first URL
6. Store it: resource_links[topic] = that_url
7. Print: "Collected for {topic}: {that_url}"
8. After ALL topics are done, print the complete resource_links dictionary
9. Verify every topic has a URL
10. ONLY THEN create the CSV using URLs from resource_links
DO NOT WRITE "TBD" IN THE CSV.
USE THE URLS FROM resource_links DICTIONARY.
If you cannot find a URL for a topic, STOP and ESCALATE.
Do not proceed to CSV generation without URLs for all topics.
## **Save and Test**
Save the file, then in Telegram:
reload skills
Then test with a VERY simple example:
Create curriculum:
After you build the structure:
Start.
## **What to Watch For**
You should see output like:
✅ Course structure complete
🔍 Starting resource search...
Topic: HTML Basics
Executing: python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py "HTML Basics tutorial for beginners" 5
[Results shown]
[1] HTML Tutorial | https://www.youtube.com/watch?v=...
[2] Learn HTML | https://www.w3schools.com/html/
Found 2 URLs
Selected: https://www.youtube.com/watch?v=...
✅ Stored for HTML Basics: https://www.youtube.com/watch?v=...
Resource Links Dictionary:
HTML Basics: https://www.youtube.com/watch?v=...
📋 CSV Preview:
Covered Topics | Resource Link
HTML Basics | https://www.youtube.com/watch?v=...
Writing file...
After completing Step B2 structure, execute this EXACT process:
For EACH topic:
STEP 1: Search
python3 ~/.openclaw/workspace/skills/neo-ddg-search/scripts/search.py "{topic} tutorial for beginners" 5
STEP 2: Look at output and extract FIRST URL
Find https:// → copy everything from https:// until the next space

STEP 3: Store it
resource_links["{topic}"] = "the_url_you_found"
Then IMMEDIATELY move to next topic. Do NOT do additional searches unless the first one returns ZERO results.
RULE: Execute ONE search per topic. Extract ONE URL. Move on.
Do NOT:
DO:
Print: "🔍 Resource Research Starting..."
Print: ""
resource_links = {}
topics = [list of all topics from Step B2]
For topic in topics:
Print: f"Topic: {topic}"
# Execute search (ONE TIME ONLY)
result = bash_tool(f'python3 ~/.openclaw/workspace/skills/neo-ddg-search/scripts/search.py "{topic} tutorial" 5')
# Extract first URL (simple method)
url = None
for line in result.split('\n'):
if 'https://' in line:
start = line.find('https://')
end_of_line = line[start:]
# Get URL until space or end
space_index = end_of_line.find(' ')
if space_index > 0:
url = end_of_line[:space_index]
else:
url = end_of_line.strip()
break # Take FIRST URL and stop
if url:
resource_links[topic] = url
Print: f" ✅ {url}"
else:
resource_links[topic] = "MANUAL_RESEARCH_NEEDED"
Print: f" ⚠️ No URL found - marked for manual research"
# IMMEDIATELY continue to next topic
Print: ""
Print: "✅ Resource research complete"
Print: f"Collected {len(resource_links)} resource links"
Print: ""
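The extraction loop above can be condensed into a small runnable helper; `extract_first_url` is an illustrative name, not part of the skill's API, and the sample line below is shaped like the search tool's output rather than taken from it:

```python
def extract_first_url(search_output: str):
    """Return the first https:// URL found in the search output, or None."""
    for line in search_output.split("\n"):
        start = line.find("https://")
        if start != -1:
            rest = line[start:]
            # URL ends at the first space, or at end of line
            space_index = rest.find(" ")
            return rest[:space_index] if space_index > 0 else rest.strip()
    return None

sample = "[1] HTML Tutorial | 2023 | Video | YouTube https://www.youtube.com/watch?v=abc123 more text"
print(extract_first_url(sample))  # → https://www.youtube.com/watch?v=abc123
```

Because it returns on the first matching line, this implements the "take the FIRST URL and stop" rule by construction.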
Maximum time for resource research: 2 minutes total
If you're taking longer than 2 minutes for resource collection, you're doing something wrong. This should be fast:
# Good examples:
resource_links["Python Basics"] = "https://datascientest.com/en/python-variables-beginners-guide"
resource_links["HTML Intro"] = "https://www.w3schools.com/html/"
# Acceptable if no URL found:
resource_links["Obscure Topic"] = "MANUAL_RESEARCH_NEEDED"
# NEVER acceptable:
resource_links["Topic"] = "TBD" # ❌
resource_links["Topic"] = "" # ❌
Do NOT pause or wait. Immediately proceed to CSV generation.
Print: "📄 Generating CSV with collected resources..."
csv_data = []
for topic in curriculum_structure:
resource_url = resource_links.get(topic, "MANUAL_RESEARCH_NEEDED")
csv_row = {
"Curriculum ID": curriculum_id,
"File Name": file_name,
"Target POD Type": pod_type,
"Clusters": clusters,
"Content Type": content_type,
"Covered Topics": topic,
"Owner": owner,
"Resource Link": resource_url, # ← Use collected URL
"Document Creation Date": date,
"Last Updated On": date
}
csv_data.append(csv_row)
write_csv(csv_data)
Print: "✅ CSV file generated"
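The `write_csv` step can be sketched with Python's standard library. This is a minimal sketch assuming the column names from the pseudocode above; the output file name and sample row are illustrative:

```python
import csv

def write_csv(csv_data, path="curriculum_v1.0.csv"):
    """Write collected rows to CSV; assumes every row has the same keys."""
    fieldnames = [
        "Curriculum ID", "File Name", "Target POD Type", "Clusters",
        "Content Type", "Covered Topics", "Owner", "Resource Link",
        "Document Creation Date", "Last Updated On",
    ]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(csv_data)

write_csv([{
    "Curriculum ID": "CUR_001", "File Name": "demo.csv",
    "Target POD Type": "demo", "Clusters": "A", "Content Type": "Video",
    "Covered Topics": "Python Basics", "Owner": "owner",
    "Resource Link": "https://example.com",
    "Document Creation Date": "2026-02-08", "Last Updated On": "2026-02-08",
}])
```

`newline=""` is required on Windows to avoid blank lines between CSV rows.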
Topics: ["Python Basics", "Python Functions"]
🔍 Resource Research Starting...
Topic: Python Basics
Executing search...
[Results received]
Found URL: https://datascientest.com/en/python-variables-beginners-guide
✅ https://datascientest.com/en/python-variables-beginners-guide
Topic: Python Functions
Executing search...
[Results received]
Found URL: https://www.w3schools.com/python/python_functions.asp
✅ https://www.w3schools.com/python/python_functions.asp
✅ Resource research complete
Collected 2 resource links
📄 Generating CSV with collected resources...
✅ CSV file generated: Python_Basics_v1.0.csv
Total time: ~15 seconds
Accept whatever URL you find in the first search.
Priority is:
If the first search returns W3Schools instead of YouTube, that's FINE. Use it and move on.
Only escalate if:
Do NOT escalate if:
If user requests DEBUG MODE:
[DEBUG] Topic: Python Basics
[DEBUG] Command: python3 ~/.openclaw/workspace/skills/neo-ddg-search/scripts/search.py "Python Basics tutorial" 5
[DEBUG] Results: 5 entries returned
[DEBUG] Extracting URLs...
[DEBUG] Line 1: Contains 'https://datascientest.com/...'
[DEBUG] Extracted: https://datascientest.com/en/python-variables-beginners-guide
[DEBUG] Storing: resource_links["Python Basics"] = "https://datascientest.com/..."
[DEBUG] ✅ Complete - moving to next topic
MANDATORY: After Step B2, execute resource collection IMMEDIATELY
After completing Step B2 course structure:
1. DO NOT pause
2. DO NOT ask for confirmation
3. IMMEDIATELY start resource collection
4. Use the Simple 3-Step Process above
5. Complete ALL topics within 2 minutes
6. Then IMMEDIATELY generate CSV
7. Do NOT wait between steps
This should be ONE CONTINUOUS FLOW:
Step B2 → Resource Collection → CSV Generation → Done
No breaks. No pauses. No waiting.
After Step B2 (course structure), do this:
Print: "🔍 Starting resource search for all topics..."
Print: ""
Initialize: resource_links = {}
For each topic in curriculum:
Print: "Topic: {topic}"
# Execute search
command = f"python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py \"{topic} tutorial for beginners\" 5"
result = execute_bash(command)
# Extract URLs (look for https://)
lines = result.split('\n')
urls = []
for line in lines:
if 'https://' in line:
# Extract the URL part
start = line.find('https://')
# Find end (space or newline)
rest = line[start:]
space_pos = rest.find(' ')
if space_pos > 0:
url = rest[:space_pos]
else:
url = rest.strip()
urls.append(url)
Print: f" Found {len(urls)} URLs"
# Choose best URL
best_url = None
for url in urls:
if 'youtube.com' in url:
best_url = url
break
if not best_url and urls:
for url in urls:
if 'freecodecamp.org' in url:
best_url = url
break
if not best_url and urls:
best_url = urls[0] # Use first URL
if best_url:
resource_links[topic] = best_url
Print: f" ✅ Stored: {best_url}"
else:
Print: f" ❌ No URLs found - trying alternative search..."
# Try one more time
alt_command = f"python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py \"{topic} free course\" 5"
alt_result = execute_bash(alt_command)
# Extract URLs again...
# [same extraction logic]
if alt_urls:
resource_links[topic] = alt_urls[0]
Print: f" ✅ Stored: {alt_urls[0]}"
else:
ESCALATE(f"No resources found for {topic}")
Print: ""
Print: "✅ Resource collection complete!"
Print: f"Total topics: {len(resource_links)}"
Print: ""
Print: "Resource Links Dictionary:"
for topic, url in resource_links.items():
Print: f" {topic}: {url}"
Specify:
Define:
Define:
If resource collection is taking longer than 3 minutes total:
STOP what you're doing and execute this:
Print: "⏱️ Resource collection timeout (3 min exceeded)"
Print: "Completing with available resources..."
For any topic without a resource:
resource_links[topic] = "MANUAL_RESEARCH_NEEDED"
Proceed immediately to CSV generation
Never get stuck searching indefinitely.
## **Test Again**
Save the file and test:
reload skills
Then:
Create curriculum:
DO NOT GET STUCK. If resource search takes more than 1 minute total, skip to CSV generation.
Show me when you start resource search and when you finish.
## **What Should Happen**
You should see:
🔍 Resource Research Starting...
Topic: Lesson 1 - Python Intro
✅ https://datascientest.com/en/python-variables-beginners-guide
Topic: Lesson 2 - Python Functions
✅ https://www.w3schools.com/python/python_functions.asp
✅ Resource research complete (15 seconds)
Collected 2 resource links
📄 Generating CSV...
✅ Done
**NOT this:**
Topic: Lesson 1
Executing search...
[Results]
Trying alternative search...
[More results]
Evaluating quality...
[STUCK HERE] ← Never gets to CSV
To search for resources, use this EXACT command:
python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py "YOUR QUERY HERE" 5
This returns search results with URLs that you must extract.
For EACH topic in the curriculum:
# Example for "HTML Basics"
python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py "HTML basics tutorial for beginners" 5
The output looks like this:
[1] Page Title | Year | Type | Site https://example.com/url1
Description text
[2] Another Title | Year | Type | Site https://another.com/url2
More description
Look for any text starting with https://
From the example above, extract:
https://example.com/url1
https://another.com/url2

Priority order:
1. youtube.com (first choice)
2. freecodecamp.org (second choice)
3. w3schools.com (third choice)

Store in a simple format:
Topic: HTML Basics
Resource: https://www.youtube.com/watch?v=...
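The priority order can be sketched as a small helper; `select_best_url` is an illustrative name and the sample URLs are placeholders:

```python
def select_best_url(urls):
    """Pick a URL by site priority: youtube.com, then freecodecamp.org,
    then w3schools.com; otherwise fall back to the first URL found."""
    for site in ("youtube.com", "freecodecamp.org", "w3schools.com"):
        for url in urls:
            if site in url:
                return url
    return urls[0] if urls else None

print(select_best_url([
    "https://www.w3schools.com/html/",
    "https://www.youtube.com/watch?v=abc",
]))  # → https://www.youtube.com/watch?v=abc
```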
Topic: "Python Lists"
Step 1 - Search:
python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py "Python lists tutorial for beginners" 5
Step 2 - Output Received:
[1] Python Lists Tutorial | 2023 | Video | YouTube https://www.youtube.com/watch?v=W8KRzm-HUcc
Learn Python lists from scratch
[2] Python Lists Guide | 2024 | Article | W3Schools https://www.w3schools.com/python/python_lists.asp
Complete guide to Python lists
Step 3 - Extract URLs:
https://www.youtube.com/watch?v=W8KRzm-HUcc
https://www.w3schools.com/python/python_lists.asp

Step 4 - Choose Best:
https://www.youtube.com/watch?v=W8KRzm-HUcc

Step 5 - Store:
resource_links["Python Lists"] = "https://www.youtube.com/watch?v=W8KRzm-HUcc"
MANDATORY CHECK:
Print: "🔍 Verifying resource links before CSV generation..."
Print: ""
csv_data = []
for row in curriculum_structure:
topic = row['topic']
# Get resource from resource_links dictionary
if topic in resource_links:
resource_url = resource_links[topic]
else:
Print: f"❌ ERROR: No resource link for '{topic}'"
STOP
# Verify it's a valid URL
if not resource_url.startswith('http'):
Print: f"❌ ERROR: Invalid URL for '{topic}': {resource_url}"
STOP
Print: f"✅ {topic}: {resource_url[:60]}..."
# Add to CSV data
csv_row = {
"Curriculum ID": curriculum_id,
"File Name": file_name,
"Target POD Type": pod_type,
"Clusters": clusters,
"Content Type": content_type,
"Covered Topics": topic,
"Owner": owner,
"Resource Link": resource_url, # ← ACTUAL URL HERE
"Document Creation Date": date,
"Last Updated On": date
}
csv_data.append(csv_row)
Print: ""
Print: "✅ All rows verified with valid URLs"
Print: "📄 Writing CSV file..."
write_csv_file(csv_data)
Show user the data:
Print: "📋 CSV Preview:"
Print: "=" * 80
Print: f"Covered Topics | Resource Link"
Print: "-" * 80
for row in csv_data:
topic = row["Covered Topics"]
url = row["Resource Link"]
Print: f"{topic[:30]:30} | {url}"
Print: "=" * 80
Print: ""
Print: "Writing to file..."
If after searching, a topic has no URL:
🚨 RESOURCE SEARCH FAILED - HUMAN INPUT REQUIRED
Topic: {topic_name}
Search 1: "python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py '{topic} tutorial for beginners' 5"
Result: {number} URLs found
None matched quality criteria
Search 2: "python3 ~/.openclaw/skills/neo-ddg-search/scripts/search.py '{topic} free course' 5"
Result: {number} URLs found
None matched quality criteria
Issue: Cannot find suitable free educational resources
Options:
1. Modify topic name to be more general
2. Accept lower-quality resource if available
3. Mark for manual research
Awaiting Decision From: Curriculum Owner
⚠️ CSV generation paused
Before writing ANY output file, you MUST complete this checklist:
STOP and verify:
FOR EACH row in the curriculum data:
topic = row['Covered Topics']
resource_link = row['Resource Link']
IF resource_link is empty OR resource_link == "TBD" OR resource_link == "N/A":
PRINT "⚠️ Missing resource link for: {topic}"
PRINT "🔍 Executing search now..."
# Execute neo-ddg-search immediately
search_query = f"{topic} tutorial for beginners"
EXECUTE: neo-ddg-search(search_query)
# Extract URLs from results
urls = EXTRACT_URLS_FROM_RESULTS()
IF urls found:
row['Resource Link'] = urls[0] # Use first result
PRINT "✅ Found resource: {urls[0]}"
ELSE:
# Try alternative search
search_query_2 = f"{topic} free course"
EXECUTE: neo-ddg-search(search_query_2)
urls = EXTRACT_URLS_FROM_RESULTS()
IF urls found:
row['Resource Link'] = urls[0]
PRINT "✅ Found resource: {urls[0]}"
ELSE:
ESCALATE("Cannot find resources for {topic}")
STOP_FILE_GENERATION
You MUST see output like:
Checking resource links before file generation...
✅ Row 1 - HTML Basics: Has resource link
✅ Row 2 - CSS Fundamentals: Has resource link
⚠️ Row 3 - JavaScript: Missing resource link
🔍 Executing search now...
Using neo-ddg-search: "JavaScript tutorial for beginners"
✅ Found resource: https://www.youtube.com/watch?v=...
✅ Row 3 - JavaScript: Resource link populated
All rows verified. Proceeding to file generation...
Verify all resource links are actual URLs:
FOR EACH resource_link in curriculum:
IF NOT resource_link.startswith("http"):
ERROR: "Invalid resource link format: {resource_link}"
STOP
total_topics = COUNT(curriculum rows)
topics_with_resources = COUNT(rows where Resource Link is valid URL)
PRINT "📊 Resource Link Status:"
PRINT " Total topics: {total_topics}"
PRINT " With resources: {topics_with_resources}"
PRINT " Missing: {total_topics - topics_with_resources}"
IF topics_with_resources < total_topics:
ESCALATE("Some topics still missing resources after search")
STOP
ELSE:
PRINT "✅ All topics have resource links. Safe to generate file."
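The count-and-verify step above can be run as real Python. This is a sketch over a list of row dictionaries shaped like the curriculum rows; the helper name and sample data are illustrative:

```python
def verify_resource_links(rows):
    """Return (total, valid) counts; raise if any Resource Link is not a URL."""
    total = len(rows)
    valid = sum(1 for r in rows
                if str(r.get("Resource Link", "")).startswith("http"))
    if valid < total:
        missing = [r["Covered Topics"] for r in rows
                   if not str(r.get("Resource Link", "")).startswith("http")]
        # In the skill this would trigger an ESCALATE; here we just raise
        raise ValueError(f"Topics missing resources: {missing}")
    return total, valid

rows = [
    {"Covered Topics": "HTML Basics",
     "Resource Link": "https://www.w3schools.com/html/"},
    {"Covered Topics": "CSS Fundamentals",
     "Resource Link": "https://developer.mozilla.org/"},
]
print(verify_resource_links(rows))  # → (2, 2)
```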
Before writing any file, build a complete resource map:
# Initialize resource map
resource_map = {}
# Get all topics from curriculum structure
all_topics = extract_all_topics_from_curriculum()
print(f"\n📚 Building resource map for {len(all_topics)} topics...\n")
# For each topic, search and extract URL
for topic in all_topics:
print(f"🔍 Topic: {topic}")
# Execute search
search_query = f"{topic} tutorial for beginners"
print(f" Searching: {search_query}")
search_results = neo_ddg_search(search_query)
# Extract URLs from results
urls_found = extract_urls_from_search_result(search_results)
print(f" Found {len(urls_found)} URLs")
# Select best URL
if urls_found:
best_url = select_best_url(urls_found)
resource_map[topic] = best_url
print(f" ✅ Selected: {best_url}\n")
else:
print(f" ⚠️ No URLs found, trying alternative search...")
# Try alternative search
alt_search = neo_ddg_search(f"{topic} free course")
urls_found_alt = extract_urls_from_search_result(alt_search)
if urls_found_alt:
best_url = select_best_url(urls_found_alt)
resource_map[topic] = best_url
print(f" ✅ Selected: {best_url}\n")
else:
resource_map[topic] = "ESCALATION_NEEDED"
print(f" ❌ No resources found - will escalate\n")
# Verify all topics have resources
missing_resources = [t for t, url in resource_map.items() if url == "ESCALATION_NEEDED"]
if missing_resources:
print(f"🚨 {len(missing_resources)} topics need escalation:")
for topic in missing_resources:
print(f" - {topic}")
ESCALATE("Resource search failed for some topics")
STOP
else:
print(f"✅ All {len(all_topics)} topics have resource links!")
print(f"📝 Proceeding to CSV generation...\n")
When creating each row in the CSV:
for week_num, lesson in enumerate(curriculum_structure, start=1):
topic = lesson['topic']
# Get resource link from resource_map
resource_link = resource_map.get(topic, "ERROR_NO_RESOURCE")
# Verify it's a valid URL
if not resource_link.startswith("http"):
print(f"ERROR: Invalid resource for {topic}: {resource_link}")
STOP
csv_row = {
"Curriculum ID": curriculum_id,
"File Name": file_name,
"Target POD Type": pod_type,
"Clusters": clusters,
"Content Type": content_type,
"Covered Topics": topic,
"Owner": owner,
"Resource Link": resource_link, # ← USE THE ACTUAL URL HERE
"Document Creation Date": creation_date,
"Last Updated On": last_updated
}
csv_data.append(csv_row)
Critical Check Before Writing:
# Final verification
print("\n🔍 Final CSV Data Verification:")
for i, row in enumerate(csv_data):
resource = row["Resource Link"]
topic = row["Covered Topics"]
if resource == "TBD" or not resource.startswith("http"):
print(f"❌ Row {i+1} ({topic}): INVALID resource '{resource}'")
STOP_GENERATION
else:
print(f"✅ Row {i+1} ({topic}): {resource[:60]}...")
print("\n✅ All rows verified - writing file...")
write_csv_file(csv_data)
CRITICAL: Execute this immediately before writing the file:
# Pseudo-code showing the exact logic needed
def prepare_curriculum_data_for_file():
"""
This function runs RIGHT BEFORE creating the CSV/Excel file.
It ensures NO 'TBD' values slip through.
"""
curriculum_rows = get_curriculum_structure()
print("\n🔍 FINAL RESOURCE LINK CHECK (Pre-File-Generation)")
print("=" * 50)
for i, row in enumerate(curriculum_rows):
topic = row['Covered Topics']
resource_link = row.get('Resource Link', '')
# Check if resource link is missing or placeholder
if not resource_link or resource_link in ['TBD', 'N/A', '', 'null', 'None']:
print(f"\n⚠️ Row {i+1}: '{topic}' has no resource link")
print(f" Current value: '{resource_link}'")
print(f" 🔍 Searching now with neo-ddg-search...")
# EXECUTE NEO-DDG-SEARCH HERE
search_results = neo_ddg_search(f"{topic} tutorial for beginners free")
# Extract URLs from search results
urls_found = extract_urls_from_search_results(search_results)
if urls_found and len(urls_found) > 0:
row['Resource Link'] = urls_found[0]
print(f" ✅ Updated with: {urls_found[0]}")
else:
# Try one more time with different query
print(f" 🔄 First search returned no URLs, trying again...")
search_results_2 = neo_ddg_search(f"{topic} learn online")
urls_found_2 = extract_urls_from_search_results(search_results_2)
if urls_found_2 and len(urls_found_2) > 0:
row['Resource Link'] = urls_found_2[0]
print(f" ✅ Updated with: {urls_found_2[0]}")
else:
# HARD STOP - escalate
print(f" ❌ FAILED: No resources found after 2 searches")
escalate_resource_failure(topic)
return None # Don't proceed to file generation
else:
print(f"✅ Row {i+1}: '{topic}' has resource: {resource_link[:50]}...")
print("\n" + "=" * 50)
print("✅ All resource links verified. Proceeding to file write.\n")
return curriculum_rows
# THEN write the file
verified_data = prepare_curriculum_data_for_file()
if verified_data is None:
print("🚨 File generation cancelled - resource verification failed")
# STOP HERE, don't write file
else:
write_csv_file(verified_data) # Only write if all checks passed
What the user should see:
🔍 FINAL RESOURCE LINK CHECK (Pre-File-Generation)
==================================================
✅ Row 1: 'HTML Basics' has resource: https://www.youtube.com/watch?v=pQN-pnXPaVg
✅ Row 2: 'CSS Fundamentals' has resource: https://www.youtube.com/watch?v=1Rs2ND1ryYc
⚠️ Row 3: 'JavaScript Intro' has no resource link
Current value: 'TBD'
🔍 Searching now with neo-ddg-search...
Using neo-ddg-search: "JavaScript Intro tutorial for beginners free"
✅ Updated with: https://www.youtube.com/watch?v=PkZNo7MFNFg
✅ Row 4: 'DOM Manipulation' has resource: https://www.freecodecamp.org/...
==================================================
✅ All resource links verified. Proceeding to file write.
📄 Writing file: Web_Dev_Fundamentals_v1.0.csv
✅ File generated successfully!
Generate .xlsx file with these columns:
Column Population Rules:
Curriculum Version: vX.X
Scenario: [Assessment / New Design]
Prepared By: Clawdbot
Status: Draft – Pending POD Leader / Authority Approval
Key Risks & Assumptions:
- [List all assumptions made]
- [List all identified risks]
After each curriculum generation:
~/.openclaw/skills/curriculum-generator/memory/curriculum_[REQUEST_ID].json
~/.openclaw/skills/curriculum-generator/memory/learnings.md
~/.openclaw/skills/curriculum-generator/memory/escalations.log

Memory file structure:
{
"request_id": "CUR_20260208_001",
"date": "2026-02-08",
"scenario": "new_design",
"pod_name": "Example POD",
"user": "madhur",
"inputs_collected": {},
"decisions_made": [],
"escalations": [],
"output_file": "curriculum_example_v1.0.xlsx",
"learnings": []
}
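Persisting this record can be sketched with the standard library; the `memory` directory name below is shortened for illustration and stands in for the skill's actual memory path:

```python
import json
import pathlib

record = {
    "request_id": "CUR_20260208_001",
    "date": "2026-02-08",
    "scenario": "new_design",
    "pod_name": "Example POD",
    "user": "madhur",
    "inputs_collected": {},
    "decisions_made": [],
    "escalations": [],
    "output_file": "curriculum_example_v1.0.xlsx",
    "learnings": [],
}

memory_dir = pathlib.Path("memory")  # illustrative, not the skill's real path
memory_dir.mkdir(exist_ok=True)
path = memory_dir / f"curriculum_{record['request_id']}.json"
path.write_text(json.dumps(record, indent=2), encoding="utf-8")
print(f"Wrote {path}")
```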
web_search: Research educational standards, best practices, resources
create_file: Generate Excel output files
bash_tool: File management, directory operations
view: Read memory files, check existing curricula

When triggered, inform the user:
🦞 Starting curriculum generation process...
This may take 5-10 minutes. I'll notify you when complete.
You can continue other tasks in the meantime.
Then proceed with research and generation in background.
You may proceed autonomously when:
Even then: Final output is still Draft – Pending Human Review
You must NEVER:
Transparency > automation speed
User: "Create a new curriculum for our POD"
Clawdbot:
🦞 Curriculum Generator Activated
I'll guide you through a structured questionnaire to design the curriculum.
First, let me confirm: Is this for:
A) Assessing an existing curriculum
B) Designing a new curriculum from scratch
Please select A or B.
User: "B - new curriculum"
Clawdbot:
Perfect! Starting new curriculum design process.
SECTION 0: REQUEST METADATA
- Request ID: CUR_20260208_001 (auto-generated)
- Date: February 8, 2026
- Requested By: Please provide your name and role
- POD Name: Please provide the POD name
[Continue with structured questions...]