Install
```
openclaw skills install jupyter-notebook-manager
```

A complete Jupyter notebook management system that enables Claude to create, execute, debug, analyze, and optimize Jupyter notebooks, with deep integration into data science workflows.
Use this skill when:

- The user mentions Jupyter-related keywords
- The user requests data analysis workflows
- The user provides `.ipynb` files
- The user needs notebook operations
When: User needs to create new notebooks for specific analysis tasks
Capabilities:
Example:
```
# User: "Create a data analysis notebook for sales data"
# → Generates structured notebook with:
#   - Import cells (pandas, numpy, matplotlib)
#   - Data loading section
#   - EDA section with common analyses
#   - Visualization section
#   - Summary section
```
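Since an `.ipynb` file is plain JSON, the generation step can be sketched with only the standard library (a real implementation would more likely use the `nbformat` package; the section titles and kernelspec below are illustrative assumptions):

```python
import json

def make_analysis_notebook(sections):
    """Build a minimal .ipynb (nbformat v4) dict: one import cell,
    then a markdown heading plus an empty code cell per section."""
    cells = [{"cell_type": "code", "metadata": {}, "outputs": [],
              "execution_count": None,
              "source": ["import pandas as pd\n",
                         "import numpy as np\n",
                         "import matplotlib.pyplot as plt\n"]}]
    for title in sections:
        cells.append({"cell_type": "markdown", "metadata": {},
                      "source": [f"## {title}\n"]})
        cells.append({"cell_type": "code", "metadata": {}, "outputs": [],
                      "execution_count": None, "source": []})
    return {"nbformat": 4, "nbformat_minor": 5,
            "metadata": {"kernelspec": {"name": "python3",
                                        "display_name": "Python 3",
                                        "language": "python"}},
            "cells": cells}

nb = make_analysis_notebook(["Data Loading", "EDA", "Visualization", "Summary"])
print(len(nb["cells"]))  # → 9  (1 import cell + 2 cells per section)
# json.dumps(nb) can then be written straight to a .ipynb file
```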
When: User needs to run notebooks and track execution
Capabilities:
Example:
```
# User: "Run analysis.ipynb with dataset=sales_2024.csv"
# → Executes notebook with parameters
# → Shows real-time progress
# → Captures all outputs
# → Reports execution time and status
```
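A production executor would run cells through a real kernel (e.g. papermill or nbclient); as a self-contained illustration of the execute-and-capture loop, here is a toy in-process stand-in using only the standard library:

```python
import io
import contextlib

def run_notebook_cells(nb_dict):
    """Execute code cells of a notebook dict in one shared namespace,
    capturing stdout per cell; stop at the first failing cell."""
    ns, results = {}, []
    for i, cell in enumerate(nb_dict["cells"]):
        if cell["cell_type"] != "code":
            continue
        src = "".join(cell["source"])
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(src, ns)  # state carries across cells, like a kernel
            results.append((i, "ok", buf.getvalue()))
        except Exception as exc:
            results.append((i, "error", repr(exc)))
            break
    return results

nb = {"cells": [
    {"cell_type": "code", "source": ["x = 21\n"]},
    {"cell_type": "markdown", "source": ["## doubling\n"]},
    {"cell_type": "code", "source": ["print(x * 2)\n"]},
]}
print(run_notebook_cells(nb))  # → [(0, 'ok', ''), (2, 'ok', '42\n')]
```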
When: Notebook execution fails or produces unexpected results
Capabilities:
Example:
```
# User: "My notebook fails at cell 5"
# → Analyzes error traceback
# → Checks variable values before error
# → Identifies root cause (e.g., missing column)
# → Suggests fix with corrected code
```
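Executed notebooks store error tracebacks in their cell outputs, so locating a failure needs no re-run. A minimal sketch of that lookup (the sample notebook dict and `KeyError` are illustrative):

```python
def find_first_error(nb_dict):
    """Return the 1-based code-cell number and error info of the
    first failed cell, or None if the notebook ran cleanly."""
    code_idx = 0
    for cell in nb_dict["cells"]:
        if cell["cell_type"] != "code":
            continue
        code_idx += 1
        for out in cell.get("outputs", []):
            if out.get("output_type") == "error":
                return {"cell": code_idx,
                        "ename": out["ename"],
                        "evalue": out["evalue"]}
    return None

nb = {"cells": [
    {"cell_type": "code", "outputs": []},
    {"cell_type": "code", "outputs": [
        {"output_type": "error", "ename": "KeyError",
         "evalue": "'revenue'", "traceback": []}]},
]}
print(find_first_error(nb))
# → {'cell': 2, 'ename': 'KeyError', 'evalue': "'revenue'"}
```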
When: User needs to understand notebook state and variables
Capabilities:
Example:
```
# User: "What variables are defined in this notebook?"
# → Lists all variables with types
# → Shows dataframe shapes and dtypes
# → Displays memory usage
# → Highlights key variables
```
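Inspecting a live kernel's namespace can be approximated by summarizing a variables dict; a real version would run inside the kernel and report DataFrame shapes and dtypes, but the core idea fits in a few stdlib lines (the sample namespace is made up):

```python
import sys

def summarize_namespace(ns):
    """List (name, type, approximate size) for user-defined variables,
    skipping underscore-prefixed internals."""
    rows = []
    for name, val in ns.items():
        if name.startswith("_"):
            continue
        rows.append((name, type(val).__name__, sys.getsizeof(val)))
    return sorted(rows)

ns = {"sales": [120, 340, 560], "label": "Q4", "_hidden": None}
for name, typ, size in summarize_namespace(ns):
    print(f"{name:<8}{typ:<6}{size} bytes")
```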
When: User wants to improve notebook code
Capabilities:
Example:
```
# User: "Optimize my data processing notebook"
# → Identifies slow loops that can be vectorized
# → Suggests caching for expensive operations
# → Recommends better pandas operations
# → Provides optimized code snippets
```
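Spotting vectorization candidates can start with simple pattern matching over cell sources; this sketch flags only the classic `for i in range(len(df))` idiom (a real analyzer would inspect the AST rather than use a regex):

```python
import re

# Heuristic: a Python-level loop over range(len(...)) on a frame-like
# object is a common candidate for vectorization.
LOOP_PATTERN = re.compile(r"for\s+\w+\s+in\s+range\(len\(\w+\)\)")

def flag_vectorizable(source):
    """Return (line_number, line) pairs matching the slow-loop pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if LOOP_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

cell = """total = 0
for i in range(len(df)):
    df.loc[i, 'new'] = df.loc[i, 'a'] + df.loc[i, 'b']
"""
print(flag_vectorizable(cell))
# → [(2, 'for i in range(len(df)):')]
```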
When: User needs different formats or wants to modularize code
Capabilities:
Example:
```
# User: "Convert my notebook to a Python module"
# → Extracts all function definitions
# → Creates proper module structure
# → Adds docstrings
# → Generates import-ready .py file
```
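The extraction step reduces to concatenating the notebook's code-cell sources into a module string; a minimal sketch (the sample notebook and `add` helper are illustrative):

```python
def notebook_to_module(nb_dict, docstring="Auto-extracted from notebook."):
    """Concatenate code-cell sources into an import-ready module string,
    led by a module docstring."""
    parts = [f'"""{docstring}"""']
    for cell in nb_dict["cells"]:
        if cell["cell_type"] == "code":
            src = "".join(cell["source"]).rstrip()
            if src:
                parts.append(src)
    return "\n\n\n".join(parts) + "\n"

nb = {"cells": [
    {"cell_type": "markdown", "source": ["# Helpers\n"]},
    {"cell_type": "code", "source": ["def add(a, b):\n", "    return a + b\n"]},
]}
module_text = notebook_to_module(nb)
print(module_text)  # ready to be written to a .py file
```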
When: User needs to present or summarize notebook results
Capabilities:
Example:
```
# User: "Summarize the results from my analysis notebook"
# → Extracts all plots and tables
# → Identifies key metrics and insights
# → Generates markdown report
# → Includes data quality notes
```
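Report generation can walk the executed notebook and keep just the headings and printed results; this sketch collects markdown headings and `stream` outputs into a summary string (the sample notebook content is made up):

```python
def extract_report(nb_dict):
    """Collect markdown headings and printed stdout into a short summary."""
    lines = []
    for cell in nb_dict["cells"]:
        if cell["cell_type"] == "markdown":
            lines += [l for l in "".join(cell["source"]).splitlines()
                      if l.startswith("#")]
        elif cell["cell_type"] == "code":
            for out in cell.get("outputs", []):
                if out.get("output_type") == "stream":
                    lines.append("> " + "".join(out["text"]).strip())
    return "\n".join(lines)

nb = {"cells": [
    {"cell_type": "markdown", "source": ["## Revenue\nSome prose.\n"]},
    {"cell_type": "code", "outputs": [
        {"output_type": "stream", "name": "stdout",
         "text": ["Total revenue: 1.2M\n"]}]},
]}
print(extract_report(nb))
```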
When: Multiple users work on notebooks
Capabilities:
Example:
```
# User: "Compare my notebook with the previous version"
# → Shows cell-by-cell differences
# → Highlights output changes
# → Identifies new/deleted cells
# → Suggests conflict resolution
```
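Treating each cell's source as one unit, `difflib` from the standard library already gives a usable cell-by-cell diff (dedicated tools like nbdime also diff outputs and metadata; the two sample notebooks are illustrative):

```python
import difflib

def diff_notebooks(old_nb, new_nb):
    """Unified diff of cell sources between two notebook dicts,
    one diff entry per changed cell."""
    old = ["".join(c["source"]).strip() for c in old_nb["cells"]]
    new = ["".join(c["source"]).strip() for c in new_nb["cells"]]
    return list(difflib.unified_diff(old, new,
                                     "old.ipynb", "new.ipynb",
                                     lineterm=""))

old_nb = {"cells": [{"source": ["x = 1\n"]}, {"source": ["print(x)\n"]}]}
new_nb = {"cells": [{"source": ["x = 2\n"]}, {"source": ["print(x)\n"]}]}
for line in diff_notebooks(old_nb, new_nb):
    print(line)
```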
The skill includes these utility scripts:

- `scripts/notebook_creator.py` - Template-based notebook generation
- `scripts/notebook_executor.py` - Robust notebook execution
- `scripts/notebook_debugger.py` - Error analysis and debugging
- `scripts/notebook_analyzer.py` - Code quality and optimization
- `scripts/notebook_converter.py` - Format conversion utilities
- `scripts/notebook_reporter.py` - Results extraction and reporting

User: "Create a sales analysis notebook and run it with Q4_sales.csv"
Step 1: Generate Template
→ Call notebook_creator.py with "sales-analysis" template
→ Customize for Q4 data
Step 2: Configure Parameters
→ Set data_file = "Q4_sales.csv"
→ Set analysis_type = "quarterly"
Step 3: Execute Notebook
→ Call notebook_executor.py
→ Monitor progress (show cell N/M)
Step 4: Report Results
→ Extract key metrics
→ Show visualizations
→ Summarize findings
User: "My notebook analysis.ipynb crashes at cell 10"
Step 1: Identify Error
→ Parse notebook with nbformat
→ Find cell 10 and error traceback
Step 2: Analyze Context
→ Check variables in cells 1-9
→ Identify dataframe state before error
Step 3: Diagnose Issue
→ Analyze error message
→ Check for common issues (missing columns, type errors, etc.)
Step 4: Suggest Fix
→ Provide corrected code
→ Explain root cause
→ Offer prevention tips
User: "This notebook takes 10 minutes to run, can you optimize it?"
Step 1: Profile Execution
→ Run with timing enabled
→ Identify slow cells
Step 2: Analyze Code
→ Detect inefficient patterns
→ Find opportunities for vectorization
Step 3: Suggest Improvements
→ Show optimized code versions
→ Estimate speed improvements
Step 4: Validate
→ Test optimized notebook
→ Verify outputs match original
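The validation step above boils down to: never adopt an optimized version until its output matches the original's. A toy sketch of that check (the two implementations and sample data are illustrative; a real pass would also time both with `timeit` and compare notebook outputs cell by cell):

```python
def slow_total(rows):
    """Original: explicit accumulation loop."""
    total = 0
    for r in rows:
        total += r["a"] + r["b"]
    return total

def fast_total(rows):
    """Optimized candidate: single generator expression."""
    return sum(r["a"] + r["b"] for r in rows)

rows = [{"a": i, "b": 2 * i} for i in range(1000)]
assert slow_total(rows) == fast_total(rows)  # outputs must match exactly
print(fast_total(rows))  # → 1498500
```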
- Start with imports and configuration
- Use markdown cells for documentation
- Keep code cells modular
- Use clear variable names
Avoid loops when vectorization is possible:

```python
# Bad: row-by-row assignment
for i in range(len(df)):
    df.loc[i, 'new_col'] = df.loc[i, 'a'] + df.loc[i, 'b']

# Good: vectorized column arithmetic
df['new_col'] = df['a'] + df['b']
```
Cache expensive computations:

```python
import os

# Recompute only when no cached result exists
if not os.path.exists('cached_result.pkl'):
    result = expensive_computation()
    result.to_pickle('cached_result.pkl')
else:
    result = pd.read_pickle('cached_result.pkl')
```
Handle errors gracefully:

```python
try:
    df = pd.read_csv('data.csv')
except FileNotFoundError:
    print("⚠️ Data file not found, using sample data")
    df = generate_sample_data()
```
Use appropriate data types.

Process data in chunks for large files:

```python
# Stream a large CSV in 10,000-row chunks instead of loading it whole
chunks = pd.read_csv('large_file.csv', chunksize=10000)
result = pd.concat([process(chunk) for chunk in chunks])
```
Leverage pandas built-in functions:

- `query()` for filtering
- `eval()` for expressions
- `pipe()` for chaining

| Error Pattern | Cause | Solution |
|---|---|---|
| `KeyError: 'column_name'` | Column doesn't exist | Check `df.columns`, verify spelling |
| `SettingWithCopyWarning` | Chained assignment | Use `.loc[]` or `.copy()` |
| `MemoryError` | Dataset too large | Process in chunks or use dask |
| `ModuleNotFoundError` | Missing package | Add to requirements, install in kernel |
| `KernelDead` | Out of memory or crash | Restart kernel, reduce data size |
```python
# 1. Check data loading
print(f"Shape: {df.shape}")
print(f"Columns: {df.columns.tolist()}")
print(f"Dtypes:\n{df.dtypes}")

# 2. Check for missing values
print(f"Missing values:\n{df.isnull().sum()}")

# 3. Check data types
print(f"Object columns: {df.select_dtypes('object').columns.tolist()}")

# 4. Check memory usage
print(f"Memory: {df.memory_usage(deep=True).sum() / 1024**2:.2f} MB")

# 5. Check for duplicates
print(f"Duplicates: {df.duplicated().sum()}")
```
```bash
# Create a notebook from a template
python scripts/notebook_creator.py \
  --template exploratory-data-analysis \
  --output eda_analysis.ipynb \
  --data-file sales_data.csv \
  --target-column revenue
```
A successful skill invocation should:
✅ Understand user's notebook-related intent
✅ Select appropriate operation (create/run/debug/optimize)
✅ Execute operation with proper error handling
✅ Provide clear progress updates
✅ Return actionable results or insights
✅ Offer next steps or improvements
✅ Handle edge cases gracefully
See examples/ directory for:
Version: 1.0.0
Last Updated: 2026-04-16
Maintainer: AI Skills Community