Stock AI exposure analysis for investing

v1.0.2

Analyze any public company's AI exposure using the 8-dimension AI Exposure Index. Fetches last 4 10-K filings (or international equivalents), O*NET data, pat...

Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (AI exposure analysis using O*NET, filings, patents, transcripts) match the included assets: scoring docs, data mapping guide, O*NET datasets, and Python utilities. No unrelated credentials, binaries, or system-level config paths are requested.
Instruction Scope
The SKILL.md prescribes scraping/fetching public sources (EDGAR, company IR pages, Google Patents, Yahoo Finance, Crunchbase searches) and reading bundled reference files and O*NET data, all of which is expected for the stated goal. Note that the skill performs broad web_fetch/search operations when invoked, and the SKILL.md lists many trigger phrases. The instructions are prescriptive rather than vague, which is good, but they direct the agent to fetch and parse many external resources; validate rate limits and scraping policy before automated runs.
Install Mechanism
This is essentially an instruction-only skill (no install spec) but it includes Python scripts and a requirements.txt (pandas, openpyxl). There is no explicit install instruction or declared runtime dependency (no 'requires.python' or install step). That is an engineering coherence issue: running the provided scripts will fail unless the runtime already has Python + pandas installed. This is not a security problem by itself, but it is an operational omission that could cause accidental failures or ad-hoc manual installation of dependencies.
Credentials
The skill requests no environment variables, no credentials, and no config paths. It uses only public web sources and local bundled O*NET files. This is proportionate to the stated functionality. The SKILL.md references third‑party services (Crunchbase, Yahoo Finance) but suggests using free web search/EDGAR scraping rather than API keys; that is consistent with requiring no credentials.
Persistence & Privilege
always:false is set and model invocation is not disabled (normal). The skill does not request permanent platform privileges or modify other skills. It contains reviewable code and data but does not attempt to persist secrets or change system-wide settings.
Assessment
This skill appears to do what it says: it bundles scoring rubrics, O*NET datasets, and Python helpers to fetch filings, transcripts, and patents and compute composite scores. Before installing or enabling autonomous invocation, consider:

  1. Dependency setup: the repo includes requirements.txt but no install step. Ensure the agent environment has Python with pandas/openpyxl, or provide a controlled install process.
  2. Network behavior: the skill performs many public web fetches (EDGAR, patents, transcripts, Yahoo/Crunchbase lookups). If your environment restricts internet access, the skill may fail; if it allows access, be aware of scraping rate limits and privacy/policy constraints.
  3. Data volume & licensing: the skill bundles large O*NET extracts. Confirm you are comfortable with those data files and that any redistribution complies with their license.
  4. Trigger surface: SKILL.md lists many trigger phrases. If unintended automatic runs are a concern, avoid enabling autonomous invocation or narrow the triggers.
  5. Code review: the included scripts are short and straightforward, but inspect them and run them in a sandboxed runtime before enabling execution in a privileged environment.


327 downloads · 1 star · 3 versions · Updated 1 mo ago · v1.0.2 · MIT-0

AI Exposure Analyzer

This skill implements a comprehensive 8-dimension AI Exposure Index framework to evaluate any publicly traded company. It fetches real financial data, maps workforce to O*NET occupations, and produces a scored assessment with actionable investment classification.

Before Starting: Read Reference Files

Before doing any analysis, read these reference files in order:

  1. references/framework_dimensions.md — The complete scoring rubrics, anchor checklists, and formulas for all 8 dimensions. READ THIS FIRST — it is the core of the analysis.
  2. references/data_collection_guide.md — Step-by-step instructions for fetching 10-K filings, earnings transcripts, patent data, and international equivalents.
  3. references/onet_mapping_guide.md — How to use the bundled O*NET datasets to map company job categories to AI exposure scores.
  4. references/scoring_calculations.md — Exact formulas for composite scores, sub-indices, classification matrix, and valuation overlay.

Workflow Overview

Phase 1: Company Identification & Data Collection

  1. Identify the company — Get ticker, exchange, country of incorporation, and sector.
  2. Determine filing type:
    • US-based: Fetch last 4 10-K filings from SEC EDGAR
    • Non-US cross-listed: Check for 20-F filings on EDGAR first
    • Non-US: Fetch annual reports from company IR page (English versions)
  3. Collect the data package for each dimension (see references/data_collection_guide.md).
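
The EDGAR portion of step 2 can be sketched against SEC's public submissions JSON API (`https://data.sec.gov/submissions/CIK##########.json`). This is a minimal standard-library sketch, not the skill's bundled code; the helper names are ours, and SEC's fair-access policy (identify yourself in the User-Agent, stay under roughly 10 requests/second) still applies.

```python
import json
from urllib.request import Request, urlopen

# SEC asks automated clients to identify themselves in the User-Agent.
SEC_HEADERS = {"User-Agent": "ai-exposure-research contact@example.com"}

def fetch_recent_submissions(cik: str) -> dict:
    """Pull the EDGAR submissions index for a CIK (read-only, no credentials)."""
    url = f"https://data.sec.gov/submissions/CIK{cik.zfill(10)}.json"
    with urlopen(Request(url, headers=SEC_HEADERS), timeout=30) as resp:
        return json.load(resp)["filings"]["recent"]

def filing_urls(recent: dict, cik: str, form_type: str = "10-K", count: int = 4) -> list:
    """Build primary-document URLs for the most recent filings of one form type.

    `recent` holds parallel lists keyed "form", "accessionNumber", and
    "primaryDocument", as returned by fetch_recent_submissions().
    """
    urls = []
    for form, acc, doc in zip(recent["form"], recent["accessionNumber"],
                              recent["primaryDocument"]):
        if form == form_type:
            urls.append("https://www.sec.gov/Archives/edgar/data/"
                        f"{int(cik)}/{acc.replace('-', '')}/{doc}")
            if len(urls) == count:
                break
    return urls
```

For a non-US cross-listed company, the same lookup with `form_type="20-F"` covers the second branch of step 2.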

Phase 2: Dimension Scoring (1–5 each)

Score all 8 dimensions using the rubrics in references/framework_dimensions.md:

| # | Dimension | Weight | Direction |
|---|-----------|--------|-----------|
| 1 | Labor Automation Vulnerability | 8% | Higher = more vulnerable |
| 2 | Revenue Model Disruption Potential | 18% | Higher = more vulnerable |
| 3 | AI Adoption & Implementation Maturity | 10% | Higher = more capable ↑ |
| 4 | Competitive Moat Durability | 16% | Higher = weaker moat |
| 5 | Operational AI Leverage | 12% | Higher = more capable ↑ |
| 6 | Regulatory & Ethical AI Risk | 3% | Higher = more vulnerable |
| 7 | Industry Transformation Velocity | 17% | Higher = faster change |
| 8 | Data & Ecosystem Strength | 16% | Higher = stronger ↑ |

Capability dimensions (3, 5, 8) use ↑ = positive. All others: higher = greater risk.

Phase 3: Composite Score & Classification

Use the formulas in references/scoring_calculations.md to compute:

  • AI Vulnerability Score (geometric mean of D1, D2, D4, D6, D7)
  • AI Adaptive Capacity Score (weighted average of D3, D5, D8)
  • 2×2 Matrix Classification
  • Valuation Overlay (compare to sector medians)
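
The exact formulas live in references/scoring_calculations.md, which is authoritative. As a sketch of the computation's shape only (it assumes an unweighted geometric mean, capacity weights renormalized from the Phase 2 table, and one plausible orientation of the 2×2 matrix):

```python
import math

# Capability-dimension weights from the Phase 2 table, renormalized below.
CAPACITY_WEIGHTS = {3: 0.10, 5: 0.12, 8: 0.16}

def vulnerability_score(scores: dict) -> float:
    """Geometric mean of the risk dimensions D1, D2, D4, D6, D7 (each 1-5)."""
    dims = [scores[d] for d in (1, 2, 4, 6, 7)]
    return math.prod(dims) ** (1 / len(dims))

def adaptive_capacity_score(scores: dict) -> float:
    """Weighted average of the capability dimensions D3, D5, D8."""
    total = sum(CAPACITY_WEIGHTS.values())
    return sum(scores[d] * w for d, w in CAPACITY_WEIGHTS.items()) / total

def classify(vuln: float, capacity: float, midpoint: float = 3.0) -> str:
    """Place the company in the 2x2 matrix (quadrant thresholds illustrative)."""
    if vuln >= midpoint:
        return "AI TRANSFORMER" if capacity >= midpoint else "AI ENDANGERED"
    return "AI FORTIFIED" if capacity >= midpoint else "AI BYSTANDER"
```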

Phase 4: Output Generation

Generate a comprehensive report. Use the output template below.

Output Template

Structure every analysis report as follows:

# AI Exposure Analysis: [Company Name] ([Ticker])
## Date: [Date] | Sector: [Sector] | Country: [Country]

## Executive Summary
[2-3 sentence classification result with composite scores and key finding]

## Data Sources Used
[List the specific filings, transcripts, and datasets referenced]

## Dimension Scores

| # | Dimension | Weight | Score | Explanation |
|---|-----------|--------|-------|-------------|
| 1 | Labor Automation Vulnerability | 8% | [X]/5 | [Concise evidence: workforce mix, O*NET mapping results, labor-cost-to-revenue ratio, anchor checklist match. E.g. "~60% knowledge workers (Eloundou E1+E2 ≈ 0.58), SGA 55% of revenue. Maps to anchor 4."] |
| 2 | Revenue Model Disruption | 18% | [X]/5 | [Revenue segments, tangible asset intensity, AI substitutability, forward adjustment if applicable. E.g. "85% subscription SaaS, per-seat pricing. Tangible asset intensity 0.09. Two AI-native competitors raised $100M+. Base 4 + forward adj = 5."] |
| 3 | AI Adoption & Implementation ↑ | 10% | [X]/5 | [Earnings call trajectory, 10-K AI depth, patent momentum, observable adoption, AI-washing check result. E.g. "AI mentions up 3x over 4 quarters. CFO: AI cut support costs 12%. 14 AI patents (accelerating). 35+ AI roles open. AI-washing check: PASS."] |
| 4 | Competitive Moat Durability | 16% | [X]/5 | [Moat type, friction test result, NRR/churn, switching costs. E.g. "NRR 115% but value is workflow orchestration — a friction moat. Deep embedding partially offsets. Customers exploring AI alternatives."] |
| 5 | Operational AI Leverage ↑ | 12% | [X]/5 | [Operational complexity, AI ops evidence, efficiency metrics. E.g. "Simple SaaS ops. Some AI in support routing and internal code gen. Limited supply chain complexity."] |
| 6 | Regulatory & Ethical Risk | 3% | [X]/5 | [Regulated jurisdictions, high-risk AI categories, enforcement history. E.g. "Minimal high-risk AI use. 8% EU revenue. No enforcement actions."] |
| 7 | Industry Transformation Velocity | 17% | [X]/5 | [Which proxies triggered (list by number), key evidence. E.g. "4/5 proxies triggered: massive AI startup funding, >80% peers mention AI, 3 incumbents down >20%, Microsoft/Google competing directly."] |
| 8 | Data & Ecosystem Strength ↑ | 16% | [X]/5 | [Proprietary data, partnerships depth, talent quality. E.g. "Unique transaction data from 12M users. Genuine API integration with Azure OpenAI. 6 AI/ML engineers, no elite lab alumni."] |

## Composite Scores

| Metric | Score | Level |
|--------|-------|-------|
| AI Vulnerability | [X.XX] | [Low / Moderate / High / Very High] |
| AI Adaptive Capacity | [X.XX] | [Low / Low-to-Moderate / Moderate / High] |

**Classification: [AI FORTIFIED / AI TRANSFORMER / AI BYSTANDER / AI ENDANGERED]**

[1-2 sentences on matrix placement and whether scores are borderline]

## Valuation Overlay

| Metric | Company | Sector Median | Position |
|--------|---------|---------------|----------|
| Forward P/E | [X] | [X] | [Premium / In Line / Discount] |
| EV/Sales | [X] | [X] | [Premium / In Line / Discount] |

**Assessment:** [Valuation signal from the matrix, e.g. "AI Endangered trading at premium = Short candidate"]

## Scenario Sensitivity

| Paradigm | Impact on Scores | Net Effect |
|----------|-----------------|------------|
| Agentic AI | [Which dimensions shift, by how much] | [Positive / Negative / Neutral] |
| Physical AI / Robotics | [Which dimensions shift] | [Positive / Negative / Neutral] |
| Energy Constraints | [Which dimensions shift] | [Positive / Negative / Neutral] |
| Open-Source Acceleration | [Which dimensions shift] | [Positive / Negative / Neutral] |

## Key Risks & Catalysts
[Top 3 risks and top 3 positive catalysts based on the analysis]

Critical formatting rule: The Dimension Scores table is the centerpiece of the report. The Explanation column must be dense and evidence-based — pack in specific data points (numbers, ratios, quote fragments, proxy counts) rather than vague summaries. Each explanation cell should read like a compressed analyst note, not a generic description. Aim for 2-4 sentences per cell.

Critical Rules

  1. Always fetch real data. Never estimate or hallucinate filing contents. Use web_search and web_fetch to retrieve actual SEC filings, earnings transcripts, patent data, and financial metrics.

  2. Use the AI-Washing Check. For Dimension 3, if a company uses AI buzzwords extensively but cannot cite a single quantified KPI, production deployment, or specific AI product feature, cap the score at 2.

  3. Apply forward-looking adjustments. For Dimension 2, add +1 if the primary market has significant AI-native startup funding or a major tech company has announced a competing AI product.

  4. Non-US companies: Follow the substitution table in references/data_collection_guide.md. Apply the Disclosure Quality Adjustment (±0.5 confidence range on D1 and D2).

  5. Show your work. Every score must cite specific evidence from the filings or data sources. Never assign a score without justification.

  6. Geometric mean for vulnerability. The geometric mean penalizes extreme weakness — a collapsing moat cannot be offset by low labor costs. Use the exact formulas.

  7. Present the final report as inline Markdown tables in the chat. Do NOT create a Word document or any file attachment. Render all tables (Dimension Scores, Composite Scores, Valuation Overlay, Scenario Sensitivity) directly in the conversation using Markdown table syntax. The output should be fully readable without downloading anything.
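
Rules 2 and 3 are mechanical enough to encode directly. A sketch under the rules as stated above (the function names are illustrative, not from the bundled scripts):

```python
def apply_d3_ai_washing_cap(d3_score: int, has_quantified_evidence: bool) -> int:
    """Rule 2: heavy AI buzzwords with no quantified KPI, production
    deployment, or specific AI product feature caps Dimension 3 at 2."""
    return d3_score if has_quantified_evidence else min(d3_score, 2)

def apply_d2_forward_adjustment(d2_base: int,
                                ai_native_funding: bool,
                                big_tech_competitor: bool) -> int:
    """Rule 3: add +1 to Dimension 2 (capped at the 5-point scale maximum)
    when the primary market shows significant AI-native startup funding or
    an announced big-tech competing product."""
    bump = 1 if (ai_native_funding or big_tech_competitor) else 0
    return min(d2_base + bump, 5)
```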

O*NET Data Access

The O*NET datasets are bundled as reference data. To map company workforce to AI exposure scores:

  1. Read the company's 10-K Human Capital and Business Description sections
  2. Identify dominant job categories (e.g., "software engineers", "customer support", "sales representatives")
  3. Map to O*NET-SOC codes using references/onet_mapping_guide.md
  4. Use the O*NET datasets bundled in the skill's data/ directory to pull task statements, work activities, and abilities for those occupations
  5. Cross-reference with Eloundou et al. exposure scores (search for "GPTs are GPTs" paper data)
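
Step 4 is a straightforward pandas filter once the bundled files are loaded. In this sketch the column names follow the public O*NET database exports ("O*NET-SOC Code", "Title", "Task") and the sample rows are stand-ins; verify both against the actual files in data/:

```python
import pandas as pd

def tasks_for_occupations(task_df: pd.DataFrame, soc_codes: list) -> pd.DataFrame:
    """Filter an O*NET task-statements table down to the mapped SOC codes."""
    hits = task_df[task_df["O*NET-SOC Code"].isin(soc_codes)]
    return hits[["O*NET-SOC Code", "Title", "Task"]].reset_index(drop=True)

# Illustrative stand-in for a frame loaded from data/ (e.g. via pd.read_excel):
sample = pd.DataFrame({
    "O*NET-SOC Code": ["15-1252.00", "43-4051.00", "41-3091.00"],
    "Title": ["Software Developers", "Customer Service Representatives",
              "Sales Representatives, Services"],
    "Task": ["Write computer code", "Answer customer inquiries",
             "Sell services to clients"],
})
```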

Network Behavior

This skill directs the agent to fetch publicly available data from the following sources. No credentials, API keys, or accounts are required or used. No data is sent to third-party endpoints — all fetches are read-only.

| Source | What is fetched | URL pattern |
|--------|-----------------|-------------|
| SEC EDGAR | 10-K and 20-F annual filings | https://www.sec.gov/cgi-bin/browse-edgar / https://efts.sec.gov/ |
| Earnings transcripts | Quarterly call transcripts (read-only) | Motley Fool, Seeking Alpha, or company IR pages |
| Google Patents | Patent counts and titles | https://patents.google.com/ |
| Yahoo Finance / Macrotrends | Forward P/E, EV/Sales, sector medians | Public pages only |
| Academic papers | Eloundou et al. "GPTs are GPTs" supplementary data | arXiv or author-hosted pages |
No data leaves your machine to any proprietary endpoint. The O*NET datasets in data/ are bundled locally and sourced from the publicly available O*NET database (https://www.onetcenter.org/database.html).

Python scripts (scripts/onet_lookup.py, scripts/calculate_scores.py) make no network calls. They only read local files from the data/ directory. Install dependencies with:

```shell
pip install -r requirements.txt
```

Handling Insufficient Data

If certain data points are unavailable (e.g., no earnings transcripts for a smaller company, or limited patent data):

  • Note the data gap explicitly
  • Widen the confidence range for that dimension by ±0.5
  • Increase reliance on the dimensions with stronger data availability
  • Flag the overall confidence level (High / Medium / Low) in the Executive Summary
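
The first and last bullets can be encoded as small helpers. This is a sketch of one possible convention: the ±0.5 widening is stated above, but the mapping from gap count to the High/Medium/Low label is our illustrative choice, not from the bundled references:

```python
def widen_confidence(base_range: float, data_gap: bool) -> float:
    """Widen a dimension's confidence range by +/-0.5 when its data is missing."""
    return base_range + (0.5 if data_gap else 0.0)

def overall_confidence(gap_count: int) -> str:
    """Map the number of flagged data gaps to the Executive Summary label.
    Thresholds here are illustrative; tune them to your tolerance."""
    if gap_count == 0:
        return "High"
    return "Medium" if gap_count <= 2 else "Low"
```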
