A precision tool designed for distilling high-fidelity professional concepts and relationships from complex information. It automatically organizes knowledge into a 3-layer architecture (Core, Primary, Detail) and ensures semantic consistency through recursive entity tracking. This skill enables any AI to act as a structured knowledge engine, generating consistent, graph-ready data for interactive learning.

Professional multi-layered knowledge extraction and recursive knowledge graph construction.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Pandas_007@askxiaozhang

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description claim a multi-layer knowledge extraction tool, and the skill is instruction-only, defining exactly that behavior (an entity/relation JSON schema and a 3-layer architecture). There are no unrelated requirements: no environment variables, no binaries, no installs.
Instruction Scope
SKILL.md stays on-purpose: it instructs the agent to parse the user's query and prior context, check a provided existing_terms list if present, build the Core/Primary/Detail hierarchy, and emit JSON. It does not instruct reading files, contacting external endpoints, or accessing unrelated system state.
Install Mechanism
No install spec and no code files — lowest-risk execution model. The skill is pure instructions that run in-memory as part of the agent's reasoning.
Credentials
The skill requests no environment variables, credentials, or config paths. The documented runtime behavior only references supplied context (user query, optional existing_terms) which is proportional to the stated functionality.
Persistence & Privilege
Flags show always:false and user-invocable:true. The skill can be invoked autonomously by the agent (platform default), which is normal; it does not request persistent system presence or modify other skills.
Assessment
This skill appears coherent and low-risk: it only prescribes how the agent should extract and structure knowledge into JSON and does not request credentials or install software. Before using, avoid submitting sensitive secrets or private credentials as part of the text you want analyzed (the skill processes whatever you provide). If you plan to use this to build a growing knowledge graph across sessions, verify how your agent/platform stores and protects that graph data (the skill itself does not specify storage or export behavior). Finally, confirm whether you are comfortable with the agent invoking the skill autonomously (default platform behavior) when it decides the skill is relevant.


Current version: v1.0.0
latest: vk97bkdxw2xptf4y3gggs0fgtah82qwbc


SKILL.md

Professional Knowledge Extraction Skill

Expertly extract core concepts, entities, and logical relationships from complex professional text to build a multi-layered, interactive knowledge graph.

Core Mission

Transform any professional inquiry or text into a structured, hierarchical knowledge representation that follows a 3-layer information architecture.

Interaction Protocol

1. Response Structure

Always prioritize structured output. Every response MUST be a valid JSON object with the following schema:

{
  "reply": "Your natural language explanation of the user's query.",
  "entities": [
    {
      "id": "unique_id (kebab-case or UUID)",
      "label": "Display Name",
      "group": "layer_type"
    }
  ],
  "relations": [
    {
      "from": "entity_id_A",
      "to": "entity_id_B",
      "label": "Relationship Description"
    }
  ]
}
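A minimal concrete instance of this schema, sketched in Python. The topic, entity IDs, and relation labels are illustrative, not part of the spec:

```python
import json

# Hypothetical response for the query "What is SQLite?" (all IDs illustrative).
response = {
    "reply": "SQLite is a lightweight, serverless SQL database engine.",
    "entities": [
        {"id": "sqlite", "label": "SQLite", "group": "core"},
        {"id": "sqlite-architecture", "label": "Architecture", "group": "primary"},
        {"id": "sqlite-use-cases", "label": "Use Cases", "group": "primary"},
        {"id": "serverless-design", "label": "Serverless Design", "group": "detail"},
    ],
    "relations": [
        {"from": "sqlite", "to": "sqlite-architecture", "label": "is organized by"},
        {"from": "sqlite", "to": "sqlite-use-cases", "label": "is applied in"},
        {"from": "sqlite-architecture", "to": "serverless-design", "label": "includes"},
    ],
}

# Round-trip through the json module to confirm the object is valid JSON.
payload = json.dumps(response, indent=2)
```

Note that every `from`/`to` value references an `id` declared in `entities`, which is what makes the output graph-ready.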

2. The 3-Layer Information Architecture

Classify every extracted entity into one of these three group values:

  • core: The central theme or the main subject of the user's inquiry. Usually, there is only ONE core node per response.
  • primary: Key dimensions or high-level frameworks of the core topic (e.g., "Core Components", "Problem Solved", "Application Scenarios", "Historical Context"). Limit this to 3-5 nodes to avoid clutter.
  • detail: Deep-dive nodes, specific parameters, sub-technologies, references, or granular data points that support the primary nodes.
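The one-core / 3-to-5-primary guideline above can be checked mechanically. A sketch (the helper name `check_layout` is illustrative; labels are omitted for brevity):

```python
def check_layout(entities):
    """Return True if the entity list follows the layer guidelines:
    exactly one core node and between 3 and 5 primary nodes."""
    groups = [e["group"] for e in entities]
    return groups.count("core") == 1 and 3 <= groups.count("primary") <= 5

# One core plus three primary nodes satisfies the guideline.
good = [{"id": "t", "group": "core"}] + [
    {"id": f"p{i}", "group": "primary"} for i in range(3)
]
# Two core nodes violates it.
bad = [{"id": "a", "group": "core"}, {"id": "b", "group": "core"}]
```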

3. Relationship Logic

  • Connect core to primary nodes with descriptive labels.
  • Connect primary to their respective detail nodes.
  • Avoid cross-linking detail nodes unless a critical logical dependency exists.
  • Maintain semantic consistency by reusing provided entity IDs if available.
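The linking rules above amount to allowing only core-to-primary and primary-to-detail edges. A small validation sketch (the helper name `check_relations` is an assumption, not part of the skill):

```python
# Edge directions permitted by the 3-layer linking rules.
ALLOWED = {("core", "primary"), ("primary", "detail")}

def check_relations(entities, relations):
    """Return the relations whose endpoints break the core -> primary -> detail flow."""
    layer = {e["id"]: e["group"] for e in entities}
    return [r for r in relations
            if (layer.get(r["from"]), layer.get(r["to"])) not in ALLOWED]

entities = [
    {"id": "topic", "label": "Topic", "group": "core"},
    {"id": "dim-a", "label": "Dimension A", "group": "primary"},
    {"id": "fact-1", "label": "Fact 1", "group": "detail"},
]
relations = [
    {"from": "topic", "to": "dim-a", "label": "has dimension"},
    {"from": "dim-a", "to": "fact-1", "label": "is supported by"},
    {"from": "fact-1", "to": "topic", "label": "loops back"},  # flagged: detail -> core
]
violations = check_relations(entities, relations)
```

A real implementation might allow detail-to-detail edges when a critical dependency exists, per the exception noted above.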

Recursive Growth & Consistency

To maintain a growing knowledge network without duplication:

  1. Reference Check: Before creating a new entity, check the existing_terms list (if provided in the context).
  2. ID Mapping: If a concept already exists, use its exact id. Do NOT create a duplicate node with a different ID if the meaning is identical.
  3. Attribute Inheritance: Ensure new relationships (relations) correctly anchor onto these existing nodes, extending the network from the known to the unknown.
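The reference-check and ID-mapping steps can be sketched as a small lookup function. The label-matching rule and the kebab-case slug below are assumptions for illustration; the skill itself only mandates reusing exact IDs for identical concepts:

```python
def resolve_id(label, existing_terms):
    """Reuse an existing entity ID when the concept is already known;
    otherwise derive a new kebab-case ID from the label."""
    for term in existing_terms:
        if term["label"].lower() == label.lower():
            return term["id"]  # reuse: never create a duplicate node
    return label.lower().replace(" ", "-")  # new kebab-case ID

# Hypothetical existing_terms list supplied in the context.
existing_terms = [{"id": "sqlite-database", "label": "SQLite Database"}]
```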

Professional Extraction Techniques

  • Disambiguation: Use unique IDs for entities that might have similar names (e.g., sqlite-database vs mysql-database).
  • Weighted Relationships: In the label field of a relation, use active verbs (e.g., "implements", "manages", "defines", "is a subset of").
  • Contextual Relevance: Only extract entities and relations that are strictly relevant to the current technical discussion. Avoid extracting "conversational filler".

Workflow

  1. Step 1: Ingest - Analyze the user query and previous context.
  2. Step 2: Lookup - Check existing_terms for overlaps.
  3. Step 3: Structure - Map out the 3-layer hierarchy (Core -> Primary -> Detail).
  4. Step 4: Serialize - Produce the final JSON response.
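The final Serialize step can be reduced to a single call that assembles the three schema fields. A minimal sketch (the function name `run_workflow` is illustrative):

```python
import json

def run_workflow(reply, entities, relations):
    """Step 4 (Serialize): emit the final JSON payload in the schema above."""
    return json.dumps({"reply": reply, "entities": entities, "relations": relations})

payload = run_workflow(
    "Example reply.",
    [{"id": "topic", "label": "Topic", "group": "core"}],
    [],
)
```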

Files

1 total: SKILL.md
