Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

graphify

v2.1.0

AI coding assistant skill for building and querying knowledge graphs from codebases, docs, and media. Use for: understanding complex codebases, architectural...

Security Scan
Capability signals
Crypto · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description (build/query knowledge graphs from code/docs/media) aligns with requested binaries (python3, pip), the provided CLI scripts, and optional LLM API keys for semantic extraction. The install spec (pip package 'graphify' exposing a 'graphify' binary) matches the documented workflow. Note: the registry metadata shows a serialization glitch (Required env vars listed as '[object Object]') — the SKILL.md corrects this by declaring ANTHROPIC_API_KEY and OPENAI_API_KEY as optional.
Instruction Scope
SKILL.md instructs local operations (AST extraction, local Whisper transcription) and optional remote LLM calls when API keys are provided. It also documents commands that modify project files (writing AGENTS.md for OpenClaw/other assistants, installing a local git post-commit hook) and commands that fetch external content (graphify add <url> downloads/transcribes remote content). These actions are coherent with the stated purpose but are persistent/project-scoped changes that users should review before running.
Install Mechanism
Install is via a Python package (pip) named 'graphify' which creates a 'graphify' binary — a standard distribution mechanism. No high-risk downloads (custom URLs/archives/paste sites) or extraction-from-arbitrary-URLs are specified in the install spec included in the skill bundle.
Credentials
Only optional LLM API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY) are declared for semantic extraction; they are optional and their use is limited to non-AST semantics (docs/images). No unrelated secrets or cloud credentials are requested. The SKILL.md consistently marks these as optional and explains when they are used.
Persistence & Privilege
The skill is not always-enabled and does not demand elevated system privileges. However, it provides explicit commands to copy itself into an assistant's global skill directory and to write steering files (AGENTS.md, CLAUDE.md, etc.) into project roots, and an optional git post-commit hook. Those are user-invoked and coherent for integration, but they create persistent project-level changes and should be reviewed before execution.
Assessment
What to check before installing:

- Review the upstream package: the SKILL.md points to a GitHub repo; inspect the PyPI package contents or the repo tags before pip installing. Prefer installing into a virtualenv or other isolated environment first.
- API keys: ANTHROPIC_API_KEY / OPENAI_API_KEY are optional and only used for semantic extraction of docs/images; do not provide keys you wouldn't want used for third-party requests. Consider creating scoped/limited API keys and monitoring their usage.
- Persistent changes: the tool offers commands that write steering files (AGENTS.md, CLAUDE.md) and can copy itself into a global skill directory — these are not automatic, but they will persist if you run them. Inspect any generated AGENTS.md or hook scripts to ensure they do only what you expect.
- Remote content: graphify add <url> can download and transcribe external media — avoid feeding it untrusted URLs, or review downloads before indexing.
- Local hooks: graphify hook install places a hook in .git/hooks/ (local only). If you don't want automatic rebuilds, skip this or inspect the hook before enabling it.
- Metadata glitch: the registry summary shows 'Required env vars: [object Object]', which appears to be a display/serialization bug; rely on SKILL.md for the authoritative list of env vars.

If you want higher assurance, inspect the full package source from the referenced GitHub repo, or the installed files, before giving the skill write access to project roots or installing it into global assistant directories.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🕸️ Clawdis
OS: Linux · macOS · Windows
Bins: python3, pip
Env: [object Object], [object Object]

Install

uv
Bins: graphify
uv tool install graphify
latest: vk972ssa9eyabt9d3p57tn2g79d855yp0
8 downloads
0 stars
5 versions
Updated 2h ago
v2.1.0
MIT-0
Linux, macOS, Windows

Graphify Skill

Graphify turns a folder of code, docs, images, and videos into a queryable knowledge graph. After running graphify ., the generated GRAPH_REPORT.md is referenced by the assistant when you ask it to explore the codebase. Use it to navigate unfamiliar codebases, trace architectural intent, and share context efficiently.

Credential note: Semantic extraction (docs, PDFs, images) calls an external LLM using your own API key (ANTHROPIC_API_KEY or OPENAI_API_KEY). AST code extraction and Whisper transcription run fully locally with no API key required.

Token efficiency: Reading GRAPH_REPORT.md is ~71.5× cheaper than reading raw source files. Always check it first.


Installation

Minimum install

pip install graphify

Recommended install with all extras

pip install "graphify[pdf,video,watch,svg]"
| Extra | Adds support for |
|---|---|
| pdf | PDF papers and documents |
| video | Video/audio transcription via faster-whisper |
| watch | --watch file monitoring |
| svg | --svg export |
| mcp | Model Context Protocol server |
| neo4j | Neo4j graph database integration |
| office | Word/Excel/PowerPoint documents |

Register with OpenClaw

graphify claw install

This copies the skill into OpenClaw's global skill directory and writes an AGENTS.md to the project root so the graph is consulted on every tool call.

Register with other platforms

graphify claude install     # Claude Code (CLAUDE.md + PreToolUse hook)
graphify cursor install     # Cursor (.cursorrules)
graphify codex install      # Codex (AGENTS.md)
graphify copilot install    # GitHub Copilot CLI
graphify gemini install     # Gemini CLI
graphify aider install      # Aider

Git post-commit hook (optional, local project only)

Installs a hook in the current project's .git/hooks/ — no global changes.

graphify hook install
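SKILL.md doesn't show the hook's contents, but a local post-commit hook that triggers an incremental rebuild would plausibly look like the sketch below. This is hypothetical, not the file graphify actually installs — inspect the real .git/hooks/post-commit before trusting it:

```sh
#!/bin/sh
# Hypothetical sketch of a post-commit hook, not the actual installed file.
# Rebuild only changed files after each commit; do nothing if graphify is absent.
command -v graphify >/dev/null 2>&1 || exit 0
graphify . --update
```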

Building a Knowledge Graph

First run

graphify .

Graphify runs a three-pass pipeline:

  1. AST extraction — deterministic tree-sitter parsing of all code files (no LLM calls)
  2. Transcription — local Whisper processing of any video/audio
  3. Semantic extraction — parallel LLM calls (using your configured API key) analyze docs, papers, and images

Incremental update (changed files only)

graphify . --update

Uses a SHA-256 cache in graphify-out/cache/ — safe to run after every save.
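The change-detection idea behind that cache can be sketched in a few lines. Nothing here reflects graphify's real cache layout — the function names and the path-to-digest mapping are assumptions for illustration:

```python
import hashlib

def file_digest(data: bytes) -> str:
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def changed_files(files: dict[str, bytes], cache: dict[str, str]) -> list[str]:
    """Return paths whose content hash differs from the cached hash.
    `cache` maps path -> last-seen digest, as an incremental cache might."""
    return [path for path, data in files.items()
            if cache.get(path) != file_digest(data)]

# Example: only b.py changed since the cache was written.
files = {"a.py": b"print('a')", "b.py": b"print('b2')"}
cache = {"a.py": file_digest(b"print('a')"), "b.py": file_digest(b"print('b')")}
print(changed_files(files, cache))  # ['b.py']
```

Because only hash mismatches trigger re-extraction, running an update after every save stays cheap.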

Watch mode (auto-sync as files change)

graphify . --watch

Deep mode (aggressive inferred-edge extraction)

graphify . --mode deep

Adds INFERRED edges with confidence scores (0.0–1.0) and AMBIGUOUS edges flagged for review.

Preserve edge directionality

graphify . --directed

Re-cluster without re-extracting

graphify . --cluster-only

Skip HTML visualization (faster CI runs)

graphify . --no-viz

Output Artifacts

All artifacts land in graphify-out/:

| File | Purpose |
|---|---|
| GRAPH_REPORT.md | Read this first. God nodes, community structure, surprising connections, suggested questions. |
| graph.html | Interactive browser visualization — open for human review. |
| graph.json | Raw graph data for programmatic querying via CLI or script. |
| cache/ | SHA-256 incremental cache — commit everything except this directory. |
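Programmatic querying of graph.json might look like the sketch below, which ranks nodes by degree — one plausible way a "god node" list could be derived. The node/edge field names ("source", "target") are assumptions about the schema; check your own graphify-out/graph.json before relying on them:

```python
from collections import Counter

# Hypothetical graph.json contents — the real schema may differ.
graph = {
    "nodes": [{"id": n} for n in
              ["AuthMiddleware", "UserSessionManager", "PostgresAdapter", "Logger"]],
    "edges": [
        {"source": "AuthMiddleware", "target": "UserSessionManager"},
        {"source": "UserSessionManager", "target": "PostgresAdapter"},
        {"source": "AuthMiddleware", "target": "PostgresAdapter"},
        {"source": "AuthMiddleware", "target": "Logger"},
    ],
}

def node_degrees(graph: dict) -> Counter:
    """Undirected degree: how many edges touch each node."""
    degrees = Counter()
    for edge in graph["edges"]:
        degrees[edge["source"]] += 1
        degrees[edge["target"]] += 1
    return degrees

# Highest-degree nodes first — candidates for the report's "god nodes".
print(node_degrees(graph).most_common(1))  # [('AuthMiddleware', 3)]
```

In a real project you would load the structure with json.load() from graphify-out/graph.json instead of defining it inline.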

Additional export formats

graphify . --svg        # SVG visualization
graphify . --graphml    # Gephi / yEd compatible export
graphify . --wiki       # Wikipedia-style article per node

Querying the Graph

# Natural-language semantic search
graphify query "where is authentication handled?"

# Trace a specific path (DFS traversal)
graphify query "how does the request reach the database?" --dfs

# Shortest path between two nodes
graphify path "AuthMiddleware" "PostgresAdapter"

# Plain-language explanation of a node
graphify explain "UserSessionManager"

Adding External Content

graphify add <arxiv-url>     # Fetch and index a research paper
graphify add <x.com-url>     # Fetch and index a tweet
graphify add <video-url>     # Download and transcribe video/audio

After adding, run graphify . --update to integrate the new nodes into the graph.


Ignoring Files

Create .graphifyignore in the project root (same syntax as .gitignore):

node_modules/
dist/
build/
.next/
vendor/
*.generated.*
*.min.js
*.lock
graphify-out/cache/

See templates/graphifyignore.txt in this skill for a comprehensive starter.


Relationship Types

| Tag | Meaning |
|---|---|
| EXTRACTED | Found directly in source (AST, explicit reference) |
| INFERRED | Reasonable inference; includes confidence score 0.0–1.0 |
| AMBIGUOUS | Low-confidence; flagged for manual review |

Use --mode deep to maximize INFERRED coverage. Filter AMBIGUOUS edges when precision matters.
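A precision filter over graph.json edges can be sketched as follows. The "tag" and "confidence" field names are assumptions about the schema — verify them against your own output before scripting with them:

```python
# Hypothetical edge records as they might appear in graph.json.
edges = [
    {"source": "A", "target": "B", "tag": "EXTRACTED"},
    {"source": "B", "target": "C", "tag": "INFERRED", "confidence": 0.9},
    {"source": "C", "target": "D", "tag": "INFERRED", "confidence": 0.4},
    {"source": "D", "target": "E", "tag": "AMBIGUOUS", "confidence": 0.2},
]

def high_precision(edges: list[dict], min_confidence: float = 0.7) -> list[dict]:
    """Drop AMBIGUOUS edges and INFERRED edges below the confidence cutoff."""
    kept = []
    for e in edges:
        if e["tag"] == "AMBIGUOUS":
            continue
        if e["tag"] == "INFERRED" and e.get("confidence", 0.0) < min_confidence:
            continue
        kept.append(e)
    return kept

print([(e["source"], e["target"]) for e in high_precision(edges)])
# [('A', 'B'), ('B', 'C')]
```

EXTRACTED edges pass unconditionally since they were found directly in source; only inferred relationships are thresholded.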


Best Practices for OpenClaw / Claude Code

1. Always read the report before exploring

Read graphify-out/GRAPH_REPORT.md

The report identifies "god nodes" (highest-degree concepts), community clusters, and suggested entry-point questions. Use these to focus subsequent searches.

2. Map communities to features

Each community in the report corresponds to a functional area of the codebase. When fixing a bug or adding a feature, identify the relevant community first, then limit file reads to that cluster.

3. Use graphify query before grep

For open-ended questions ("where is X handled?"), prefer graphify query — it searches the semantic graph and returns ranked node paths rather than raw text matches.

4. Use graphify path for impact analysis

Before editing a node, run graphify path "NodeA" "NodeB" to find coupling chains and assess blast radius.

5. Use --update aggressively

After any significant edit, run graphify . --update to keep the graph current. The cache makes this cheap.

6. Team workflows

  • Commit graphify-out/ (excluding graphify-out/cache/) so teammates get graph context immediately on checkout.
  • One team member builds the initial graph; all others benefit without LLM cost.
  • Install graphify hook install to auto-rebuild on every commit.

7. Multimodal context

Place architecture diagrams, whiteboard photos, or design mockups in the project directory before running graphify .. Claude vision will link visual concepts to code nodes.


Supported Languages

Python, JavaScript, TypeScript, Go, Rust, Java, C, C++, Ruby, C#, Kotlin, Scala, PHP, Swift, Lua, Zig, PowerShell, Elixir, Objective-C, Julia, Verilog, SystemVerilog, Vue, Svelte, Dart

Plus: Markdown, MDX, HTML, plain text, RST, PDF, PNG/JPG/WebP/GIF images, MP4/MOV/MKV/WebM/MP3/WAV/M4A/OGG media.


Troubleshooting

| Symptom | Fix |
|---|---|
| graphify: command not found | Run pip install graphify and ensure pip's bin directory is on PATH |
| Graph is stale after edits | Run graphify . --update |
| Missing nodes for a language | Confirm the tree-sitter grammar is installed; run graphify . --update |
| Video transcription slow | Expected — Whisper runs locally. Add GPU acceleration, or use --no-viz to skip unrelated steps |
| AMBIGUOUS edges dominating | Switch from --mode deep to the default mode, or filter by confidence > 0.7 in graph.json |
| AGENTS.md not picked up | Re-run graphify claw install from the project root |
