Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

my skill -names

v1.0.0

Connect to TigerGraph distributed graph database to query, load, and manage large-scale knowledge graph data using GSQL and REST++ APIs

0 stars · 66 downloads · 0 current · 0 all-time
by Muhammad Asif (@fisa712)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for fisa712/tigergraph-connector.

Prompt Preview: Install & Setup
Install the skill "my skill -names" (fisa712/tigergraph-connector) from ClawHub.
Skill page: https://clawhub.ai/fisa712/tigergraph-connector
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install tigergraph-connector

ClawHub CLI


npx clawhub@latest install tigergraph-connector
Security Scan
Capability signals
Crypto: Can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md, README, and code all describe a TigerGraph connector (GSQL + REST++ capabilities). That purpose is consistent across files. However, the registry metadata name ('my skill -names') doesn't match the internal name ('tigergraph_connector'), and the publisher/source is unknown: a minor coherence issue, but not by itself critical.
Instruction Scope
The documentation and examples describe live operations (connecting to a TigerGraph instance, running queries, loading CSVs). The included Python script, however, uses simulated/mock execution (e.g., _mock_query_execution and a connect() that 'simulates' a connection) and does not actually import or call a TigerGraph client. This inconsistency between 'production-ready' claims and the actual runtime behavior is misleading and could surprise users expecting real network interactions.
Install Mechanism
There is no install spec (instruction-only), which is low risk. README suggests installing pyTigerGraph (pip install pyTigerGraph) but the provided code does not depend on it. Absence of a formal install step is coherent for an instruction-only skill, but the mismatch between README guidance and the code's simulated implementation is a documentation/code inconsistency to be aware of.
Credentials
The SKILL.md describes connection parameters that include an api_token, username, and password, but the skill metadata declares no required environment variables or primary credential. That by itself isn't fatal (credentials may be supplied at runtime), but it's an inconsistency: the skill requires sensitive credentials to perform useful work yet doesn't declare them in metadata. Users should not assume the registry will manage credentials safely.
Persistence & Privilege
No persistent or elevated privileges are requested: always is false, there is no install spec writing files, and the skill does not declare config paths or system-level access. Autonomous invocation is allowed by default (normal) but there are no additional persistence flags.
What to consider before installing
This package claims to be a production TigerGraph connector but has several red flags: the source/publisher is unknown and the registry name doesn't match the internal project name; the documentation advertises a real connector while the included Python file uses simulated/mock methods rather than performing real network operations; and the skill describes needing an API token/credentials, but none are declared in the registry metadata. Before installing or providing credentials:

  1. Ask the publisher for provenance and a proper homepage/repo.
  2. Review the Python code yourself; note that run_query currently calls a mock method that returns dummy results.
  3. Do not provide API tokens or passwords until you confirm the skill actually uses a legitimate TigerGraph client and communicates with expected endpoints.
  4. If you still want to test it, run it in an isolated environment and avoid exposing production credentials.
  5. Request or require an update that either implements real client calls (with clear dependency instructions) or explicitly documents that the current implementation is a stub/mock.

Like a lobster shell, security has layers — review code before you run it.

latest: vk970bn3e10rh1a6hby3skmfe1n84zpsh
66 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

TigerGraph Connector

Purpose

This skill enables comprehensive interaction with TigerGraph graph database for storing, querying, analyzing, and managing large-scale knowledge graph data.

TigerGraph is a high-performance distributed graph database platform optimized for:

  • Large-scale graph analytics
  • Real-time graph processing
  • Advanced graph algorithms
  • Distributed graph computing
  • Enterprise-grade reliability

Key Capabilities

  • Execute GSQL queries on TigerGraph instances
  • Load vertices and edges via REST++ APIs
  • Run built-in and custom graph algorithms
  • Perform real-time graph analytics
  • Manage graph schema and data
  • Query result mapping to Python objects
  • Batch data loading
  • Performance optimization

When To Use This Skill

Use this skill when:

  • Querying TigerGraph: Executing GSQL queries and algorithms
  • Loading Data: Inserting vertices and edges into graph
  • Graph Analytics: Running PageRank, community detection, etc.
  • Large-Scale Graphs: Processing enterprise-scale knowledge graphs
  • Real-Time Analysis: Performing real-time graph computations
  • Pattern Matching: Finding complex patterns in graph data

Example Triggers

  • "Execute this GSQL query"
  • "Run PageRank algorithm"
  • "Insert vertices into TigerGraph"
  • "Find shortest path between nodes"
  • "Detect communities in the graph"
  • "Get graph statistics and metrics"

Connection Configuration

Connection Parameters

{
  "host": "http://localhost",
  "restpp_port": 9000,
  "graph_name": "MyGraph",
  "api_token": "your-api-token",
  "timeout": 30,
  "retry_count": 3
}
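For illustration, this configuration can be validated before connecting. The `validate_config` helper below is hypothetical, not part of the published skill; its defaults mirror the parameter table in this section.

```python
# Hypothetical validation helper for the connection parameters shown
# above; not part of the published skill.
REQUIRED_KEYS = {"host", "graph_name", "api_token"}
DEFAULTS = {"restpp_port": 9000, "timeout": 30, "retry_count": 3}

def validate_config(config: dict) -> dict:
    """Reject configs missing required keys and fill in defaults."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"missing required connection parameters: {sorted(missing)}")
    return {**DEFAULTS, **config}  # merge defaults without mutating the input

cfg = validate_config({
    "host": "http://localhost",
    "graph_name": "MyGraph",
    "api_token": "your-api-token",
})
```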

Configuration Details

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| host | string | required | TigerGraph server URL |
| restpp_port | integer | 9000 | REST++ API port |
| graph_name | string | required | Graph name to work with |
| api_token | string | required | Authentication token |
| timeout | integer | 30 | Request timeout in seconds |
| retry_count | integer | 3 | Number of retries |
| username | string | optional | Alternative authentication |
| password | string | optional | Alternative authentication |

Authentication Methods

  • API Token (preferred)
  • Username/Password
  • Custom headers
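A sketch of mapping the two documented credential styles onto HTTP headers. Bearer tokens are the REST++ convention; Basic auth is shown as an assumed form for the username/password alternative, so confirm against your TigerGraph version. `build_auth_headers` is a hypothetical helper, not the skill's API.

```python
import base64

def build_auth_headers(api_token=None, username=None, password=None):
    """Map the documented credential styles onto HTTP Authorization
    headers. Illustrative helper, not the skill's actual API."""
    if api_token:
        return {"Authorization": f"Bearer {api_token}"}
    if username and password:
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    raise ValueError("provide an api_token or a username/password pair")

headers = build_auth_headers(api_token="your-api-token")
```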

Core Concepts

GSQL (Graph Search Query Language)

  • Turing-Complete: Supports complex computations
  • Pattern Matching: Efficiently matches graph patterns
  • Algorithm Support: Built-in library of graph algorithms
  • Vertex/Edge Access: Direct access to graph structure
  • Aggregation: Built-in aggregation functions

Example Query:

CREATE QUERY getNeighbors(VERTEX<Person> person) FOR GRAPH MyGraph {
  Start = {person};
  Result = SELECT t
           FROM Start:s -(KNOWS:e)-> Person:t;
  PRINT Result;
}

Graph Schema

Vertex Types

  • Define entities in the graph
  • Have properties (attributes)
  • Can have primary keys
  • Support custom data types

Edge Types

  • Define relationships between vertices
  • Support directional connections
  • Have properties
  • Can be undirected

Properties

  • Store data on vertices/edges
  • Multiple data types supported
  • Can be indexed
  • Support default values

REST++ APIs

  • HTTP-based interface
  • JSON request/response format
  • RESTful endpoint design
  • Real-time data loading
  • Query execution
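Installed queries are exposed over REST++ at `GET /query/{graph_name}/{query_name}`. A minimal stdlib sketch of composing such a URL; `build_query_url` is a hypothetical helper and no request is sent here.

```python
from urllib.parse import urlencode

def build_query_url(host, restpp_port, graph_name, query_name, params=None):
    """Compose the REST++ URL for an installed query
    (GET /query/{graph}/{query}). Hypothetical helper; does not
    perform any network I/O."""
    base = f"{host}:{restpp_port}/query/{graph_name}/{query_name}"
    return f"{base}?{urlencode(params)}" if params else base

url = build_query_url("http://localhost", 9000, "MyGraph",
                      "getNeighbors", {"person": "Alice"})
```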

GSQL Query Patterns

Basic Query Structure

CREATE QUERY queryName(PARAMETERS) FOR GRAPH graphName {
  // Variable declarations
  // Pattern matching
  // Aggregations
  // Output
}

Vertex Pattern Matching

Query Single Vertex Type

Start = {Person.*};
Result = SELECT * FROM Start;

Query Multiple Vertex Types

Start = {Person.* UNION Company.*};
Result = SELECT * FROM Start;

Traversal Patterns

Single-Hop Traversal

Result = SELECT t
         FROM Start:s -(KNOWS:e)-> Person:t;

Multi-Hop Traversal

Result = SELECT t
         FROM Start:s -(KNOWS:e)-> Person:t -(WORKS_AT:e2)-> Company:c;

Variable-Length Traversal

Result = SELECT t
         FROM Start:s -(KNOWS>*1..3)- Person:t;

Aggregation Patterns

Count Aggregation

Result = SELECT COUNT(DISTINCT t)
         FROM Start:s -(KNOWS:e)-> Person:t;

Property Aggregation

Result = SELECT s.name, COUNT(DISTINCT t)
         FROM Start:s -(KNOWS:e)-> Person:t
         GROUP BY s.name;

Filtering Patterns

Where Clause

Result = SELECT *
         FROM Start
         WHERE age > 25 AND status == "active";

Having Clause

Result = SELECT s.name, COUNT(DISTINCT t) as cnt
         FROM Start:s -(KNOWS:e)-> Person:t
         GROUP BY s.name
         HAVING cnt > 5;

Data Loading Operations

Insert Vertices

{
  "vertices": {
    "Person": {
      "alice": {
        "name": "Alice",
        "age": 30,
        "email": "alice@example.com"
      },
      "bob": {
        "name": "Bob",
        "age": 25,
        "email": "bob@example.com"
      }
    }
  }
}

Insert Edges

{
  "edges": {
    "Person": {
      "alice": {
        "KNOWS": {
          "Person": {
            "bob": {
              "since": "2020-01-15"
            }
          }
        }
      }
    }
  }
}
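A sketch of assembling these payloads in Python. Note that TigerGraph's real upsert endpoint (`POST /graph/{graph_name}`) expects each attribute wrapped as `{"value": ...}`, which the simplified examples above omit. `build_upsert_payload` is a hypothetical helper that adds that wrapping for vertices and passes edges through pre-nested.

```python
def build_upsert_payload(vertices=None, edges=None):
    """Assemble a REST++ upsert body. Each vertex attribute is wrapped
    as {"value": ...}; edges are assumed pre-nested in the shape shown
    in the "Insert Edges" example. Hypothetical helper."""
    def wrap(attrs):
        return {k: {"value": v} for k, v in attrs.items()}

    payload = {}
    if vertices:
        payload["vertices"] = {
            vtype: {vid: wrap(attrs) for vid, attrs in items.items()}
            for vtype, items in vertices.items()
        }
    if edges:
        payload["edges"] = edges
    return payload

body = build_upsert_payload(vertices={
    "Person": {"alice": {"name": "Alice", "age": 30}}
})
```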

Batch Loading

CSV File Loading

connector.load_from_csv(
    file_path="data.csv",
    vertex_type="Person",
    mapping={"name": "Name", "age": "Age"}
)
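The `mapping` argument pairs vertex attributes with CSV column names. A self-contained sketch of that translation step; `rows_from_csv` is hypothetical and the published skill's internals differ (values are kept as strings here).

```python
import csv
import io

def rows_from_csv(file_obj, mapping):
    """Translate CSV rows into attribute dicts using a
    {attribute: csv_column} mapping, mirroring the mapping argument
    shown above. Hypothetical helper."""
    reader = csv.DictReader(file_obj)
    for row in reader:
        yield {attr: row[col] for attr, col in mapping.items()}

sample = io.StringIO("Name,Age\nAlice,30\nBob,25\n")
rows = list(rows_from_csv(sample, {"name": "Name", "age": "Age"}))
```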

Graph Algorithms

Built-In Algorithms

PageRank

RUN QUERY pagerank(max_iterations=100, damping_factor=0.85)

Measures vertex importance in the graph.

Shortest Path

RUN QUERY shortest_path(source_vertex, target_vertex)

Finds shortest path between two vertices.

Community Detection

RUN QUERY louvain_community(resolution=1.0)

Detects communities/clusters in graph.

Centrality Analysis

RUN QUERY betweenness_centrality()

Measures vertex betweenness centrality.

Custom Algorithms

Can be defined using GSQL for specific use cases.


Query Execution Patterns

Simple Query Execution

result = connector.run_query(
    query_name="getNeighbors",
    parameters={"person": "Alice"}
)

Query with Timeout

result = connector.run_query(
    query_name="complexQuery",
    parameters={...},
    timeout=60
)

Batch Query Execution

results = connector.batch_query(
    queries=[
        {"name": "query1", "params": {...}},
        {"name": "query2", "params": {...}}
    ]
)
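One way batch execution could isolate per-query failures; a hypothetical sketch, not the skill's real signature. `run_query` is injected as a callable so the example stays self-contained.

```python
def batch_query(run_query, queries):
    """Run several installed queries in order, isolating failures so
    one bad query does not abort the batch. `run_query` is an injected
    (name, params) -> result callable; hypothetical sketch only."""
    results = []
    for q in queries:
        try:
            results.append({"name": q["name"],
                            "result": run_query(q["name"], q.get("params", {}))})
        except Exception as exc:
            results.append({"name": q["name"], "error": str(exc)})
    return results

# Demo with a stub standing in for a live connector:
demo = batch_query(lambda name, params: {"echo": name},
                   [{"name": "query1"}, {"name": "query2"}])
```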

Error Handling

Common Error Scenarios

| Error | Cause | Solution |
| --- | --- | --- |
| Connection refused | Server not running | Start TigerGraph server |
| Unauthorized | Invalid token | Regenerate API token |
| Query not found | Query not installed | Install query definition |
| Timeout | Query too slow | Optimize query, increase timeout |
| Graph not found | Wrong graph name | Verify graph name |

Error Handling Best Practices

  1. Validate Connections - Check before operations
  2. Handle Retries - Implement exponential backoff
  3. Log Errors - Track all errors for debugging
  4. Graceful Degradation - Handle partial failures
  5. Timeout Management - Set appropriate timeouts
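Practice 2 above (retries with exponential backoff) can be sketched as follows; `with_retries` and the flaky stand-in are illustrative names, not the skill's API.

```python
import random
import time

def with_retries(operation, max_attempts=3, base_delay=0.5):
    """Retry a zero-argument callable on ConnectionError, sleeping
    base_delay * 2**(attempt-1) plus jitter between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # e.g. 0.5s, 1s, 2s, ... plus a small random jitter
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

# Stand-in operation that fails twice, then succeeds:
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky, max_attempts=5, base_delay=0.01)
```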

Best Practices

1. Query Design

✅ Use installed queries for performance
✅ Pre-compile queries instead of dynamic ones
✅ Optimize pattern matching
✅ Use appropriate graph traversal depth
✅ Leverage built-in algorithms

2. Data Loading

✅ Use batch loading for bulk data
✅ Validate data before loading
✅ Use atomic transactions
✅ Monitor loading progress
✅ Handle duplicates appropriately

3. Performance

✅ Create indexes on frequently queried properties
✅ Monitor query execution plans
✅ Use result streaming for large datasets
✅ Cache frequently accessed data
✅ Distribute computation across nodes

4. Schema Management

✅ Design schema for query patterns
✅ Use appropriate data types
✅ Maintain referential integrity
✅ Document schema changes
✅ Version schema updates

5. Analytics

✅ Use built-in graph algorithms
✅ Tune algorithm parameters
✅ Monitor resource usage
✅ Implement incremental updates
✅ Cache algorithm results

6. Scalability

✅ Partition data appropriately
✅ Use distributed loading
✅ Monitor cluster health
✅ Balance load across nodes
✅ Plan capacity growth

7. Security

✅ Protect API tokens
✅ Use HTTPS connections
✅ Implement access control
✅ Audit all operations
✅ Encrypt sensitive data

8. Maintenance

✅ Monitor database health
✅ Regular backups
✅ Update software regularly
✅ Archive old data
✅ Clean up temporary data


Integration with Related Skills

Neo4j Integration

  • Alternative property graph database
  • Query language: Cypher vs GSQL
  • Scale and deployment models differ

JanusGraph Connector

  • Distributed graph storage
  • Different architecture and use cases
  • Complementary strengths

RDF Triple Store Integration

  • Semantic web alternative
  • Triple-based vs property graph
  • Different query languages

Graph Query Optimization

  • Optimize GSQL query performance
  • Analyze execution plans
  • Performance tuning

REST API Wrapper

  • Expose TigerGraph via REST API
  • Custom endpoint creation
  • API documentation

Libraries & Dependencies

Core Libraries

| Library | Purpose |
| --- | --- |
| pyTigerGraph | Official Python SDK |
| requests | HTTP client |
| json | JSON handling |

Installation

pip install pyTigerGraph requests

Expected Benefits

Using this skill enables:

Performance - High-speed graph processing at scale
Analytics - Advanced graph algorithms and analytics
Scalability - Enterprise-scale knowledge graph processing
Real-Time - Real-time graph computations
Flexibility - Support for complex graph patterns
Reliability - Enterprise-grade reliability and backup
Integration - Easy integration with applications


Quick Reference

Connection & Query

connector = TigerGraphConnector()
connector.connect(config)
result = connector.run_query("queryName", params)
connector.close()

Common Operations

# Insert vertices
connector.insert_vertices(vertex_type, vertices)

# Insert edges
connector.insert_edges(edge_type, edges)

# Run algorithm
connector.run_algorithm("pagerank", params)

# Get statistics
stats = connector.get_statistics()

Data Loading

connector.load_from_csv(file_path, vertex_type, mapping)
connector.batch_insert(vertices, edges)

Related Skills

  • Neo4j Integration - Property graph database using Cypher
  • JanusGraph Connector - Distributed graph using Gremlin
  • RDF Triple Store Integration - SPARQL for RDF
  • GraphQL Graph Mapping - GraphQL API interface
  • Graph Query Optimization - Query performance tuning
  • REST API Wrapper - REST interface for graphs

Resources


Status: ✅ Production Ready
Version: 1.0.0
Last Updated: April 12, 2026
