Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lance Store

v1.0.12

Persist and retrieve structured data using the Lance columnar format. Use when you need to store, query, or analyze data across sessions — such as saving ski...

0 stars · 118 downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vitorhugoze/lance-store.

Prompt preview: Install & Setup
Install the skill "Lance Store" (vitorhugoze/lance-store) from ClawHub.
Skill page: https://clawhub.ai/vitorhugoze/lance-store
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lance-store

ClawHub CLI


npx clawhub@latest install lance-store
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code (scripts/*.py) implements local dataset creation, append/read/update/delete, metadata management, and uses a 'lance' Python module to operate on local files — this matches the skill's stated purpose of persisting structured data in Lance format. However, the install/requirements references a PyPI package named 'pylance' (and the README/requirements.txt comment insists 'pylance' is the package that provides the 'lance' module). That naming mismatch is unexpected and reduces confidence that the install instructions map cleanly to the runtime imports.
Instruction Scope
SKILL.md and the CLI scripts operate on the current working directory only, list and read local 'metadata.lance', and provide explicit commands; there are no instructions to read unrelated system files, environment variables, or to transmit data to external endpoints. The code includes input validation (dataset name / backup path checks) to mitigate path traversal.
Install Mechanism
The registry install block and requirements.txt ask for 'pylance' via an installer labelled 'uv' and for 'pandas'. 'pylance' is an unusual name for a Lance-format runtime dependency (it is also the name of a Microsoft VSCode language server), so this could be a naming mistake, a confusing wrapper, or an incorrect package. The install mechanism 'uv' is non-standard/ambiguous in this context. These inconsistencies could cause the installed packages not to provide the expected 'lance' module, or — in the worst case — install an unrelated package. No arbitrary download URLs are present, which reduces highest-risk concerns, but the package-name ambiguity is a real red flag to verify.
Credentials
The skill requests no environment variables, no external credentials, and no config paths. All filesystem access is scoped to the current working directory; the code does read and write local dataset directories and metadata files, which is appropriate for a storage skill.
Persistence & Privilege
always: false and the skill does not request permanent elevated platform privileges. It creates and modifies files under the current working directory (data and metadata files) but does not modify other skills, global config, or system-wide settings.
What to consider before installing
Before installing or running this skill, verify the package and installer mapping:

1. Check PyPI (or whichever source the platform will use) to confirm that the package named 'pylance' actually provides the 'lance' Python module expected by the code — if 'pylance' is a typo or a different package, the skill may fail or install an unrelated package.
2. Ask the publisher/developer to explain the 'uv' installer kind (how it resolves packages) and why 'pylance' is used instead of a package named 'lance'.
3. Run this skill in an isolated/sandbox environment (not on a production host) and inspect which packages get installed and where they came from.
4. If you proceed, run small tests in a temporary directory to confirm datasets are created only where you expect.

If the developer confirms that 'pylance' is intentionally the correct PyPI package for the Lance runtime and explains the installer, the concerns here would likely be resolved and the skill could be considered coherent.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3

Install

Install pylance (Lance columnar format) via uv:

uv tool install pylance

Install pandas via uv:

uv tool install pandas
latest: vk976bpzm7dfqsbw5zvw9404x1h83e7jw
118 downloads · 0 stars · 6 versions
Updated 1mo ago
v1.0.12
MIT-0

Lance Store

Installation

python3 -m pip install -r requirements.txt

A persistent data store using the Lance columnar format for fast ML data access.

Quick Start

# List all datasets and their metadata
python3 scripts/command.py list-datasets-info

# Create a dataset
python3 scripts/command.py create-dataset <name> <field1> <field2> ...

# Append data
python3 scripts/command.py append-to-dataset <name> <value1> <value2> ...

# Read all records from a dataset
python3 scripts/command.py read-dataset <name>

Note: list-datasets-info shows dataset metadata (schema, field types, record count) — it does not return the actual data rows. Use read-dataset to retrieve records.

Storage Location

Datasets are created and stored in the current working directory ('.').

Critical Behavior: Data Type Strictness

⚠️ Lance is strict about data types — they CANNOT change after the first record

When you append the first record to a dataset, Lance infers the data type for each field. All subsequent records MUST use the same types.

Example — this FAILS:

# First record: age as STRING
append-to-dataset users "John" "25" "john@test.com"

# Second record: age as INTEGER (will FAIL!)
append-to-dataset users "Jane" 30 "jane@test.com"
# Error: `age` should have type large_string but type was int64

Correct approach — maintain consistent types:

# First record: age as STRING
append-to-dataset users "John" "25" "john@test.com"

# Second record: age as STRING
append-to-dataset users "Jane" "30" "jane@test.com"

Why This Matters

Unlike traditional databases that may coerce types, Lance rejects type mismatches. If you store numbers as strings initially, you must always pass strings. Plan your schema carefully.
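Since the first append locks types, one defensive pattern is to normalize every value to a string before appending, so the schema is string-typed from the start and later appends can never trip the type check. This is a sketch of our own, not part of the skill's code:

```python
def normalize_record(values):
    """Convert every value to its string form so appends always match
    a string-typed schema, however callers pass numbers."""
    return [str(v) for v in values]

# Both records now carry `age` as a string, so the second append
# cannot fail Lance's type check:
first = normalize_record(["John", 25, "john@test.com"])    # age -> "25"
second = normalize_record(["Jane", "30", "jane@test.com"])  # age stays "30"
```

The trade-off is that numeric queries then operate on strings; if you need real numeric types, set them deliberately in the very first record instead.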

Initialization Workflow

When starting a session, always initialize by listing existing datasets first:

# This command returns ALL datasets with their structure
python3 scripts/command.py list-datasets-info

Example output:

{
    "skill": "lance",
    "operation": "list_datasets_info",
    "status": "success",
    "data": [
        {
            "dataset_name": "users",
            "path": "/data/users",
            "fields": ["name", "age", "email"],
            "field_types": {
                "_id": "large_string",
                "_updated_at": "timestamp[us]",
                "name": "large_string",
                "age": "large_string",
                "email": "large_string"
            },
            "record_count": 2,
            "columns": ["id", "_updated_at", "name", "age", "email"],
            "last_updated": "2026-03-21T17:57:44.595628"
        }
    ],
    "error": null
}

Understanding field_types

State         Meaning
{} (empty)    Dataset exists but no records yet — types not yet defined
populated     Types are locked — appends must match

Important: If field_types is empty, the first append will define types. Be deliberate about the first record's types.
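A small sketch of how a session-initialization step might consume the list-datasets-info output shown above, keeping only datasets whose types are already locked (the helper name and filtering are ours; the JSON envelope follows the example output):

```python
import json

def locked_schemas(listing_json: str) -> dict:
    """Map each dataset name to its field_types, keeping only datasets
    whose schema is already locked (non-empty field_types)."""
    listing = json.loads(listing_json)
    return {
        d["dataset_name"]: d["field_types"]
        for d in listing["data"]
        if d["field_types"]  # empty dict means types not defined yet
    }
```

Datasets missing from the result still have an undefined schema, so the next append to them will set the types.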

Commands Reference

Create Dataset

python3 scripts/command.py create-dataset <name> <field1> <field2> ...

Creates a metadata entry. Fields have no types until first append.

Append Record

python3 scripts/command.py append-to-dataset <name> <value1> <value2> ...

Appends one record. Types are inferred from first record.

Batch Append

python3 scripts/command.py batch-append-to-dataset <name> '<json-array>'

Example: batch-append-to-dataset users '[["Alice", "22", "alice@test.com"], ["Bob", "35", "bob@test.com"]]'
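The JSON-array argument is awkward to hand-write; a sketch of building it programmatically before shelling out (stringifying values here is our choice, matching the string-schema advice above, not a requirement of the command):

```python
import json

def batch_payload(records):
    """Serialize a list of records (each a list of values) into the
    JSON-array argument expected by batch-append-to-dataset."""
    return json.dumps([[str(v) for v in rec] for rec in records])

payload = batch_payload([["Alice", 22, "alice@test.com"],
                         ["Bob", 35, "bob@test.com"]])
# Pass `payload` as the single argument, e.g. via subprocess:
# subprocess.run(["python3", "scripts/command.py",
#                 "batch-append-to-dataset", "users", payload])
```

Passing the payload as one argv element (rather than through a shell string) sidesteps quoting problems entirely.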

Update Record

python3 scripts/command.py update-dataset-record <name> <record_id> <value1> <value2> ...

Updates fields for a specific record by ID.

Delete Record

python3 scripts/command.py delete-dataset-record <name> <record_id>

List All Datasets

python3 scripts/command.py list-datasets

Get Dataset Info

python3 scripts/command.py get-dataset-info <name>

Returns schema, field types (if data exists), and record count.

List All Datasets with Full Info

python3 scripts/command.py list-datasets-info

Recommended for initialization. Returns all datasets with complete metadata.

Get Dataset Path

python3 scripts/command.py get-dataset-path-info <name>

Backup Dataset

python3 scripts/command.py backup-dataset <name> <backup_path>

Count Records

python3 scripts/command.py count-records <name>

Read All Records

Returns all records from the dataset as a list of objects.

python3 scripts/command.py read-dataset <name>

Drop Dataset

Requires confirmation if a backup has not been created beforehand.

Delete the entire dataset and its metadata.

python3 scripts/command.py drop-dataset <name>

Internal fields available in every dataset:

Field         Type       Description
_id           string     UUID — unique record identifier
_updated_at   timestamp  When the record was last inserted or updated

List Records (Paginated)

python3 scripts/command.py list-records <name> --limit 10 --offset 0

Returns records with optional pagination.
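Walking a large dataset page by page can be wrapped in a small generator. This sketch takes a pluggable `fetch(limit, offset)` callable standing in for a call to `list-records <name> --limit L --offset O` (the stopping rule, a short page meaning the end, is our assumption about the command's behavior):

```python
def iter_records(fetch, limit=10):
    """Yield records page by page until a page shorter than `limit`
    signals that the dataset is exhausted."""
    offset = 0
    while True:
        page = fetch(limit, offset)
        yield from page
        if len(page) < limit:
            return
        offset += limit
```

In real use, `fetch` would invoke the CLI and return the `data` list from the JSON response.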

Get Single Record

python3 scripts/command.py get-record <name> <record_id>

Retrieves a specific record by its UUID.

Response Format

All commands return JSON:

{
  "skill": "lance",
  "operation": "<operation_name>",
  "status": "success|error",
  "data": <result_data_or_null>,
  "error": <error_message_or_null>
}
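A caller can unwrap this envelope uniformly, returning `data` on success and raising on error. A minimal sketch (the helper name is ours):

```python
import json

def unwrap(response_text: str):
    """Parse a command's JSON envelope; return `data` on success,
    raise with the reported error message otherwise."""
    envelope = json.loads(response_text)
    if envelope["status"] != "success":
        raise RuntimeError(envelope["error"] or "unknown error")
    return envelope["data"]
```

Centralizing this check means every command invocation gets the same error handling for free.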

Internal Fields

Every dataset automatically includes:

  • _id — UUID for each record
  • _updated_at — timestamp of last insert/update

These are managed automatically — when appending, only provide your defined fields.
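We have not traced the exact implementation in scripts/*.py, but bookkeeping fields like these are typically generated along the following lines (a sketch under that assumption; the skill's actual scheme may differ):

```python
import uuid
from datetime import datetime, timezone

def with_internal_fields(record: dict) -> dict:
    """Attach an _id (random UUID) and _updated_at (UTC ISO timestamp)
    to a user-supplied record, leaving the user's fields untouched."""
    return {
        "_id": str(uuid.uuid4()),
        "_updated_at": datetime.now(timezone.utc).isoformat(),
        **record,
    }
```

This is why appends take only your defined fields: the store fills in the rest at write time.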

Data Type Inference

Lance infers types from the first record:

Python Type     Lance Type
"string"        large_string
25 (int)        int64
25.5 (float)    float64
True/False      bool

CLI caveat: When values are passed via the command line, they all arrive as strings. To get integer or float types, initialize the dataset from a script that passes real Python values rather than from the CLI.
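If a script does start from CLI-style strings, it needs an explicit coercion step before the first write. A sketch of one such helper (the rules, int first, then float, then fall back to string, are our choice, not the skill's):

```python
def coerce(value: str):
    """Best-effort conversion of a CLI string: int, then float,
    falling back to the original string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value
```

Applying `coerce` to the first record's values determines whether each field locks to int64, float64, or large_string.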

Tips

  1. Initialize at session start: Run list-datasets-info to understand what data already exists
  2. Plan your schema: First record determines types for the entire dataset
  3. Use batch append when adding multiple records: More efficient than individual appends

Requirements

Dependencies are declared in frontmatter (metadata.openclaw.install) and handled by the OpenClaw install system via uv. The Python packages required are:

  • pylance — The Lance columnar format library.

    ⚠️ Naming note: Despite the PyPI package being named pylance, the library is imported as import lance in Python code. This is the official Lance project naming convention — it is NOT the VS Code "pylance" language server. See lance.org for details.

  • pandas — Data manipulation
