Azure Cosmos DB Python

v0.1.0

Azure Cosmos DB SDK for Python (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data. Triggers: "cosmos db", "CosmosClient", "container", "document", "NoSQL", "partition key".


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for thegovind/azure-cosmos-py.

Prompt preview (Install & Setup):
Install the skill "Azure Cosmos DB Python" (thegovind/azure-cosmos-py) from ClawHub.
Skill page: https://clawhub.ai/thegovind/azure-cosmos-py
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install azure-cosmos-py

ClawHub CLI

Package manager switcher

npx clawhub@latest install azure-cosmos-py
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name/description, SKILL.md examples, and the setup script all target Azure Cosmos DB NoSQL operations (create containers, partitioning, CRUD, queries). The requested capabilities are consistent with the stated purpose.
Instruction Scope
The runtime instructions and included CLI script expect COSMOS_ENDPOINT and optionally COSMOS_KEY (or DefaultAzureCredential). However, the skill metadata declares no required environment variables. The setup script performs management actions (creating databases/containers, changing throughput, running cross‑partition queries, and counting items) that require broad read/write privileges on the account — appropriate for a DB tool but potentially dangerous if run with full account keys or against production data. The SKILL.md and script do not instruct reading unrelated files, but they do rely on DefaultAzureCredential, which may pick up system or user credentials transparently.
Install Mechanism
There is no install spec (instruction-only). SKILL.md recommends pip packages (azure-cosmos, azure-identity) which is expected and low-risk. No arbitrary downloads or archive extraction are present.
Credentials
The registry declares no required env vars, but SKILL.md and the scripts require COSMOS_ENDPOINT and may require COSMOS_KEY (or DefaultAzureCredential). COSMOS_KEY is a full account key (high privilege). The omission of required env vars from the metadata is an inconsistency and increases risk, because users may not realize they are being asked for account credentials.
Persistence & Privilege
The skill is not always-on and does not request persistent platform privileges. It does not modify other skills or global agent settings. Autonomous invocation is allowed (platform default) but not uniquely concerning here.
What to consider before installing
This skill appears to be a legitimate Cosmos DB helper, but exercise caution before installing or running its script. Key points:

- Metadata omission: the skill metadata lists no required environment variables, yet SKILL.md and the included script require COSMOS_ENDPOINT and may use COSMOS_KEY or DefaultAzureCredential. Ask the publisher to declare required env vars explicitly.
- High‑privilege credential: COSMOS_KEY is an account key that grants broad read/write/admin access. Do NOT provide a production account key to an untrusted skill. Prefer a least‑privilege service principal or managed identity with only the needed permissions.
- DefaultAzureCredential caveat: DefaultAzureCredential can pick up credentials from many sources (dev tooling, Azure CLI, managed identity). Be aware which identity will be used in your environment.
- Code quality: the included setup_cosmos_container.py has a visible syntax/logic bug (malformed append of an excluded path) that may cause runtime crashes; review and fix it before running. The script creates containers, changes throughput, and can run cross‑partition queries that enumerate counts of all items (possible data exposure), so test against a non‑production account first.

Recommendations: ask the publisher for a source/homepage, require them to update the metadata to list required env vars, inspect and/or lint the script locally, and run only with a scoped test account or a role with least privilege.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97arx43h76w226c7gadpa7rsh809rae
1.4k downloads · 1 star · 1 version
Updated 2mo ago
v0.1.0
MIT-0

Azure Cosmos DB SDK for Python

Client library for Azure Cosmos DB NoSQL API — globally distributed, multi-model database.

Installation

pip install azure-cosmos azure-identity

Environment Variables

COSMOS_ENDPOINT=https://<account>.documents.azure.com:443/
COSMOS_DATABASE=mydb
COSMOS_CONTAINER=mycontainer

Authentication

from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

credential = DefaultAzureCredential()
endpoint = "https://<account>.documents.azure.com:443/"

client = CosmosClient(url=endpoint, credential=credential)

Client Hierarchy

| Client | Purpose | Get From |
| --- | --- | --- |
| CosmosClient | Account-level operations | Direct instantiation |
| DatabaseProxy | Database operations | client.get_database_client() |
| ContainerProxy | Container/item operations | database.get_container_client() |

Core Workflow

Setup Database and Container

from azure.cosmos import PartitionKey

# Get or create database
database = client.create_database_if_not_exists(id="mydb")

# Get or create container with partition key
container = database.create_container_if_not_exists(
    id="mycontainer",
    partition_key=PartitionKey(path="/category")
)

# Get existing
database = client.get_database_client("mydb")
container = database.get_container_client("mycontainer")

Create Item

item = {
    "id": "item-001",           # Required: unique within partition
    "category": "electronics",   # Partition key value
    "name": "Laptop",
    "price": 999.99,
    "tags": ["computer", "portable"]
}

created = container.create_item(body=item)
print(f"Created: {created['id']}")

Read Item

# Read requires id AND partition key
item = container.read_item(
    item="item-001",
    partition_key="electronics"
)
print(f"Name: {item['name']}")

Update Item (Replace)

item = container.read_item(item="item-001", partition_key="electronics")
item["price"] = 899.99
item["on_sale"] = True

updated = container.replace_item(item=item["id"], body=item)

Upsert Item

# Create if not exists, replace if exists
item = {
    "id": "item-002",
    "category": "electronics",
    "name": "Tablet",
    "price": 499.99
}

result = container.upsert_item(body=item)

Delete Item

container.delete_item(
    item="item-001",
    partition_key="electronics"
)

Queries

Basic Query

# Query within a partition (efficient)
query = "SELECT * FROM c WHERE c.price < @max_price"
items = container.query_items(
    query=query,
    parameters=[{"name": "@max_price", "value": 500}],
    partition_key="electronics"
)

for item in items:
    print(f"{item['name']}: ${item['price']}")

Cross-Partition Query

# Cross-partition (more expensive, use sparingly)
query = "SELECT * FROM c WHERE c.price < @max_price"
items = container.query_items(
    query=query,
    parameters=[{"name": "@max_price", "value": 500}],
    enable_cross_partition_query=True
)

for item in items:
    print(item)

Query with Projection

query = "SELECT c.id, c.name, c.price FROM c WHERE c.category = @category"
items = container.query_items(
    query=query,
    parameters=[{"name": "@category", "value": "electronics"}],
    partition_key="electronics"
)
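
For large result sets, the iterator returned by query_items also supports explicit paging via by_page() and a continuation token. A sketch, assuming an existing container client; the page size of 10 is illustrative:

```python
def first_page_and_token(container, category):
    # by_page() yields one page of results at a time; the pager's
    # continuation_token can be persisted and passed back to
    # by_page(token) later to resume where this page ended.
    pager = container.query_items(
        query="SELECT * FROM c WHERE c.category = @category",
        parameters=[{"name": "@category", "value": category}],
        partition_key=category,
        max_item_count=10,
    ).by_page()
    page = list(next(pager))
    return page, pager.continuation_token
```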

Read All Items

# Read all items in the container (cross-partition)
items = container.read_all_items()

# Or scope to a single partition
items = container.query_items(
    query="SELECT * FROM c",
    partition_key="electronics"
)

Partition Keys

Critical: Always include partition key for efficient operations.

from azure.cosmos import PartitionKey

# Single partition key
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customer_id")
)

# Hierarchical partition key (preview)
container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path=["/tenant_id", "/user_id"], kind="MultiHash")
)

Throughput

# Create container with provisioned throughput
container = database.create_container_if_not_exists(
    id="mycontainer",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=400  # RU/s
)

# Read current throughput (read_offer() is a deprecated alias)
throughput = container.get_throughput()
print(f"Throughput: {throughput.offer_throughput} RU/s")

# Update throughput
container.replace_throughput(throughput=1000)

Async Client

from azure.cosmos.aio import CosmosClient
from azure.identity.aio import DefaultAzureCredential

async def cosmos_operations():
    endpoint = "https://<account>.documents.azure.com:443/"
    credential = DefaultAzureCredential()
    
    async with CosmosClient(endpoint, credential=credential) as client:
        database = client.get_database_client("mydb")
        container = database.get_container_client("mycontainer")
        
        # Create
        await container.create_item(body={"id": "1", "pk": "test"})
        
        # Read
        item = await container.read_item(item="1", partition_key="test")
        
        # Query
        async for item in container.query_items(
            query="SELECT * FROM c",
            partition_key="test"
        ):
            print(item)

import asyncio
asyncio.run(cosmos_operations())

Error Handling

from azure.cosmos.exceptions import CosmosHttpResponseError

try:
    item = container.read_item(item="nonexistent", partition_key="pk")
except CosmosHttpResponseError as e:
    if e.status_code == 404:
        print("Item not found")
    elif e.status_code == 429:
        print(f"Rate limited. Retry after: {e.headers.get('x-ms-retry-after-ms')}ms")
    else:
        raise

Best Practices

  1. Always specify partition key for point reads and queries
  2. Use parameterized queries to prevent injection and improve caching
  3. Avoid cross-partition queries when possible
  4. Use upsert_item for idempotent writes
  5. Use async client for high-throughput scenarios
  6. Design partition key for even data distribution
  7. Use read_item instead of query for single document retrieval
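
One common way to apply practice 6 when a single logical key (say, a tenant) is too hot is a synthetic partition key that spreads writes over a fixed number of buckets. A sketch with a hypothetical helper; the `tenant_id` field and bucket count are assumptions:

```python
import zlib


def with_synthetic_pk(doc: dict, buckets: int = 10) -> dict:
    # Derive a synthetic partition key like "tenant-42-3" so one hot
    # tenant's writes land on several physical partitions. zlib.crc32
    # is stable across processes, unlike the built-in hash().
    doc = dict(doc)
    bucket = zlib.crc32(doc["tenant_id"].encode()) % buckets
    doc["pk"] = f"{doc['tenant_id']}-{bucket}"
    return doc
```

Reads for a tenant then fan out over the bucket values, so this trades point-read simplicity for write distribution.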

Reference Files

| File | Contents |
| --- | --- |
| references/partitioning.md | Partition key strategies, hierarchical keys, hot partition detection and mitigation |
| references/query-patterns.md | Query optimization, aggregations, pagination, transactions, change feed |
| scripts/setup_cosmos_container.py | CLI tool for creating containers with partitioning, throughput, and indexing |
