Drip Billing
v1.0.3
Track AI agent usage and costs with Drip metered billing. Use when you need to record aggregate LLM usage, tool calls, agent runs, or other metered usage for...
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (metered billing / usage tracking) matches the SDK calls and examples in SKILL.md and references/API.md. Required credentials (a DRIP_API_KEY) and a base URL are plausible for this purpose. However, the registry metadata included with the skill claims no required env vars/binaries while SKILL.md explicitly declares DRIP_API_KEY and DRIP_BASE_URL — an inconsistency in packaging/metadata that should be resolved.
Instruction Scope
Runtime instructions stay within the billing/telemetry domain (trackUsage, recordRun, startRun, emitEvent, middleware integrations). They explicitly warn not to send raw prompts/PII and recommend least-privilege public keys. The notable scope expansion: SKILL.md offers an MCP server example using `npx @drip/mcp-server` that would give the agent native 'drip_*' tools and allow autonomous telemetry calls — expected for an integration but raises the risk surface if misconfigured or provided with an admin key.
Install Mechanism
The skill is instruction-only (no install spec), but it instructs the user/agent to install or run packages from npm (e.g., `npm install @drip-sdk/node`, `npx @drip/mcp-server`). Installing via npm/npx is a common pattern, but it is riskier than purely instruction-based behavior because it pulls and executes third-party code. There is no verified upstream homepage or source in the registry metadata to validate the npm package; SKILL.md recommends checking the package on npm but does not provide a canonical upstream link.
Credentials
The only required secrets described are DRIP_API_KEY (primary) and DRIP_BASE_URL; DRIP_WORKFLOW_ID is optional. These are proportional to a billing/tracking SDK. The SKILL.md explicitly distinguishes public keys (pk_) for tracking from secret keys (sk_) that grant admin abilities; supplying an sk_ key would substantially increase risk (webhook & key management capabilities). The registry metadata, however, contradicts SKILL.md by claiming no required env vars — this mismatch is a red flag.
Persistence & Privilege
The skill does not request 'always: true' or other elevated installation privileges. It does provide instructions to run an MCP server (via npx) that would register native tools the agent can call autonomously; this is normal for integrations but increases blast radius if a high-privilege API key is used or if the MCP server is obtained from an untrusted package. The skill does not appear to modify other skills or system-wide settings.
What to consider before installing
Before installing or enabling this skill:
- Verify the upstream package and publisher: find the @drip-sdk/node package on the official npm registry and confirm the publisher and source repository/homepage match what you expect. The registry metadata here lacks a homepage/source, which is a red flag.
- Prefer using a public 'pk_' key (as the SKILL.md recommends) rather than an 'sk_' secret key. An sk_ key grants admin operations (webhooks, key rotation, feature flags) and would greatly increase risk if the agent or package is compromised.
- If you run the MCP server via `npx @drip/mcp-server`, run it in a restricted environment (no sensitive env vars exposed) and only with a pk_ key; npx will execute remote code from npm.
- Confirm the DRIP_BASE_URL value is a trusted endpoint you control or recognize, and enforce metadata allowlists/redaction before emitting telemetry (do not send prompts, PII, secrets, or full request bodies).
- Ask the publisher for source code or a canonical homepage/repository if you need higher assurance. Without clear provenance and matching registry metadata, treat this package as unverified and exercise standard supply-chain caution (review package contents, pins, or run in sandboxed environments).
- If you need further certainty, ask the skill maintainer to fix the metadata inconsistencies (declare the required env vars in the registry spec) and to provide a verified source repo and release tarball.
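For the npx point above, one way to limit exposure is to launch the server from an empty environment with `env -i`, passing through only what it needs. This is a sketch: the wrapper function and the pass-through variables are illustrative, and the package name is as given in SKILL.md and should be verified before running.

```shell
# Start the MCP server from an empty environment (env -i), passing through
# only PATH, HOME, and the low-privilege key it needs; no other exported
# secrets reach the child process.
run_mcp_sandboxed() {
  env -i PATH="$PATH" HOME="$HOME" \
      DRIP_API_KEY="$1" \
      npx @drip/mcp-server
}

# Example (use a pk_ key, never an sk_ key here):
# run_mcp_sandboxed "pk_live_..."
```

`env -i` starts from a clean slate, so any other exported credentials in your shell are invisible to the npx-fetched code.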
Drip Billing Integration
Track usage and costs for AI agents, LLM calls, tool invocations, and any metered workload.
When to Use This Skill
- Recording LLM usage quantities (for example total tokens per call)
- Tracking tool/function call costs
- Logging agent execution traces
- Metering API requests for billing
- Attributing costs to customers or workflows
Security & Data Privacy
Key scoping (least privilege):
- Use `pk_` (public) keys for usage tracking, customer management, and billing. This is sufficient for all skill operations.
- Only use `sk_` (secret) keys if you need admin operations: webhook management, API key rotation, or feature flags.
- Public keys (`pk_`) cannot manage webhooks, rotate API keys, or toggle feature flags, which limits blast radius if the key is compromised.
Metadata safety:
- Include only minimal non-sensitive operational context in metadata.
- Never include PII, secrets, passwords, API keys, raw user prompts, model outputs, or full request/response bodies.
- Use a strict allowlist and redaction policy before telemetry writes.
- Prefer hashes/IDs (for example `queryHash`) instead of raw user text.
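The allowlist-and-redaction step above can be sketched as a small helper. The allowlist contents and the function itself are illustrative, not part of the SDK:

```typescript
// Keep only explicitly allowlisted metadata keys before any telemetry write.
// Tailor the allowlist to the operational fields your billing actually needs.
const METADATA_ALLOWLIST = new Set(['model', 'toolName', 'statusCode', 'latencyMs', 'queryHash']);

function sanitizeMetadata(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (METADATA_ALLOWLIST.has(key)) out[key] = value; // everything else is dropped
  }
  return out;
}
```

Anything not on the allowlist (raw prompts, user text, secrets) is dropped entirely rather than redacted in place, which fails safe.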
What data is transmitted:
- Usage quantities (meter name + numeric value)
- Customer identifiers
- Run lifecycle events (start/end, status, duration)
- Sanitized metadata you explicitly provide (model family, tool name, status code, latency, hashed IDs)
What is NOT transmitted:
- Raw prompts, completions, or model outputs
- Environment variables or secrets
- File contents or source code
Installation
npm install @drip-sdk/node
Environment Setup
# Recommended: public key — sufficient for all usage tracking and billing
export DRIP_API_KEY=pk_live_...
# Only if you need admin operations (webhooks, key management, feature flags):
# export DRIP_API_KEY=sk_live_...
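Since only a `pk_` key is needed for tracking, a startup guard can catch an over-privileged key before the client is constructed. This is a sketch, not an SDK feature; the `pk_`/`sk_` prefix convention follows SKILL.md:

```typescript
// Fail fast if an admin (sk_) key is supplied where a public (pk_) key suffices.
function assertTrackingKey(key: string | undefined): string {
  if (!key) throw new Error('DRIP_API_KEY is not set');
  if (key.startsWith('sk_')) {
    throw new Error('Use a pk_ (public) key for usage tracking; sk_ keys grant admin operations');
  }
  return key;
}

// Example: validate before constructing the client
// const drip = new Drip({ apiKey: assertTrackingKey(process.env.DRIP_API_KEY) });
```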
Telemetry Safety Contract
- Send only metadata needed for billing and diagnostics.
- Do not send raw prompts, raw model outputs, raw query text, full request/response bodies, or credentials.
- Prefer stable identifiers and hashes (for example `queryHash`) over raw user content.
- Emit telemetry only to a trusted `DRIP_BASE_URL`.
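One way to implement the `queryHash` idea, sketched with Node's built-in crypto module (the function name mirrors the contract above but is not an SDK export):

```typescript
import { createHash } from 'node:crypto';

// Derive a stable, non-reversible identifier from raw user text so telemetry
// can correlate repeated queries without ever transmitting the text itself.
function queryHash(rawQuery: string): string {
  return createHash('sha256').update(rawQuery, 'utf8').digest('hex').slice(0, 16);
}
```

The same input always yields the same hash, so repeated queries can still be grouped in billing dashboards without the raw text leaving your environment.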
Quick Start
1. Initialize the SDK
import { Drip } from '@drip-sdk/node';
// Reads DRIP_API_KEY from environment automatically (pk_live_... recommended)
const drip = new Drip({
apiKey: process.env.DRIP_API_KEY
});
2. Track Usage (Simple)
await drip.trackUsage({
customerId: 'customer_123',
meter: 'llm_tokens',
quantity: 1500,
// metadata is optional — only include operational context, never PII or secrets
metadata: { model: 'gpt-4' }
});
3. Record Agent Runs (Complete Execution)
await drip.recordRun({
customerId: 'cus_123',
workflow: 'research-agent',
events: [
{ eventType: 'llm.call', model: 'gpt-4', quantity: 1700, units: 'tokens' },
{ eventType: 'tool.call', name: 'web-search', duration: 1500 },
{ eventType: 'llm.call', model: 'gpt-4', quantity: 1000, units: 'tokens' },
],
status: 'COMPLETED',
});
4. Streaming Execution (Real-Time)
// Start the run
const run = await drip.startRun({
customerId: 'cus_123',
workflowSlug: 'document-processor',
});
// Log each step as it happens
await drip.emitEvent({
runId: run.id,
eventType: 'llm.call',
model: 'gpt-4',
quantity: 1700,
units: 'tokens',
});
await drip.emitEvent({
runId: run.id,
eventType: 'tool.call',
name: 'web-search',
duration: 1500,
});
// Complete the run
await drip.endRun(run.id, { status: 'COMPLETED' });
Event Types
| Event Type | Description | Key Fields |
|---|---|---|
| `llm.call` | LLM API call | model, quantity, units |
| `tool.call` | Tool invocation | name, duration, status |
| `agent.plan` | Planning step | description |
| `agent.execute` | Execution step | description, metadata |
| `error` | Error occurred | description, metadata |
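The rows above can be modeled as plain objects. The type and helper below are illustrative sketches, not SDK exports; field names follow the Key Fields column:

```typescript
// Event shapes corresponding to the table above.
type DripEventType = 'llm.call' | 'tool.call' | 'agent.plan' | 'agent.execute' | 'error';

interface DripEvent {
  eventType: DripEventType;
  [field: string]: unknown;
}

// Build an llm.call event carrying its key fields (model, quantity, units).
function llmCallEvent(model: string, quantity: number, units: string): DripEvent {
  return { eventType: 'llm.call', model, quantity, units };
}
```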
Common Patterns
Wrap Tool Calls
async function trackedToolCall<T>(runId: string, toolName: string, fn: () => Promise<T>): Promise<T> {
const start = Date.now();
try {
const result = await fn();
await drip.emitEvent({
runId,
eventType: 'tool.call',
name: toolName,
duration: Date.now() - start,
status: 'success',
});
return result;
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error';
await drip.emitEvent({
runId,
eventType: 'tool.call',
name: toolName,
duration: Date.now() - start,
status: 'error',
// Only include the error message — never include stack traces, env vars, or user data
metadata: { error: message },
});
throw error;
}
}
LangChain Auto-Tracking
import { DripCallbackHandler } from '@drip-sdk/node/langchain';
const handler = new DripCallbackHandler({
drip,
customerId: 'cus_123',
workflow: 'research-agent',
});
// All LLM calls and tool usage automatically tracked
const result = await agent.invoke(
{ input: 'Research the latest AI news' },
{ callbacks: [handler] }
);
API Reference
See references/API.md for complete SDK documentation.