Install

openclaw skills install agnost-ai

Comprehensive guide for ingesting data into Agnost AI for analytics, monitoring, and insights. Covers the Conversation SDK for tracking AI interactions and the MCP SDK for Model Context Protocol server analytics. Use when implementing data ingestion for Agnost AI; contains the API reference, SDK guides for Python and TypeScript, and code examples for tracking AI interactions.

Official docs: https://docs.agnost.ai
API Endpoint: https://api.agnost.ai
Dashboard: https://app.agnost.ai
Before implementing Agnost ingestion, check the references/ directory for detailed API documentation, then pick the SDK for your use case:

| Use Case | Python | TypeScript/Node.js | Go |
|---|---|---|---|
| Conversation/AI Tracking | pip install agnost | npm install agnostai | N/A |
| MCP Server Analytics | pip install agnost-mcp | npm install agnost | go get github.com/agnostai/agnost-go |
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/capture-session | POST | Create a new conversation/session |
| /api/v1/capture-event | POST | Record an event within a session |
Use the Conversation SDK when building AI applications, chatbots, or agents that need to track user interactions, inputs, outputs, and performance metrics.
# Installation
pip install agnost
# or
uv add agnost
# Basic Setup
import agnost
# Initialize with your org ID (from dashboard)
agnost.init("your-org-id")
// Installation
npm install agnostai
// or
pnpm add agnostai
// Basic Setup
import * as agnost from "agnostai";
// Initialize with your org ID (from dashboard)
agnost.init("your-org-id");
init(org_id, config?) - Initialize SDK

Must be called before any tracking methods.
import agnost
# Basic initialization
agnost.init("your-org-id")
# With configuration
agnost.init(
"your-org-id",
endpoint="https://api.agnost.ai", # Custom endpoint (optional)
debug=True # Enable debug logging
)
import * as agnost from "agnostai";
// Basic initialization
agnost.init("your-org-id");
// With configuration
agnost.init("your-org-id", {
endpoint: "https://api.agnost.ai", // Custom endpoint (optional)
debug: true // Enable debug logging
});
begin() + end() - Track Interactions (Recommended)

Use the begin/end pattern for automatic latency calculation and cleaner code.
import agnost
agnost.init("your-org-id")
# Start tracking an interaction
interaction = agnost.begin(
user_id="user_123",
agent_name="weather-agent",
input="What's the weather in NYC?",
conversation_id="conv_456", # Optional: group related events
properties={"model": "gpt-4"} # Optional: custom metadata
)
# ... Your AI processing happens here ...
response = call_your_ai_model(interaction.input)
# Complete the interaction (latency auto-calculated)
interaction.end(
output=response,
success=True # Set False if the call failed
)
import * as agnost from "agnostai";
agnost.init("your-org-id");
// Start tracking an interaction
const interaction = agnost.begin({
userId: "user_123",
agentName: "weather-agent",
input: "What's the weather in NYC?",
conversationId: "conv_456", // Optional: group related events
properties: { model: "gpt-4" } // Optional: custom metadata
});
// ... Your AI processing happens here ...
const response = await callYourAIModel(interaction.input);
// Complete the interaction (latency auto-calculated)
interaction.end(response); // or interaction.end(response, true) for success
track() - Single-Call Tracking

Use when you have all data available at once (no need for begin/end).
import agnost
agnost.init("your-org-id")
agnost.track(
user_id="user_123",
input="What's the weather?",
output="The weather is sunny with 72°F.",
agent_name="weather-agent",
conversation_id="conv_456", # Optional
success=True,
latency=150, # milliseconds
properties={"model": "gpt-4", "tokens": 42}
)
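Since track() expects latency in milliseconds, you typically measure it yourself around the model call. A minimal sketch of that timing pattern, where call_model is a hypothetical stand-in for your real AI call (the agnost.track call is shown as a comment so the sketch stays self-contained):

```python
import time

def call_model(prompt):
    # Stand-in for your real model/LLM call
    return f"Echo: {prompt}"

# Measure wall-clock latency around the call
start = time.perf_counter()
output = call_model("What's the weather?")
latency_ms = int((time.perf_counter() - start) * 1000)

# Then pass the measured latency to track():
# agnost.track(
#     user_id="user_123",
#     input="What's the weather?",
#     output=output,
#     agent_name="weather-agent",
#     success=True,
#     latency=latency_ms,
# )
```

If you use the begin()/end() pattern instead, this measurement is unnecessary, since latency is calculated automatically.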
identify() - User Enrichment

Associate user metadata with a user ID for richer analytics.
import agnost
agnost.init("your-org-id")
agnost.identify("user_123", {
"name": "John Doe",
"email": "john@example.com",
"plan": "premium",
"company": "Acme Inc"
})
import * as agnost from "agnostai";
agnost.init("your-org-id");
agnost.identify("user_123", {
name: "John Doe",
email: "john@example.com",
plan: "premium",
company: "Acme Inc"
});
flush() & shutdown() - Resource Management

import agnost
# Manually flush pending events
agnost.flush()
# Clean shutdown (flushes and closes connections)
agnost.shutdown()
import * as agnost from "agnostai";
// Manually flush pending events
await agnost.flush();
// Clean shutdown (flushes and closes connections)
await agnost.shutdown();
When using begin(), you get an Interaction object with these methods:
| Method | Description |
|---|---|
| set_input(text) / setInput(text) | Set/update the input text |
| set_property(key, value) / setProperty(key, value) | Add a single custom property |
| set_properties(dict) / setProperties(obj) | Add multiple custom properties |
| end(output, success?, latency?) | Complete and send the event |
interaction = agnost.begin(
user_id="user_123",
agent_name="my-agent"
)
# Build input from multiple sources
interaction.set_input("Combined user query: " + user_input)
interaction.set_property("source", "chat-widget")
interaction.set_properties({"model": "gpt-4", "version": "v2"})
# Process and complete
response = process_query(interaction.input)
interaction.end(output=response)
For tracking Model Context Protocol (MCP) servers, use the MCP SDK.
from mcp.server.fastmcp import FastMCP
from agnost_mcp import track, config
# Create FastMCP server
mcp = FastMCP("my-mcp-server")
# Add your tools
@mcp.tool()
def my_tool(param: str) -> str:
return f"Result: {param}"
# Enable tracking
track(mcp, "your-org-id", config(
endpoint="https://api.agnost.ai",
disable_input=False, # Track input arguments
disable_output=False # Track output results
))
# Run server
mcp.run()
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { trackMCP } from "agnost";
// Create MCP server
const server = new Server({
name: "my-mcp-server",
version: "1.0.0"
}, {
capabilities: { tools: {} }
});
// Enable tracking
trackMCP(server, "your-org-id", {
endpoint: "https://api.agnost.ai",
disableInput: false,
disableOutput: false
});
// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
package main
import (
"github.com/agnostai/agnost-go/agnost"
"github.com/mark3labs/mcp-go/server"
)
func main() {
s := server.NewMCPServer("my-server", "1.0.0")
// Add tools...
// Enable tracking
agnost.Track(s, "your-org-id", &agnost.Config{
DisableInput: false,
DisableOutput: false,
BatchSize: 10,
LogLevel: "info",
})
server.ServeStdio(s)
}
For cases where you need direct API access without an SDK.
curl -X POST https://api.agnost.ai/api/v1/capture-session \
-H "Content-Type: application/json" \
-H "X-Org-Id: your-org-id" \
-d '{
"session_id": "unique-session-id",
"client_config": "my-app",
"connection_type": "http",
"ip": "",
"user_data": {
"user_id": "user_123",
"email": "user@example.com"
},
"tools": ["tool1", "tool2"]
}'
curl -X POST https://api.agnost.ai/api/v1/capture-event \
-H "Content-Type: application/json" \
-H "X-Org-Id: your-org-id" \
-d '{
"session_id": "unique-session-id",
"primitive_type": "tool",
"primitive_name": "weather_lookup",
"latency": 150,
"success": true,
"args": "{\"city\": \"NYC\"}",
"result": "{\"temp\": 72}",
"metadata": {
"model": "gpt-4",
"tokens": "42"
}
}'
capture-session request body:
{
"session_id": "string (UUID or custom ID)",
"client_config": "string (app identifier)",
"connection_type": "string (http/stdio/sse)",
"ip": "string (optional)",
"user_data": {
"user_id": "string",
"...": "any additional user fields"
},
"tools": ["array", "of", "tool", "names"]
}
capture-event request body:
{
"session_id": "string (must match existing session)",
"primitive_type": "string (tool/resource/prompt)",
"primitive_name": "string (name of the primitive)",
"latency": "integer (milliseconds)",
"success": "boolean",
"args": "string (JSON-encoded input)",
"result": "string (JSON-encoded output)",
"checkpoints": [
{
"name": "string",
"timestamp": "integer (ms since start)",
"metadata": {}
}
],
"metadata": {
"key": "value pairs"
}
}
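The two schemas above can be exercised with only the standard library. A hedged sketch, assuming the endpoints and X-Org-Id header from the curl examples; the helper names (session_payload, event_payload, post) are mine, not part of any SDK. Note that args and result must be JSON-encoded strings, not raw objects:

```python
import json
import urllib.request

API = "https://api.agnost.ai"

def session_payload(session_id, user_id, tools):
    # Mirrors the capture-session schema
    return {
        "session_id": session_id,
        "client_config": "my-app",
        "connection_type": "http",
        "user_data": {"user_id": user_id},
        "tools": tools,
    }

def event_payload(session_id, name, args, result, latency_ms, success=True):
    # Mirrors the capture-event schema; args/result are JSON-encoded strings
    return {
        "session_id": session_id,
        "primitive_type": "tool",
        "primitive_name": name,
        "latency": latency_ms,
        "success": success,
        "args": json.dumps(args),
        "result": json.dumps(result),
    }

def post(path, org_id, payload):
    # POST with the X-Org-Id header, as in the curl examples
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Org-Id": org_id},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Build the payloads (sending them requires a valid org ID):
sess = session_payload("sess-1", "user_123", ["weather_lookup"])
evt = event_payload("sess-1", "weather_lookup", {"city": "NYC"}, {"temp": 72}, 150)
# post("/api/v1/capture-session", "your-org-id", sess)
# post("/api/v1/capture-event", "your-org-id", evt)
```

Remember that capture-event requires the session_id to match a session previously created via capture-session.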
agnost.init(
"your-org-id",
endpoint="https://api.agnost.ai", # API endpoint
debug=False # Enable debug logging (default: False)
)
interface ConversationConfig {
endpoint?: string; // API endpoint (default: https://api.agnost.ai)
debug?: boolean; // Enable debug logging (default: false)
}
agnost.init("your-org-id", { endpoint: "...", debug: true });
from agnost_mcp import track, config
track(server, "your-org-id", config(
endpoint="https://api.agnost.ai",
disable_input=False, # Set True to stop tracking input arguments
disable_output=False # Set True to stop tracking output results
))
import { trackMCP, createConfig } from "agnost";
const cfg = createConfig({
endpoint: "https://api.agnost.ai",
disableInput: false,
disableOutput: false
});
trackMCP(server, "your-org-id", cfg);
type Config struct {
Endpoint string // default: "https://api.agnost.ai"
DisableInput bool // default: false
DisableOutput bool // default: false
BatchSize int // default: 5
MaxRetries int // default: 3
RetryDelay time.Duration // default: 1s
RequestTimeout time.Duration // default: 5s
LogLevel string // "debug", "info", "warning", "error"
Identify IdentifyFunc // optional user identification
}
# At application startup
import agnost
agnost.init("your-org-id")
# Automatically calculates processing time
interaction = agnost.begin(user_id="u1", agent_name="agent")
# ... processing ...
interaction.end(output=result)
# All events for a single chat session
conversation_id = f"chat_{session_id}"
interaction = agnost.begin(
user_id="u1",
conversation_id=conversation_id,
agent_name="chatbot"
)
interaction = agnost.begin(user_id="u1", agent_name="agent")
try:
result = process_request()
interaction.end(output=result, success=True)
except Exception as e:
interaction.end(output=str(e), success=False)
import atexit
import agnost
atexit.register(agnost.shutdown)
This skill activates when you encounter Agnost AI data ingestion or analytics tracking tasks.

For detailed references, see:
references/python-sdk.md
references/typescript-sdk.md
references/api-reference.md