MCP Business Integration
Integrate AI agents with business data via Model Context Protocol. Query ads, analytics, CRM data through normalized interfaces. Use when connecting agents t...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The skill's name/description claim MCP business-data integration (ads, analytics, CRM). The examples show PostgreSQL, Google Ads, Google Analytics, Mixpanel, Salesforce, and filesystem access. A legitimate integration would require credentials, config paths, and dependencies for those services; none are declared. This is disproportionate to the declared metadata and indicates missing or inconsistent requirements.
Instruction Scope
SKILL.md instructs agents to implement resources/tools that perform network requests, database queries, and open/read/write local files (e.g., file://documents/{path}, postgres://users, httpx calls). Those instructions permit reading/writing arbitrary files and contacting external APIs but provide no limits, no guidance on credential handling, and no constraints on which paths or data are allowed.
Install Mechanism
There is no install spec (instruction-only), which reduces direct install risk. However SKILL.md references libraries and clients (mcp, httpx, ads_client, salesforce, mixpanel) without declaring dependencies or install steps; this omission makes it unclear how required packages will be provided and is a practical gap.
Credentials
The skill declares no required env vars or primary credential, yet examples clearly require API keys/service credentials for ads, analytics, CRM, and DB connections. That mismatch is a red flag: the skill either omits critical requirements or expects the agent to use ambient/system credentials, which could lead to unintended credential exposure.
Persistence & Privilege
The skill does not request always:true and is user-invocable only (normal). Still, SKILL.md shows building MCP servers that could open network endpoints and maintain persistent context; combined with the lack of declared credentials/configuration, this amplifies risk if the agent runs these components. No explicit requests to modify other skills or system configs are present.
What to consider before installing
This skill's documentation shows code that will read/write local files, query databases, and call many external services, but the package metadata lists no credentials, config paths, or install steps; that mismatch is suspicious. Before installing:
1. Ask the publisher for a complete list of required environment variables, credential types, and minimal scopes (read-only service accounts where possible).
2. Request an explicit install spec or dependency list so you can review the packages used.
3. Avoid granting production or high-privilege credentials; prefer isolated test accounts.
4. Run the skill in a sandboxed environment that limits filesystem and network access, and monitor its network connections and file activity.
5. Prefer skills with a verifiable source/homepage and a clear owner contact.
If you cannot get clarifying information, do not install this skill in sensitive environments. Like a lobster shell, security has layers: review code before you run it.
Current version: v1.0.0
Tags: analytics, business-data, crm, latest, mcp, protocol
SKILL.md
MCP Integration
Model Context Protocol (MCP) connects AI agents to real business data through normalized interfaces.
What is MCP?
Model Context Protocol is Anthropic's open standard for connecting AI models to external data sources and tools. It provides a unified way for agents to:
- Query databases and APIs
- Access files and resources
- Execute tools and functions
- Maintain context across sessions
Why MCP Matters
Before MCP:
- Each integration = custom code
- Different APIs = different patterns
- Context lost between tools
- Security = ad-hoc per integration
With MCP:
- One protocol, many integrations
- Standard patterns for all sources
- Persistent context
- Built-in security model
MCP Architecture
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Client │────▶│ Server │────▶│ Resource │
│ (Agent) │ │ (MCP) │ │ (Data) │
└─────────────┘ └─────────────┘ └─────────────┘
│
┌──────┴──────┐
│ Tools │
│ Prompts │
│ Resources │
└─────────────┘
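On the wire, MCP messages are JSON-RPC 2.0. As a hedged sketch (method and field names follow the MCP specification; the exact payload shape depends on your SDK version), a client invoking a tool might send:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_users",
    "arguments": { "filters": { "country": "US" } }
  }
}
```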
Components
1. MCP Server
- Exposes resources and tools
- Handles authentication
- Manages connections
2. MCP Client
- Connects to servers
- Discovers capabilities
- Executes operations
3. Resources
- Files, databases, APIs
- Read/write operations
- Subscriptions for updates
4. Tools
- Executable functions
- Input/output schemas
- Side effects
5. Prompts
- Reusable prompt templates
- Parameterized
- Composable
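Prompts are the least code-heavy component. As a minimal plain-Python sketch of "parameterized and composable" (no SDK involved; the template name, `render` helper, and fields are all illustrative, not part of any MCP API):

```python
# Minimal prompt-template sketch: a named template filled with parameters.
TEMPLATES = {
    "summarize_campaign": (
        "Summarize campaign {name}: spend ${spend}, "
        "{clicks} clicks, {impressions} impressions."
    ),
}

def render(template_id: str, **params) -> str:
    """Fill a named template with the given parameters."""
    return TEMPLATES[template_id].format(**params)

prompt = render(
    "summarize_campaign",
    name="Spring Sale", spend=1200, clicks=340, impressions=9000,
)
```

An MCP server would expose such templates to clients so the same parameterized prompt can be reused across sessions.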
Integration Types
1. Database Integration
```python
# MCP Server for PostgreSQL
from mcp import Server

server = Server("postgres-integration")

@server.resource("postgres://users")
async def get_users():
    # Query users from database
    return await db.query("SELECT * FROM users")

@server.tool("query_users")
async def query_users(filters: dict):
    # Execute parameterized query
    return await db.query_with_filters(filters)
```
2. API Integration
```python
# MCP Server for REST API
import httpx

@server.resource("api://customers")
async def get_customers():
    # httpx.get() is synchronous; use AsyncClient inside async handlers
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.example.com/customers")
    return response.json()

@server.tool("create_customer")
async def create_customer(data: dict):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://api.example.com/customers",
            json=data,
        )
    return response.json()
```
3. File System Integration
```python
# MCP Server for file access
@server.resource("file://documents/{path}")
async def read_document(path: str):
    with open(f"documents/{path}") as f:
        return f.read()

@server.tool("write_document")
async def write_document(path: str, content: str):
    with open(f"documents/{path}", "w") as f:
        f.write(content)
    return {"status": "written"}
```
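As written, `file://documents/{path}` accepts `../` sequences that escape the documents directory. One way to harden it, as a standard-library sketch (the base directory name is illustrative; `Path.is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

BASE_DIR = Path("documents").resolve()

def safe_path(path: str) -> Path:
    """Resolve a requested path and refuse anything outside BASE_DIR."""
    candidate = (BASE_DIR / path).resolve()
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError(f"Path escapes documents/: {path}")
    return candidate
```

Call `safe_path(path)` at the top of both handlers before opening any file.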
Business Data Integration
Ads Data
```python
# Google Ads MCP
@server.resource("ads://campaigns")
async def get_campaigns():
    """Get all ad campaigns with metrics"""
    campaigns = await ads_client.get_campaigns()
    return normalize_campaigns(campaigns)

@server.tool("optimize_budget")
async def optimize_budget(campaign_id: str):
    """Automatically adjust campaign budget"""
    # 1. Analyze performance
    # 2. Adjust spend allocation
    # 3. Return optimization results
    ...
```
Analytics Data
```python
# Analytics MCP
@server.resource("analytics://metrics")
async def get_metrics():
    """Get normalized metrics across platforms"""
    return {
        "google_analytics": await ga.get_metrics(),
        "mixpanel": await mixpanel.get_events(),
        "custom_events": await custom.get_events(),
    }

@server.tool("query_analytics")
async def query_analytics(query: str):
    """Natural language analytics query"""
    # 1. Parse the query
    # 2. Execute it across platforms
    # 3. Return unified results
    ...
```
CRM Data
```python
# Salesforce MCP
@server.resource("crm://leads")
async def get_leads():
    """Get leads from CRM"""
    return await salesforce.query("SELECT Id, Name, Email FROM Lead")

@server.tool("create_lead")
async def create_lead(data: dict):
    """Create new lead in CRM"""
    lead = await salesforce.create("Lead", data)
    return lead
```
Best Practices
1. Normalization
```python
# Normalize data from different sources
def normalize_campaign(data, source):
    schema = {
        "id": data.get("id") or data.get("campaign_id"),
        "name": data.get("name") or data.get("campaign_name"),
        "spend": data.get("spend") or data.get("cost"),
        "impressions": data.get("impressions") or data.get("views"),
        "clicks": data.get("clicks") or data.get("clicks_count"),
        "source": source,
    }
    return schema
```
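For instance, two differently-shaped records from hypothetical sources collapse into the same schema (self-contained copy of the function above; the sample records are illustrative):

```python
def normalize_campaign(data, source):
    # Map source-specific field names onto one shared schema.
    # Caveat: `or` treats 0/"" as missing; use explicit None checks
    # if zero-valued metrics must survive normalization.
    return {
        "id": data.get("id") or data.get("campaign_id"),
        "name": data.get("name") or data.get("campaign_name"),
        "spend": data.get("spend") or data.get("cost"),
        "impressions": data.get("impressions") or data.get("views"),
        "clicks": data.get("clicks") or data.get("clicks_count"),
        "source": source,
    }

google = {"campaign_id": "g-1", "campaign_name": "Brand", "cost": 50.0,
          "views": 1000, "clicks": 40}
internal = {"id": "i-1", "name": "Brand", "spend": 50.0,
            "impressions": 1000, "clicks_count": 40}

a = normalize_campaign(google, "google_ads")
b = normalize_campaign(internal, "internal")
```

Both results carry the same keys and metric values, differing only in `id` and `source`.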
2. Error Handling
```python
@server.tool("risky_operation")
async def risky_operation(data: dict):
    try:
        result = await external_api.call(data)
        return {"success": True, "data": result}
    except APIError as e:
        return {
            "success": False,
            "error": str(e),
            "suggestion": "Try again with valid parameters",
        }
```
3. Caching
```python
from datetime import datetime, timedelta

cache = {}

@server.resource("api://expensive-data")
async def get_expensive_data():
    cache_key = "expensive-data"
    cached = cache.get(cache_key)
    if cached and cached["expires"] > datetime.now():
        return cached["data"]
    # Fetch fresh data and cache it for an hour
    data = await expensive_api_call()
    cache[cache_key] = {
        "data": data,
        "expires": datetime.now() + timedelta(hours=1),
    }
    return data
```
4. Security
```python
# Validate inputs
from pydantic import BaseModel

class QueryInput(BaseModel):
    table: str
    filters: dict
    limit: int = 100

@server.tool("safe_query")
async def safe_query(input: QueryInput):
    # Pydantic validates shapes and types; SQL injection is only
    # prevented if db.query uses parameterized statements internally
    return await db.query(input.table, input.filters, input.limit)
```
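Note that Pydantic alone does not stop a caller from naming an arbitrary table; if `input.table` is ever interpolated into a SQL string, it is still injectable. A plain-Python whitelist sketch (the table names are illustrative):

```python
ALLOWED_TABLES = {"users", "campaigns", "leads"}

def check_table(table: str) -> str:
    """Allow only known table names before they reach any SQL string."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"Table not allowed: {table}")
    return table
```

Calling `check_table(input.table)` at the top of `safe_query` rejects anything outside the whitelist, including injection attempts hidden in the table name.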
Claude Desktop Integration
`claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "business-data": {
      "command": "python",
      "args": ["mcp_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://...",
        "API_KEY": "..."
      }
    }
  }
}
```
Common MCP Servers
Official Servers
| Server | Description |
|---|---|
| filesystem | File system access |
| postgres | PostgreSQL database |
| sqlite | SQLite database |
| github | GitHub API |
| google-drive | Google Drive |
| slack | Slack API |
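Official servers are typically launched via `npx` from the client config. A hedged example wiring the filesystem server into `claude_desktop_config.json` (the allowed directory is illustrative; check the server's README for its current arguments):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/docs"]
    }
  }
}
```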
Custom Servers
Create custom servers for:
- Internal APIs
- Proprietary databases
- Custom tools
- Business-specific operations
Debugging
Server Logs
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("mcp_server")

@server.tool("debug_operation")
async def debug_operation(data: dict):
    logger.debug(f"Input: {data}")
    result = await process(data)
    logger.debug(f"Output: {result}")
    return result
```
Connection Issues
```shell
# Test MCP server
python -m mcp.server --debug

# Test client connection
python -m mcp.client --url "ws://localhost:8080"
```
Examples
Query Multiple Data Sources
```python
@server.tool("cross_platform_query")
async def cross_platform_query(query: str):
    """Query across multiple platforms"""
    results = {}
    # Query each platform
    results["analytics"] = await analytics.query(query)
    results["crm"] = await crm.query(query)
    results["ads"] = await ads.query(query)
    # Merge results
    return merge_results(results)
```
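`merge_results` is left undefined above. A minimal sketch that flattens the per-platform lists and tags each row with its platform (the field names in the sample rows are illustrative):

```python
def merge_results(results: dict) -> list:
    """Flatten per-platform result lists into one list, tagging each row."""
    merged = []
    for platform, rows in results.items():
        for row in rows:
            merged.append({**row, "platform": platform})
    return merged

merged = merge_results({
    "analytics": [{"metric": "sessions", "value": 120}],
    "crm": [{"metric": "leads", "value": 8}],
})
```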
Automated Insights
```python
@server.tool("generate_insights")
async def generate_insights(data_source: str):
    """Generate insights from business data"""
    # Get data
    data = await get_data(data_source)
    # Analyze
    insights = []
    # Trend analysis
    if data["trend"] == "increasing":
        insights.append("Revenue trending up - consider scaling")
    # Anomaly detection
    if data["anomaly"]:
        insights.append(f"Anomaly detected: {data['anomaly']}")
    return {"insights": insights, "data": data}
```
Resources
- Anthropic MCP Docs: https://modelcontextprotocol.io
- Official Servers: https://github.com/modelcontextprotocol/servers
- Community Servers: https://github.com/punkpeye/awesome-mcp-servers