Azure AI Projects - Microsoft Foundry SDKs

v0.1.0

Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents with PromptAgentDefinition, running evaluations, managing connections/deployments/datasets/indexes, or using OpenAI-compatible clients. This is the high-level Foundry SDK; for low-level agent operations, use the azure-ai-agents-python skill.

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The SKILL.md is clearly an Azure AI Projects (Foundry) SDK reference, and the code examples legitimately require Azure endpoints, model deployment names, and service connections. However, the registry metadata declares no required environment variables or credentials, while the instructions repeatedly reference AZURE_AI_PROJECT_ENDPOINT, AZURE_AI_MODEL_DEPLOYMENT_NAME, and many connection-related env names (BING_CONNECTION_NAME, AI_SEARCH_CONNECTION_NAME, etc.). That metadata/instruction mismatch is incoherent and could mislead users about what credentials will be needed.
Instruction Scope
The instructions are extensive and within the SDK's stated purpose, but they instruct the agent/developer to: use DefaultAzureCredential (which will attempt multiple local credential sources, including Azure CLI, environment variables, and managed identities), upload files, create vector stores, enable CodeInterpreterTool (executes Python), and call OpenAI-compatible evals. Those operations can access local credentials or send uploaded data to external services. The SKILL.md does not restrict or warn about sensitive-data handling.
Install Mechanism
This is an instruction-only skill with no install spec or external downloads, so there is no additional install-time code or archive retrieval risk.
Credentials
Although the registry metadata lists no required environment variables or primary credential, the documentation expects many env vars and uses DefaultAzureCredential. Requiring endpoint, deployment, and connection names is normal for this SDK, but the metadata omission is misleading, and DefaultAzureCredential's behavior means local or cloud identity tokens could be used implicitly. This is a proportionality and transparency problem that raises risk if users assume no credentials are needed.
Persistence & Privilege
The skill does not request always:true and is user-invocable with normal autonomous invocation allowed. It does not declare any behavior that modifies other skills or system-wide settings. No persistence or elevated privilege requests are present in the package.
What to consider before installing
This skill appears to be documentation for the Azure AI Projects Python SDK, and the code examples are plausible for that purpose. Still, be aware of a few practical issues before installing or using it:

  1. Metadata mismatch: the registry metadata claims no required environment variables, but the SKILL.md expects AZURE_AI_PROJECT_ENDPOINT, AZURE_AI_MODEL_DEPLOYMENT_NAME, and various connection env names. Expect to provide Azure endpoints/connections, and verify which secrets are actually required.
  2. Credential exposure: examples use DefaultAzureCredential, which will try local credentials (Azure CLI tokens, environment variables, managed identities). Only run this skill in environments where those credential sources are safe to use. If you have sensitive Azure credentials on the host, be cautious.
  3. Data handling and tools: examples show uploading files, creating vector stores, using CodeInterpreterTool (which executes Python), and calling OpenAI-compatible evals. Uploaded or evaluation data may be transmitted to external services, so don't upload sensitive files or enable code-execution tools in untrusted contexts.
  4. Source and provenance: the skill lists no homepage and its source is unknown. Consider obtaining the official SDK docs or a skill from a trusted publisher, and verify required env vars and intended behavior before use.

If you want to proceed, confirm which environment variables you will set, review the DefaultAzureCredential auth flows you expose, and avoid uploading or evaluating any sensitive data until you're confident about where it will be sent and who can access it.


latest: vk9777p20df5r2e2dw3g9cfqqjx8089ew
1.9k downloads · 1 star · 1 version · updated 1mo ago
v0.1.0 · MIT-0

Azure AI Projects Python SDK (Foundry SDK)

Build AI applications on Azure AI Foundry using the azure-ai-projects SDK.

Installation

pip install azure-ai-projects azure-identity

Environment Variables

AZURE_AI_PROJECT_ENDPOINT="https://<resource>.services.ai.azure.com/api/projects/<project>"
AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
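Because the examples below read these variables with `os.environ[...]`, a missing one fails only at client-construction time. A small preflight check can fail faster; this is a sketch of ours, not part of the SDK (`missing_env` and `REQUIRED_VARS` are illustrative names):

```python
import os

# Variables this skill's examples read; extend with any connection names you use.
REQUIRED_VARS = [
    "AZURE_AI_PROJECT_ENDPOINT",
    "AZURE_AI_MODEL_DEPLOYMENT_NAME",
]

def missing_env(names):
    """Return the names that are unset (or empty) in the environment."""
    return [n for n in names if not os.environ.get(n)]

missing = missing_env(REQUIRED_VARS)
if missing:
    print("Set these environment variables first:", ", ".join(missing))
```

Run this once at startup so configuration errors surface before any Azure calls are made.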

Authentication

import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

credential = DefaultAzureCredential()
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential,
)

Client Operations Overview

| Operation | Access | Purpose |
|---|---|---|
| Agents | `client.agents.*` | Agent CRUD, versions, threads, runs |
| Connections | `client.connections.*` | List/get project connections |
| Deployments | `client.deployments.*` | List model deployments |
| Datasets | `client.datasets.*` | Dataset management |
| Indexes | `client.indexes.*` | Index management |
| Evaluations | `client.evaluations.*` | Run evaluations |
| Red teams | `client.red_teams.*` | Red team operations |

Two Client Approaches

1. AIProjectClient (Native Foundry)

from azure.ai.projects import AIProjectClient

client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Use Foundry-native operations
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are helpful.",
)

2. OpenAI-Compatible Client

# Get OpenAI-compatible client from project
openai_client = client.get_openai_client()

# Use standard OpenAI API
response = openai_client.chat.completions.create(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    messages=[{"role": "user", "content": "Hello!"}],
)

Agent Operations

Create Agent (Basic)

agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are a helpful assistant.",
)

Create Agent with Tools

from azure.ai.agents.models import CodeInterpreterTool, FileSearchTool

agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="tool-agent",
    instructions="You can execute code and search files.",
    tools=[CodeInterpreterTool(), FileSearchTool()],
)

Versioned Agents with PromptAgentDefinition

from azure.ai.projects.models import PromptAgentDefinition

# Create a versioned agent
agent_version = client.agents.create_version(
    agent_name="customer-support-agent",
    definition=PromptAgentDefinition(
        model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
        instructions="You are a customer support specialist.",
        tools=[],  # Add tools as needed
    ),
    version_label="v1.0",
)

See references/agents.md for detailed agent patterns.

Tools Overview

| Tool | Class | Use Case |
|---|---|---|
| Code Interpreter | `CodeInterpreterTool` | Execute Python, generate files |
| File Search | `FileSearchTool` | RAG over uploaded documents |
| Bing Grounding | `BingGroundingTool` | Web search (requires connection) |
| Azure AI Search | `AzureAISearchTool` | Search your indexes |
| Function Calling | `FunctionTool` | Call your Python functions |
| OpenAPI | `OpenApiTool` | Call REST APIs |
| MCP | `McpTool` | Model Context Protocol servers |
| Memory Search | `MemorySearchTool` | Search agent memory stores |
| SharePoint | `SharepointGroundingTool` | Search SharePoint content |

See references/tools.md for all tool patterns.
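To make Function Calling concrete: the SDK ultimately describes your Python functions to the model as name/parameter schemas. The following is a hypothetical stdlib-only sketch of that idea (`describe_function` is ours, not the SDK's implementation, and real `FunctionTool` handles far more cases):

```python
import inspect

# Map a few common Python annotations to JSON-schema type names.
_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def describe_function(fn):
    """Build a minimal JSON-schema-style descriptor from a function signature."""
    sig = inspect.signature(fn)
    props = {}
    required = []
    for name, param in sig.parameters.items():
        props[name] = {"type": _TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def get_weather(city: str, units: str = "metric") -> str:
    """Return current weather for a city."""
    return f"Sunny in {city}"

spec = describe_function(get_weather)
```

Here `spec["parameters"]["required"]` contains only `city`, since `units` has a default; this mirrors how function-calling schemas distinguish required from optional arguments.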

Thread and Message Flow

# 1. Create thread
thread = client.agents.threads.create()

# 2. Add message
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's the weather like?",
)

# 3. Create and process run
run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id,
)

# 4. Get response
if run.status == "completed":
    messages = client.agents.messages.list(thread_id=thread.id)
    for msg in messages:
        if msg.role == "assistant":
            print(msg.content[0].text.value)
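The `create_and_process` call above hides a poll loop: it repeatedly reads the run's status until a terminal state. Conceptually it behaves like this pure-Python sketch (`wait_for_run`, `get_status`, and the exact state names are illustrative; the real SDK also dispatches required tool calls while polling):

```python
import time

# States after which a run will not change again (illustrative set).
TERMINAL_STATES = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(get_status, interval=1.0, timeout=60.0):
    """Poll get_status() until a terminal state or timeout; return the final state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)  # back off between polls
    return "timed_out"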

Connections

# List all connections
connections = client.connections.list()
for conn in connections:
    print(f"{conn.name}: {conn.connection_type}")

# Get specific connection
connection = client.connections.get(connection_name="my-search-connection")

See references/connections.md for connection patterns.

Deployments

# List available model deployments
deployments = client.deployments.list()
for deployment in deployments:
    print(f"{deployment.name}: {deployment.model}")

See references/deployments.md for deployment patterns.

Datasets and Indexes

# List datasets
datasets = client.datasets.list()

# List indexes
indexes = client.indexes.list()

See references/datasets-indexes.md for data operations.

Evaluation

# Using OpenAI client for evals
openai_client = client.get_openai_client()

# Create evaluation with built-in evaluators
eval_run = openai_client.evals.runs.create(
    eval_id="my-eval",
    name="quality-check",
    data_source={
        "type": "custom",
        "item_references": [{"item_id": "test-1"}],
    },
    testing_criteria=[
        {"type": "fluency"},
        {"type": "task_adherence"},
    ],
)

See references/evaluation.md for evaluation patterns.

Async Client

from azure.ai.projects.aio import AIProjectClient

async with AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
) as client:
    agent = await client.agents.create_agent(...)
    # ... async operations

See references/async-patterns.md for async patterns.

Memory Stores

# Assumed import path for the tool class (see Tools Overview)
from azure.ai.agents.models import MemorySearchTool

# Create memory store for agent
memory_store = client.agents.create_memory_store(
    name="conversation-memory",
)

# Attach to agent for persistent memory
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="memory-agent",
    tools=[MemorySearchTool()],
    tool_resources={"memory": {"store_ids": [memory_store.id]}},
)

Best Practices

  1. Use context managers for async client: async with AIProjectClient(...) as client:
  2. Clean up agents when done: client.agents.delete_agent(agent.id)
  3. Use create_and_process for simple runs, streaming for real-time UX
  4. Use versioned agents for production deployments
  5. Prefer connections for external service integration (AI Search, Bing, etc.)

SDK Comparison

| Feature | azure-ai-projects | azure-ai-agents |
|---|---|---|
| Level | High-level (Foundry) | Low-level (Agents) |
| Client | `AIProjectClient` | `AgentsClient` |
| Versioning | `create_version()` | Not available |
| Connections | Yes | No |
| Deployments | Yes | No |
| Datasets/Indexes | Yes | No |
| Evaluation | Via OpenAI client | No |
| When to use | Full Foundry integration | Standalone agent apps |

Reference Files

- references/agents.md (agent patterns)
- references/tools.md (tool patterns)
- references/connections.md (connection patterns)
- references/deployments.md (deployment patterns)
- references/datasets-indexes.md (data operations)
- references/evaluation.md (evaluation patterns)
- references/async-patterns.md (async patterns)
