diff --git a/examples/agent-framework-integration/DIFF.md b/examples/agent-framework-integration/DIFF.md new file mode 100644 index 00000000..9375d75f --- /dev/null +++ b/examples/agent-framework-integration/DIFF.md @@ -0,0 +1,273 @@ +# Agent Framework vs ChatPrompt Comparison + +This document compares the implementation differences between the agent-framework integration and the ChatPrompt approach in the Microsoft Teams Python SDK. + +## Overview + +Both approaches provide AI capabilities for Microsoft Teams bots, but with different programming models and abstractions: + +- **agent-framework** (`main.py`): Uses the standalone agent-framework library with a simpler, more intuitive API +- **ChatPrompt** (`chat-prompt.py`): Uses the built-in microsoft.teams.ai ChatPrompt with more explicit configuration + +## Key Differences + +### 1. Setup & Imports + +#### Agent Framework +```python +from agent_framework import ChatAgent, ChatMessageStore, MCPStreamableHTTPTool +from agent_framework.azure import AzureOpenAIChatClient +``` + +#### ChatPrompt +```python +from microsoft.teams.ai import ChatPrompt, Function, ListMemory +from microsoft.teams.openai import OpenAICompletionsAIModel +from microsoft.teams.mcpplugin import McpClientPlugin + +# Requires model initialization +model = OpenAICompletionsAIModel() +``` + +**Key Difference**: Agent framework auto-initializes the client, while ChatPrompt requires explicit model creation. + +--- + +### 2. Basic Message Handling + +#### Agent Framework +```python +agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions="You are a friendly but hilarious pirate robot.", +) +result = await agent.run(text) +await ctx.reply(result.text) +``` + +#### ChatPrompt +```python +prompt = ChatPrompt(model) +chat_result = await prompt.send( + input=text, + instructions="You are a friendly but hilarious pirate robot.", +) +if chat_result.response.content: + message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() + await ctx.send(message) +``` + +**Key Differences**: +- Agent framework: `agent.run()` returns result with `.text` property +- ChatPrompt: `prompt.send()` returns result with `.response.content` property +- ChatPrompt requires manual construction of `MessageActivityInput` with AI-generated marker + +--- + +### 3. 
Function/Tool Calling
+
+#### Agent Framework
+```python
+def get_weather(
+    location: Annotated[str, Field(description="The location to get the weather for.")],
+) -> str:
+    """Get the weather for a given location."""
+    return f"The weather in {location} is sunny"
+
+agent = ChatAgent(
+    chat_client=AzureOpenAIChatClient(),
+    instructions="...",
+    tools=[get_weather, get_menu_specials],  # Pass functions directly
+)
+```
+
+#### ChatPrompt
+```python
+class GetWeatherParams(BaseModel):
+    location: Annotated[str, Field(description="The location to get the weather for.")]
+
+def get_weather(params: GetWeatherParams) -> str:
+    """Get the weather for a given location."""
+    return f"The weather in {params.location} is sunny"
+
+prompt = ChatPrompt(model)
+prompt.with_function(
+    Function(
+        name="get_weather",
+        description="Get the weather for a given location.",
+        parameter_schema=GetWeatherParams,
+        handler=get_weather,
+    )
+)
+```
+
+**Key Differences**:
+- Agent framework: Functions use type annotations directly; parameters are individual function arguments
+- ChatPrompt: Requires a Pydantic model for parameters; the function receives a single params object
+- Agent framework: Pass functions to the `tools` list directly
+- ChatPrompt: Wrap functions in `Function` objects with explicit configuration using `.with_function()`
+
+---
+
+### 4. Streaming
+
+#### Agent Framework
+```python
+async for update in agent.run_stream(text):
+    ctx.stream.emit(update.text)
+```
+
+#### ChatPrompt
+```python
+chat_result = await prompt.send(
+    input=text,
+    instructions="...",
+    on_chunk=lambda chunk: ctx.stream.emit(chunk),
+)
+
+# Must emit final AI marker
+if chat_result.response.content:
+    ctx.stream.emit(MessageActivityInput().add_ai_generated())
+```
+
+**Key Differences**:
+- Agent framework: Uses an async iteration pattern with `run_stream()`
+- ChatPrompt: Uses a callback pattern with the `on_chunk` parameter
+- ChatPrompt requires manual emission of the final AI-generated marker
+
+---
+
+### 5. Structured Output
+
+#### Agent Framework
+```python
+class SentimentResult(BaseModel):
+    sentiment: Literal["positive", "negative"]
+
+result = await agent.run(text, response_format=SentimentResult)
+
+if result.value:
+    await ctx.reply(str(result.value))
+```
+
+#### ChatPrompt
+```python
+class SentimentResult(BaseModel):
+    sentiment: Literal["positive", "negative"]
+
+# NOTE: ChatPrompt does not support structured output natively. TypeScript has this, but Python does not.
+# (And TypeScript's version is a bit clunky: https://microsoft.github.io/teams-sdk/typescript/in-depth-guides/ai/function-calling#stopping-functions-early)
+chat_result = await prompt.send(
+    input=text,
+    instructions="""
+    Respond with ONLY a JSON object in this format: {"sentiment": "positive"}
+    Do not include any other text.
+    """,
+)
+
+# Manual parsing required
+if chat_result.response.content:
+    await ctx.reply(chat_result.response.content)
+```
+
+**Key Differences**:
+- Agent framework: Native support via the `response_format` parameter, returns a typed `.value`
+- ChatPrompt: **No native support** - requires a workaround using instructions and manual JSON parsing (see the sketch below)
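+
+Assuming the model follows those format instructions, the manual parsing step might look like this minimal sketch, reusing the `SentimentResult` model and the `chat_result`/`ctx` variables from the snippet above (pydantic v2's `model_validate_json` is assumed):
+
+```python
+from pydantic import ValidationError
+
+if chat_result.response.content:
+    try:
+        # Parse the raw JSON string into the typed model ourselves
+        parsed = SentimentResult.model_validate_json(chat_result.response.content)
+        await ctx.reply(f"Sentiment: {parsed.sentiment}")
+    except ValidationError:
+        # The model ignored the format instructions; fall back to the raw text
+        await ctx.reply(chat_result.response.content)
+```
+
+---
+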
+### 6. Conversation Memory
+
+#### Agent Framework
+```python
+memory = ChatMessageStore()
+
+agent = ChatAgent(
+    chat_client=AzureOpenAIChatClient(),
+    instructions="...",
+    chat_message_store_factory=lambda: memory,
+)
+```
+
+#### ChatPrompt
+```python
+memory_store: dict[str, ListMemory] = {}
+
+def get_or_create_memory(conversation_id: str) -> ListMemory:
+    if conversation_id not in memory_store:
+        memory_store[conversation_id] = ListMemory()
+    return memory_store[conversation_id]
+
+memory = get_or_create_memory(ctx.activity.conversation.id)
+prompt = ChatPrompt(model, memory=memory)
+```
+
+**Key Differences**:
+- Agent framework: Uses `ChatMessageStore` with a factory pattern
+- ChatPrompt: Uses `ListMemory` passed directly to the constructor; requires manual conversation tracking
+- ChatPrompt requires the developer to manage conversation-specific memory instances
+
+---
+
+### 7. MCP (Model Context Protocol) Integration
+
+#### Agent Framework
+```python
+learn_mcp = MCPStreamableHTTPTool("microsoft-learn", "https://learn.microsoft.com/api/mcp")
+agent = ChatAgent(
+    chat_client=AzureOpenAIChatClient(),
+    instructions="...",
+    tools=[learn_mcp],  # MCP tools in same list as regular tools
+)
+```
+
+#### ChatPrompt
+```python
+mcp_plugin = McpClientPlugin()
+mcp_plugin.use_mcp_server("https://learn.microsoft.com/api/mcp")
+
+prompt = ChatPrompt(model, memory=memory, plugins=[mcp_plugin])
+```
+
+**Key Differences**:
+- Agent framework: MCP tools are treated as regular tools and added to the `tools` list
+- ChatPrompt: MCP requires the plugin system; servers are added to the `plugins` list separately from functions
+
+---
+
+## Summary Table
+
+| Feature | Agent Framework | ChatPrompt |
+|---------|----------------|------------|
+| **Setup** | Auto-initialized client | Manual model creation |
+| **Tool Definition** | Type-annotated functions | Pydantic models + Function wrapper |
+| **Streaming** | Async iteration | Callback pattern |
+| **Structured Output** | Native via `response_format` | ❌ Not supported (workaround needed) |
+| **Memory** | `ChatMessageStore` + factory | `ListMemory` + manual tracking |
+| **MCP Integration** | Tools list | Plugins system |
+| **Response Access** | `result.text` | `chat_result.response.content` |
+| **Verbosity** | Less verbose | More explicit configuration |
+
+## Recommendations
+
+**Use Agent Framework when**:
+- You want simpler, more intuitive APIs
+- You need structured output support
+- You prefer async iteration for streaming
+- You want unified tool/MCP handling
+
+**Use ChatPrompt when**:
+- You need fine-grained control over the AI pipeline
+- You're already using the microsoft.teams.ai ecosystem
+- You want to use the plugin system
+- You prefer explicit configuration over conventions
+
+## Migration Tips
+
+If migrating from ChatPrompt to Agent Framework:
+
+1. **Functions**: Remove Pydantic parameter models and use type annotations directly (see the sketch after this list)
+2. **Memory**: Replace `ListMemory` with `ChatMessageStore` and use the factory pattern
+3. **Streaming**: Replace `on_chunk` callbacks with `async for` iteration
+4. **MCP**: Move MCP servers from the `plugins` list to the `tools` list
+5. **Response handling**: Change `.response.content` to `.text`
+6. **Structured output**: Use the `response_format` parameter instead of the instruction-based workaround
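+
+To make tips 1 and 5 concrete, here is a minimal before/after sketch based on the `get_weather` example used throughout this document (`text` and `model` are assumed to be in scope, as in the snippets above):
+
+```python
+# Before (ChatPrompt): Pydantic params model + Function wrapper
+class GetWeatherParams(BaseModel):
+    location: Annotated[str, Field(description="The location to get the weather for.")]
+
+def get_weather(params: GetWeatherParams) -> str:
+    """Get the weather for a given location."""
+    return f"The weather in {params.location} is sunny"
+
+prompt = ChatPrompt(model)
+prompt.with_function(
+    Function(
+        name="get_weather",
+        description="Get the weather for a given location.",
+        parameter_schema=GetWeatherParams,
+        handler=get_weather,
+    )
+)
+chat_result = await prompt.send(input=text, instructions="...")
+reply = chat_result.response.content
+
+# After (agent framework): a plain annotated function passed straight to `tools`
+def get_weather(
+    location: Annotated[str, Field(description="The location to get the weather for.")],
+) -> str:
+    """Get the weather for a given location."""
+    return f"The weather in {location} is sunny"
+
+agent = ChatAgent(
+    chat_client=AzureOpenAIChatClient(),
+    instructions="...",
+    tools=[get_weather],
+)
+result = await agent.run(text)
+reply = result.text
+```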
diff --git a/examples/agent-framework-integration/README.md b/examples/agent-framework-integration/README.md
new file mode 100644
index 00000000..5213c6b8
--- /dev/null
+++ b/examples/agent-framework-integration/README.md
@@ -0,0 +1,13 @@
+# agent-framework-integration
+
+Things to test:
+
+| Scenario | How to test | Comments |
+| - | - | - |
+| Simple chat messages | `basic Tell me a joke` | |
+| Structured messages | `structured urggghhhh` | Tells you whether your statement was positive or negative |
+| Function calling | `function whats on the menu` | Uses tool calling |
+| Streaming | `streaming tell me a long story` | |
+| Memory | `memory my pet's name is bingo`, then `memory what is my pet's name?` | |
+| Deferred Messages | | |
+
diff --git a/examples/agent-framework-integration/pyproject.toml b/examples/agent-framework-integration/pyproject.toml
new file mode 100644
index 00000000..d3d6ad00
--- /dev/null
+++ b/examples/agent-framework-integration/pyproject.toml
@@ -0,0 +1,20 @@
+[project]
+name = "agent-framework-integration"
+version = "0.1.0"
+description = "An example showcasing integration with the Microsoft Agent Framework"
+readme = "README.md"
+requires-python = ">=3.12,<3.14"
+dependencies = [
+    "agent-framework-core>=1.0.0b251114",
+    "dotenv>=0.9.9",
+    "microsoft-teams-ai",
+    "microsoft-teams-apps",
+    "microsoft-teams-devtools",
+    "microsoft-teams-openai",
+]
+
+[tool.uv.sources]
+microsoft-teams-apps = { workspace = true }
+microsoft-teams-openai = { workspace = true }
+microsoft-teams-ai = { workspace = true }
+microsoft-teams-devtools = { workspace = true }
diff --git a/examples/agent-framework-integration/src/approval.py b/examples/agent-framework-integration/src/approval.py
new file mode 100644
index 00000000..7f517ad3
--- /dev/null
+++ b/examples/agent-framework-integration/src/approval.py
@@ -0,0 +1,237 @@
+"""
+Copyright (c) Microsoft Corporation. All rights reserved.
+Licensed under the MIT License.
+""" + +import asyncio +import json +import re +from typing import Annotated + +from agent_framework import ( + AgentThread, + ChatAgent, + ChatMessage, + FunctionApprovalRequestContent, + FunctionApprovalResponseContent, + FunctionCallContent, + ai_function, +) +from agent_framework.azure import AzureOpenAIChatClient +from microsoft.teams.api import AdaptiveCardInvokeActivity, MessageActivity +from microsoft.teams.api.models.adaptive_card import ( + AdaptiveCardActionErrorResponse, + AdaptiveCardActionMessageResponse, +) +from microsoft.teams.api.models.error import HttpError, InnerHttpError +from microsoft.teams.api.models.invoke_response import AdaptiveCardInvokeResponse +from microsoft.teams.apps import ActivityContext, App +from microsoft.teams.cards import AdaptiveCard, ExecuteAction, TextBlock +from microsoft.teams.devtools import DevToolsPlugin +from pydantic import Field + +app = App(plugins=[DevToolsPlugin()]) + +# Thread storage keyed by conversation ID +threads: dict[str, AgentThread] = {} + + +# Define approval-required functions +@ai_function(approval_mode="always_require") +async def send_email( + to: Annotated[str, Field(description="Recipient email address")], + subject: Annotated[str, Field(description="Email subject")], + body: Annotated[str, Field(description="Email body")], +) -> str: + """Send an email to a recipient.""" + await asyncio.sleep(0.5) # Simulate sending + return f"✅ Email sent to {to} with subject '{subject}'" + + +@ai_function(approval_mode="always_require") +async def book_meeting_room( + room: Annotated[str, Field(description="Room name or number")], + date: Annotated[str, Field(description="Date in YYYY-MM-DD format")], + time: Annotated[str, Field(description="Time in HH:MM format")], +) -> str: + """Book a meeting room.""" + await asyncio.sleep(0.5) + return f"✅ Booked {room} for {date} at {time}" + + +@ai_function(approval_mode="always_require") +async def create_calendar_event( + title: Annotated[str, Field(description="Event title")], + date: Annotated[str, Field(description="Date in YYYY-MM-DD format")], + time: Annotated[str, Field(description="Time in HH:MM format")], +) -> str: + """Create a calendar event.""" + await asyncio.sleep(0.5) + return f"✅ Created calendar event '{title}' on {date} at {time}" + + +# Create agent (singleton, reused across conversations) +agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are a helpful assistant that can manage emails, calendar events, and meeting rooms. + When asked to perform actions, use the available tools to help the user. + """, + tools=[send_email, book_meeting_room, create_calendar_event], +) + + +def create_approval_card(request: FunctionApprovalRequestContent) -> AdaptiveCard: + """Create an adaptive card for function approval. 
+ + Args: + request: FunctionApprovalRequestContent from agent response + + Returns: + AdaptiveCard with approval/rejection actions + """ + arguments = request.function_call.parse_arguments() + + return AdaptiveCard( + body=[ + TextBlock(text="🔐 Approval Required", weight="Bolder", size="Large"), + TextBlock(text=f"**Function:** `{request.function_call.name}`"), + TextBlock(text="**Parameters:**"), + TextBlock(text=f"```json\n{json.dumps(arguments, indent=2)}\n```"), + ], + actions=[ + ExecuteAction(title="✅ Approve").with_data( + { + "type": "approval", + "action": "approve", + "request_id": request.id, + "function_call": request.function_call.to_dict(), + } + ), + ExecuteAction(title="❌ Reject").with_data( + { + "type": "approval", + "action": "reject", + "request_id": request.id, + "function_call": request.function_call.to_dict(), + } + ), + ], + ) + + +@app.on_message_pattern(re.compile("approval .*")) +async def handle_approval_message(ctx: ActivityContext[MessageActivity]): + """Handle approval workflow messages. + + This handler: + 1. Runs the agent with the user's message + 2. Checks if any function requires approval + 3. Sends adaptive card(s) if approval is needed + 4. Otherwise, sends the agent's response directly + """ + ctx.logger.info("Handling approval message") + + conversation_id = ctx.activity.conversation.id + text = ctx.activity.text.removeprefix("approval ") + + # Get or create thread for this conversation + if conversation_id not in threads: + threads[conversation_id] = agent.get_new_thread() + thread = threads[conversation_id] + + # Run agent + result = await agent.run(text, thread=thread) + + # Check for approval requests + if result.user_input_requests: + # Send adaptive card for each approval request + for request in result.user_input_requests: + card = create_approval_card(request) + await ctx.send(card) + else: + # No approval needed, send result directly + await ctx.reply(result.text) + + +@app.on_card_action +async def handle_approval_card_action(ctx: ActivityContext[AdaptiveCardInvokeActivity]) -> AdaptiveCardInvokeResponse: + """Handle approval card submissions. + + This handler: + 1. Extracts the approval decision (approve/reject) + 2. Reconstructs the FunctionApprovalResponseContent + 3. Resumes the agent with the approval response + 4. Sends the final result back to the user + """ + data = ctx.activity.value.action.data + + # Check if this is an approval action + if data.get("type") != "approval": + # Not our action, skip + return AdaptiveCardActionMessageResponse( + status_code=200, + type="application/vnd.microsoft.activity.message", + value="Action processed", + ) + + conversation_id = ctx.activity.conversation.id + thread = threads.get(conversation_id) + + if not thread: + await ctx.send("❌ Session expired. 
Please start over.") + return AdaptiveCardActionMessageResponse( + status_code=200, + type="application/vnd.microsoft.activity.message", + value="Session expired", + ) + + # Parse approval decision + approved = data.get("action") == "approve" + request_id = data.get("request_id") + function_call_data = data.get("function_call") + + # Validate data + if not request_id or not function_call_data: + await ctx.send("❌ Invalid approval data.") + return AdaptiveCardActionErrorResponse( + status_code=400, + type="application/vnd.microsoft.error", + value=HttpError( + code="BadRequest", + message="Invalid approval data", + inner_http_error=InnerHttpError( + status_code=400, + body={"error": "Missing request_id or function_call data"}, + ), + ), + ) + + # Reconstruct FunctionApprovalResponseContent + approval_response = FunctionApprovalResponseContent( + id=request_id, + function_call=FunctionCallContent.from_dict(function_call_data), + approved=approved, + ) + + # Resume agent with approval response + result = await agent.run(ChatMessage(role="user", contents=[approval_response]), thread=thread) + + # Send final result + await ctx.send(result.text) + + return AdaptiveCardActionMessageResponse( + status_code=200, + type="application/vnd.microsoft.activity.message", + value="Approval processed", + ) + + +@app.on_message +async def handle_message(ctx: ActivityContext[MessageActivity]): + """Default message handler.""" + ctx.logger.info("Handling general message") + + +if __name__ == "__main__": + asyncio.run(app.start()) diff --git a/examples/agent-framework-integration/src/chat-prompt.py b/examples/agent-framework-integration/src/chat-prompt.py new file mode 100644 index 00000000..0165b9db --- /dev/null +++ b/examples/agent-framework-integration/src/chat-prompt.py @@ -0,0 +1,240 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import asyncio +import re +from random import randint +from typing import Annotated, Literal + +from microsoft.teams.ai import ChatPrompt, Function, ListMemory +from microsoft.teams.api import MessageActivity, MessageActivityInput +from microsoft.teams.apps import ActivityContext, App +from microsoft.teams.devtools import DevToolsPlugin +from microsoft.teams.mcpplugin import McpClientPlugin +from microsoft.teams.openai import OpenAICompletionsAIModel +from pydantic import BaseModel, Field + +app = App(plugins=[DevToolsPlugin()]) + +# AI Model +model = OpenAICompletionsAIModel() + + +# Tool function definitions (same as agent-framework version) +class GetWeatherParams(BaseModel): + location: Annotated[str, Field(description="The location to get the weather for.")] + + +def get_weather(params: GetWeatherParams) -> str: + """Get the weather for a given location.""" + conditions = ["sunny", "cloudy", "rainy", "stormy"] + return f"The weather in {params.location} is {conditions[randint(0, 3)]} with a high of {randint(10, 30)}°C." 
+ + +class GetMenuSpecialsParams(BaseModel): + """No parameters needed for menu specials""" + + pass + + +def get_menu_specials(params: GetMenuSpecialsParams) -> str: + """Get today's menu specials.""" + return """ + Special Soup: Clam Chowder + Special Salad: Cobb Salad + Special Drink: Chai Tea + """ + + +@app.on_message_pattern(re.compile("basic .*")) +async def handle_basic_message(ctx: ActivityContext[MessageActivity]): + """Handle message activities using ChatPrompt (equivalent to basic handler).""" + ctx.logger.info("Handling basic message") + text = ctx.activity.text.removeprefix("basic ") + + prompt = ChatPrompt(model) + chat_result = await prompt.send( + input=text, + instructions=""" + You are a friendly but hilarious pirate robot. + """, + ) + + if chat_result.response.content: + message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() + await ctx.send(message) + + +@app.on_message_pattern(re.compile("function .*")) +async def handle_tool_calling(ctx: ActivityContext[MessageActivity]): + """Handle function calling using ChatPrompt with functions.""" + ctx.logger.info("Handling function calling message") + text = ctx.activity.text.removeprefix("function ") + + prompt = ChatPrompt(model) + prompt.with_function( + Function( + name="get_weather", + description="Get the weather for a given location.", + parameter_schema=GetWeatherParams, + handler=get_weather, + ) + ).with_function( + Function( + name="get_menu_specials", + description="Get today's menu specials.", + parameter_schema=GetMenuSpecialsParams, + handler=get_menu_specials, + ) + ) + + chat_result = await prompt.send( + input=text, + instructions=""" + You are a friendly but hilarious pirate robot. + You MUST use a tool call to answer the user's question. + If no tool call is available, then you may tell the user that + they need to use one of the available functions. + """, + ) + + if chat_result.response.content: + message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() + await ctx.send(message) + + +@app.on_message_pattern(re.compile("streaming .*")) +async def handle_streaming(ctx: ActivityContext[MessageActivity]): + """Handle streaming responses using ChatPrompt with on_chunk callback.""" + ctx.logger.info("Handling streaming message") + text = ctx.activity.text.removeprefix("streaming ") + + prompt = ChatPrompt(model) + prompt.with_function( + Function( + name="get_weather", + description="Get the weather for a given location.", + parameter_schema=GetWeatherParams, + handler=get_weather, + ) + ).with_function( + Function( + name="get_menu_specials", + description="Get today's menu specials.", + parameter_schema=GetMenuSpecialsParams, + handler=get_menu_specials, + ) + ) + + chat_result = await prompt.send( + input=text, + instructions=""" + You are a friendly but hilarious pirate robot. + """, + on_chunk=lambda chunk: ctx.stream.emit(chunk), + ) + + # Emit final AI generated marker for streaming + if chat_result.response.content: + ctx.stream.emit(MessageActivityInput().add_ai_generated()) + + +class SentimentResult(BaseModel): + sentiment: Literal["positive", "negative"] + + +@app.on_message_pattern(re.compile("structured .*")) +async def handle_structured_message(ctx: ActivityContext[MessageActivity]): + """ + Handle structured output requests. + + NOTE: ChatPrompt does not currently support structured output (response_format). + This handler uses instructions to request structured format as a workaround. 
+ """ + ctx.logger.info("Handling structured message") + text = ctx.activity.text.removeprefix("structured ") + + prompt = ChatPrompt(model) + chat_result = await prompt.send( + input=text, + instructions=""" + You are an agent that judges if a sentence is positive or negative. + Respond with ONLY a JSON object in this format: {"sentiment": "positive"} or {"sentiment": "negative"} + Do not include any other text. + """, + ) + + if chat_result.response.content: + # Note: Without response_format support, we get a string response + # In a production app, you would parse the JSON string here + await ctx.reply(chat_result.response.content) + + +# Memory store for conversations +memory_store: dict[str, ListMemory] = {} + + +def get_or_create_memory(conversation_id: str) -> ListMemory: + """Get or create conversation memory for a specific conversation.""" + if conversation_id not in memory_store: + memory_store[conversation_id] = ListMemory() + return memory_store[conversation_id] + + +@app.on_message_pattern(re.compile("memory .*")) +async def handle_memory_message(ctx: ActivityContext[MessageActivity]): + """Handle messages with conversation memory using ChatPrompt with ListMemory.""" + ctx.logger.info("Handling memory message") + text = ctx.activity.text.removeprefix("memory ") + + # Get or create memory for this conversation + memory = get_or_create_memory(ctx.activity.conversation.id) + + prompt = ChatPrompt(model, memory=memory) + chat_result = await prompt.send( + input=text, + instructions=""" + You are a friendly but hilarious pirate robot. + """, + ) + + if chat_result.response.content: + message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() + await ctx.send(message) + + +@app.on_message_pattern(re.compile("mcp .*")) +async def handle_mcp_message(ctx: ActivityContext[MessageActivity]): + """Handle MCP requests using ChatPrompt with McpClientPlugin.""" + ctx.logger.info("Handling mcp message") + text = ctx.activity.text.removeprefix("mcp ") + + # Create MCP plugin for Microsoft Learn + mcp_plugin = McpClientPlugin() + mcp_plugin.use_mcp_server("https://learn.microsoft.com/api/mcp") + + # Get or create memory for this conversation + memory = get_or_create_memory(ctx.activity.conversation.id) + + prompt = ChatPrompt(model, memory=memory, plugins=[mcp_plugin]) + chat_result = await prompt.send( + input=text, + instructions=""" + You MUST use the tools that you have available to answer the user's request + """, + ) + + if chat_result.response.content: + message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() + await ctx.send(message) + + +@app.on_message +async def handle_message2(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling all message") + + +if __name__ == "__main__": + asyncio.run(app.start()) diff --git a/examples/agent-framework-integration/src/main.py b/examples/agent-framework-integration/src/main.py new file mode 100644 index 00000000..08df23f4 --- /dev/null +++ b/examples/agent-framework-integration/src/main.py @@ -0,0 +1,148 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+""" + +import asyncio +import re +from random import randint +from typing import Annotated, Literal + +from agent_framework import ChatAgent, ChatMessageStore, MCPStreamableHTTPTool +from agent_framework.azure import AzureOpenAIChatClient +from microsoft.teams.api import MessageActivity +from microsoft.teams.apps import ActivityContext, App +from microsoft.teams.devtools import DevToolsPlugin +from pydantic import BaseModel, Field + +app = App(plugins=[DevToolsPlugin()]) + + +def get_weather( + location: Annotated[str, Field(description="The location to get the weather for.")], +) -> str: + """Get the weather for a given location.""" + conditions = ["sunny", "cloudy", "rainy", "stormy"] + return f"The weather in {location} is {conditions[randint(0, 3)]} with a high of {randint(10, 30)}°C." + + +def get_menu_specials() -> str: + """Get today's menu specials.""" + return """ + Special Soup: Clam Chowder + Special Salad: Cobb Salad + Special Drink: Chai Tea + """ + + +@app.on_message_pattern(re.compile("basic .*")) +async def handle_basic_message(ctx: ActivityContext[MessageActivity]): + """Handle message activities using the new generated handler system.""" + ctx.logger.info("Handling basic message") + text = ctx.activity.text.removeprefix("basic ") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are a friendly but hilarious pirate robot. + """, + ) + + result = await agent.run(text) + await ctx.reply(result.text) + + +@app.on_message_pattern(re.compile("function .*")) +async def handle_tool_calling(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling function calling message") + text = ctx.activity.text.removeprefix("function ") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are a friendly but hilarious pirate robot. + You MUST use a tool call to answer the user's question. + If no tool call is available, then you may tell the user that + they need to use one of the available functions. + """, + tools=[get_weather, get_menu_specials], + ) + + result = await agent.run(text) + await ctx.reply(result.text) + + +@app.on_message_pattern(re.compile("streaming .*")) +async def handle_streaming(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling streaming message") + text = ctx.activity.text.removeprefix("streaming ") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are a friendly but hilarious pirate robot. + """, + tools=[get_weather, get_menu_specials], + ) + + async for update in agent.run_stream(text): + ctx.stream.emit(update.text) + + +class SentimentResult(BaseModel): + sentiment: Literal["positive", "negative"] + + +@app.on_message_pattern(re.compile("structured .*")) +async def handle_structured_message(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling structured message") + text = ctx.activity.text.removeprefix("structured ") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are an agent that judges if a senstence is positive or negative. 
+ """, + ) + + result = await agent.run(text, response_format=SentimentResult) + + if result.value: + await ctx.reply(str(result.value)) + + +memory = ChatMessageStore() + + +@app.on_message_pattern(re.compile("memory .*")) +async def handle_memory_message(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling memory message") + text = ctx.activity.text.removeprefix("memory ") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You are a friendly but hilarious pirate robot. + """, + chat_message_store_factory=lambda: memory, + ) + + result = await agent.run(text) + await ctx.reply(result.text) + + +@app.on_message_pattern(re.compile("mcp .*")) +async def handle_mcp_message(ctx: ActivityContext[MessageActivity]): + ctx.logger.info("Handling mcp message") + text = ctx.activity.text.removeprefix("mcp ") + learn_mcp = MCPStreamableHTTPTool("microsoft-learn", "https://learn.microsoft.com/api/mcp") + agent = ChatAgent( + chat_client=AzureOpenAIChatClient(), + instructions=""" + You MUST use the tools that you have available to answer the user's request + """, + chat_message_store_factory=lambda: memory, + tools=[learn_mcp], + ) + + result = await agent.run(text) + await ctx.reply(result.text) + + +if __name__ == "__main__": + asyncio.run(app.start()) diff --git a/uv.lock b/uv.lock index c8f4267e..ae029328 100644 --- a/uv.lock +++ b/uv.lock @@ -9,8 +9,10 @@ resolution-markers = [ [manifest] members = [ "a2a", + "agent-framework-integration", "ai-test", "cards", + "defferred-ai", "dialogs", "echo", "graph", @@ -88,6 +90,51 @@ http-server = [ { name = "starlette" }, ] +[[package]] +name = "agent-framework-core" +version = "1.0.0b251114" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "azure-identity" }, + { name = "mcp", extra = ["ws"] }, + { name = "openai" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-grpc" }, + { name = "opentelemetry-sdk" }, + { name = "opentelemetry-semantic-conventions-ai" }, + { name = "packaging" }, + { name = "pydantic" }, + { name = "pydantic-settings" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/3d/e4/5e0f7277e381794d6ee218e8b1172614d2520db7e3a84d6b599f21bc8e72/agent_framework_core-1.0.0b251114.tar.gz", hash = "sha256:adaff1297bcc185e1ca24fcec6c511c0a7c8ec0fccad65c1f8b3096de5154ecd", size = 278321, upload-time = "2025-11-15T01:01:38.013Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e1/f6/90f3aa4c1b1c2a4c7a8281301a5151554a9d77426e1f7868c8588b1d9307/agent_framework_core-1.0.0b251114-py3-none-any.whl", hash = "sha256:28834b439de75aa4aaa7310a202cb9dfa414542b16332b7ed572d28f9798ae15", size = 322518, upload-time = "2025-11-15T01:01:36.366Z" }, +] + +[[package]] +name = "agent-framework-integration" +version = "0.1.0" +source = { virtual = "examples/agent-framework-integration" } +dependencies = [ + { name = "agent-framework-core" }, + { name = "dotenv" }, + { name = "microsoft-teams-ai" }, + { name = "microsoft-teams-apps" }, + { name = "microsoft-teams-devtools" }, + { name = "microsoft-teams-openai" }, +] + +[package.metadata] +requires-dist = [ + { name = "agent-framework-core", specifier = ">=1.0.0b251114" }, + { name = "dotenv", specifier = ">=0.9.9" }, + { name = "microsoft-teams-ai", editable = "packages/ai" }, + { name = "microsoft-teams-apps", editable = "packages/apps" }, + { name = "microsoft-teams-devtools", editable = "packages/devtools" }, + { name = "microsoft-teams-openai", 
editable = "packages/openai" }, +] + [[package]] name = "ai-test" version = "0.1.0" @@ -618,6 +665,21 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/f0/8b/2c95f0645c6f40211896375e6fa51f504b8ccb29c21f6ae661fe87ab044e/cyclopts-3.24.0-py3-none-any.whl", hash = "sha256:809d04cde9108617106091140c3964ee6fceb33cecdd537f7ffa360bde13ed71", size = 86154, upload-time = "2025-09-08T15:40:56.41Z" }, ] +[[package]] +name = "defferred-ai" +version = "0.1.0" +source = { virtual = "examples/defferred_ai" } +dependencies = [ + { name = "dotenv" }, + { name = "microsoft-teams-apps" }, +] + +[package.metadata] +requires-dist = [ + { name = "dotenv", specifier = ">=0.9.9" }, + { name = "microsoft-teams-apps", editable = "packages/apps" }, +] + [[package]] name = "dependency-injector" version = "4.48.2" @@ -909,6 +971,37 @@ requires-dist = [ { name = "microsoft-teams-graph", editable = "packages/graph" }, ] +[[package]] +name = "grpcio" +version = "1.76.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b6/e0/318c1ce3ae5a17894d5791e87aea147587c9e702f24122cc7a5c8bbaeeb1/grpcio-1.76.0.tar.gz", hash = "sha256:7be78388d6da1a25c0d5ec506523db58b18be22d9c37d8d3a32c08be4987bd73", size = 12785182, upload-time = "2025-10-21T16:23:12.106Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/bf/05/8e29121994b8d959ffa0afd28996d452f291b48cfc0875619de0bde2c50c/grpcio-1.76.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:81fd9652b37b36f16138611c7e884eb82e0cec137c40d3ef7c3f9b3ed00f6ed8", size = 5799718, upload-time = "2025-10-21T16:21:17.939Z" }, + { url = "https://files.pythonhosted.org/packages/d9/75/11d0e66b3cdf998c996489581bdad8900db79ebd83513e45c19548f1cba4/grpcio-1.76.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:04bbe1bfe3a68bbfd4e52402ab7d4eb59d72d02647ae2042204326cf4bbad280", size = 11825627, upload-time = "2025-10-21T16:21:20.466Z" }, + { url = "https://files.pythonhosted.org/packages/28/50/2f0aa0498bc188048f5d9504dcc5c2c24f2eb1a9337cd0fa09a61a2e75f0/grpcio-1.76.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d388087771c837cdb6515539f43b9d4bf0b0f23593a24054ac16f7a960be16f4", size = 6359167, upload-time = "2025-10-21T16:21:23.122Z" }, + { url = "https://files.pythonhosted.org/packages/66/e5/bbf0bb97d29ede1d59d6588af40018cfc345b17ce979b7b45424628dc8bb/grpcio-1.76.0-cp312-cp312-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:9f8f757bebaaea112c00dba718fc0d3260052ce714e25804a03f93f5d1c6cc11", size = 7044267, upload-time = "2025-10-21T16:21:25.995Z" }, + { url = "https://files.pythonhosted.org/packages/f5/86/f6ec2164f743d9609691115ae8ece098c76b894ebe4f7c94a655c6b03e98/grpcio-1.76.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:980a846182ce88c4f2f7e2c22c56aefd515daeb36149d1c897f83cf57999e0b6", size = 6573963, upload-time = "2025-10-21T16:21:28.631Z" }, + { url = "https://files.pythonhosted.org/packages/60/bc/8d9d0d8505feccfdf38a766d262c71e73639c165b311c9457208b56d92ae/grpcio-1.76.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f92f88e6c033db65a5ae3d97905c8fea9c725b63e28d5a75cb73b49bda5024d8", size = 7164484, upload-time = "2025-10-21T16:21:30.837Z" }, + { url = "https://files.pythonhosted.org/packages/67/e6/5d6c2fc10b95edf6df9b8f19cf10a34263b7fd48493936fffd5085521292/grpcio-1.76.0-cp312-cp312-musllinux_1_2_i686.whl", hash = 
"sha256:4baf3cbe2f0be3289eb68ac8ae771156971848bb8aaff60bad42005539431980", size = 8127777, upload-time = "2025-10-21T16:21:33.577Z" }, + { url = "https://files.pythonhosted.org/packages/3f/c8/dce8ff21c86abe025efe304d9e31fdb0deaaa3b502b6a78141080f206da0/grpcio-1.76.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:615ba64c208aaceb5ec83bfdce7728b80bfeb8be97562944836a7a0a9647d882", size = 7594014, upload-time = "2025-10-21T16:21:41.882Z" }, + { url = "https://files.pythonhosted.org/packages/e0/42/ad28191ebf983a5d0ecef90bab66baa5a6b18f2bfdef9d0a63b1973d9f75/grpcio-1.76.0-cp312-cp312-win32.whl", hash = "sha256:45d59a649a82df5718fd9527ce775fd66d1af35e6d31abdcdc906a49c6822958", size = 3984750, upload-time = "2025-10-21T16:21:44.006Z" }, + { url = "https://files.pythonhosted.org/packages/9e/00/7bd478cbb851c04a48baccaa49b75abaa8e4122f7d86da797500cccdd771/grpcio-1.76.0-cp312-cp312-win_amd64.whl", hash = "sha256:c088e7a90b6017307f423efbb9d1ba97a22aa2170876223f9709e9d1de0b5347", size = 4704003, upload-time = "2025-10-21T16:21:46.244Z" }, + { url = "https://files.pythonhosted.org/packages/fc/ed/71467ab770effc9e8cef5f2e7388beb2be26ed642d567697bb103a790c72/grpcio-1.76.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:26ef06c73eb53267c2b319f43e6634c7556ea37672029241a056629af27c10e2", size = 5807716, upload-time = "2025-10-21T16:21:48.475Z" }, + { url = "https://files.pythonhosted.org/packages/2c/85/c6ed56f9817fab03fa8a111ca91469941fb514e3e3ce6d793cb8f1e1347b/grpcio-1.76.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:45e0111e73f43f735d70786557dc38141185072d7ff8dc1829d6a77ac1471468", size = 11821522, upload-time = "2025-10-21T16:21:51.142Z" }, + { url = "https://files.pythonhosted.org/packages/ac/31/2b8a235ab40c39cbc141ef647f8a6eb7b0028f023015a4842933bc0d6831/grpcio-1.76.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:83d57312a58dcfe2a3a0f9d1389b299438909a02db60e2f2ea2ae2d8034909d3", size = 6362558, upload-time = "2025-10-21T16:21:54.213Z" }, + { url = "https://files.pythonhosted.org/packages/bd/64/9784eab483358e08847498ee56faf8ff6ea8e0a4592568d9f68edc97e9e9/grpcio-1.76.0-cp313-cp313-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:3e2a27c89eb9ac3d81ec8835e12414d73536c6e620355d65102503064a4ed6eb", size = 7049990, upload-time = "2025-10-21T16:21:56.476Z" }, + { url = "https://files.pythonhosted.org/packages/2b/94/8c12319a6369434e7a184b987e8e9f3b49a114c489b8315f029e24de4837/grpcio-1.76.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61f69297cba3950a524f61c7c8ee12e55c486cb5f7db47ff9dcee33da6f0d3ae", size = 6575387, upload-time = "2025-10-21T16:21:59.051Z" }, + { url = "https://files.pythonhosted.org/packages/15/0f/f12c32b03f731f4a6242f771f63039df182c8b8e2cf8075b245b409259d4/grpcio-1.76.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6a15c17af8839b6801d554263c546c69c4d7718ad4321e3166175b37eaacca77", size = 7166668, upload-time = "2025-10-21T16:22:02.049Z" }, + { url = "https://files.pythonhosted.org/packages/ff/2d/3ec9ce0c2b1d92dd59d1c3264aaec9f0f7c817d6e8ac683b97198a36ed5a/grpcio-1.76.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:25a18e9810fbc7e7f03ec2516addc116a957f8cbb8cbc95ccc80faa072743d03", size = 8124928, upload-time = "2025-10-21T16:22:04.984Z" }, + { url = "https://files.pythonhosted.org/packages/1a/74/fd3317be5672f4856bcdd1a9e7b5e17554692d3db9a3b273879dc02d657d/grpcio-1.76.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:931091142fd8cc14edccc0845a79248bc155425eee9a98b2db2ea4f00a235a42", size 
= 7589983, upload-time = "2025-10-21T16:22:07.881Z" }, + { url = "https://files.pythonhosted.org/packages/45/bb/ca038cf420f405971f19821c8c15bcbc875505f6ffadafe9ffd77871dc4c/grpcio-1.76.0-cp313-cp313-win32.whl", hash = "sha256:5e8571632780e08526f118f74170ad8d50fb0a48c23a746bef2a6ebade3abd6f", size = 3984727, upload-time = "2025-10-21T16:22:10.032Z" }, + { url = "https://files.pythonhosted.org/packages/41/80/84087dc56437ced7cdd4b13d7875e7439a52a261e3ab4e06488ba6173b0a/grpcio-1.76.0-cp313-cp313-win_amd64.whl", hash = "sha256:f9f7bd5faab55f47231ad8dba7787866b69f5e93bc306e3915606779bbfb4ba8", size = 4702799, upload-time = "2025-10-21T16:22:12.709Z" }, +] + [[package]] name = "h11" version = "0.16.0" @@ -1260,6 +1353,11 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/1c/72/3751feae343a5ad07959df713907b5c3fbaed269d697a14b0c449080cf2e/mcp-1.17.0-py3-none-any.whl", hash = "sha256:0660ef275cada7a545af154db3082f176cf1d2681d5e35ae63e014faf0a35d40", size = 167737, upload-time = "2025-10-10T12:16:42.863Z" }, ] +[package.optional-dependencies] +ws = [ + { name = "websockets" }, +] + [[package]] name = "mcp-client" version = "0.1.0" @@ -1969,6 +2067,48 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/ae/a2/d86e01c28300bd41bab8f18afd613676e2bd63515417b77636fc1add426f/opentelemetry_api-1.38.0-py3-none-any.whl", hash = "sha256:2891b0197f47124454ab9f0cf58f3be33faca394457ac3e09daba13ff50aa582", size = 65947, upload-time = "2025-10-16T08:35:30.23Z" }, ] +[[package]] +name = "opentelemetry-exporter-otlp-proto-common" +version = "1.38.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "opentelemetry-proto" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/19/83/dd4660f2956ff88ed071e9e0e36e830df14b8c5dc06722dbde1841accbe8/opentelemetry_exporter_otlp_proto_common-1.38.0.tar.gz", hash = "sha256:e333278afab4695aa8114eeb7bf4e44e65c6607d54968271a249c180b2cb605c", size = 20431, upload-time = "2025-10-16T08:35:53.285Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a7/9e/55a41c9601191e8cd8eb626b54ee6827b9c9d4a46d736f32abc80d8039fc/opentelemetry_exporter_otlp_proto_common-1.38.0-py3-none-any.whl", hash = "sha256:03cb76ab213300fe4f4c62b7d8f17d97fcfd21b89f0b5ce38ea156327ddda74a", size = 18359, upload-time = "2025-10-16T08:35:34.099Z" }, +] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-grpc" +version = "1.38.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "googleapis-common-protos" }, + { name = "grpcio" }, + { name = "opentelemetry-api" }, + { name = "opentelemetry-exporter-otlp-proto-common" }, + { name = "opentelemetry-proto" }, + { name = "opentelemetry-sdk" }, + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/a2/c0/43222f5b97dc10812bc4f0abc5dc7cd0a2525a91b5151d26c9e2e958f52e/opentelemetry_exporter_otlp_proto_grpc-1.38.0.tar.gz", hash = "sha256:2473935e9eac71f401de6101d37d6f3f0f1831db92b953c7dcc912536158ebd6", size = 24676, upload-time = "2025-10-16T08:35:53.83Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/28/f0/bd831afbdba74ca2ce3982142a2fad707f8c487e8a3b6fef01f1d5945d1b/opentelemetry_exporter_otlp_proto_grpc-1.38.0-py3-none-any.whl", hash = "sha256:7c49fd9b4bd0dbe9ba13d91f764c2d20b0025649a6e4ac35792fb8d84d764bc7", size = 19695, upload-time = "2025-10-16T08:35:35.053Z" }, +] + +[[package]] +name = "opentelemetry-proto" +version = "1.38.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = 
"protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/51/14/f0c4f0f6371b9cb7f9fa9ee8918bfd59ac7040c7791f1e6da32a1839780d/opentelemetry_proto-1.38.0.tar.gz", hash = "sha256:88b161e89d9d372ce723da289b7da74c3a8354a8e5359992be813942969ed468", size = 46152, upload-time = "2025-10-16T08:36:01.612Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b6/6a/82b68b14efca5150b2632f3692d627afa76b77378c4999f2648979409528/opentelemetry_proto-1.38.0-py3-none-any.whl", hash = "sha256:b6ebe54d3217c42e45462e2a1ae28c3e2bf2ec5a5645236a490f55f45f1a0a18", size = 72535, upload-time = "2025-10-16T08:35:45.749Z" }, +] + [[package]] name = "opentelemetry-sdk" version = "1.38.0" @@ -1996,6 +2136,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/24/7d/c88d7b15ba8fe5c6b8f93be50fc11795e9fc05386c44afaf6b76fe191f9b/opentelemetry_semantic_conventions-0.59b0-py3-none-any.whl", hash = "sha256:35d3b8833ef97d614136e253c1da9342b4c3c083bbaf29ce31d572a1c3825eed", size = 207954, upload-time = "2025-10-16T08:35:48.054Z" }, ] +[[package]] +name = "opentelemetry-semantic-conventions-ai" +version = "0.4.13" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ba/e6/40b59eda51ac47009fb47afcdf37c6938594a0bd7f3b9fadcbc6058248e3/opentelemetry_semantic_conventions_ai-0.4.13.tar.gz", hash = "sha256:94efa9fb4ffac18c45f54a3a338ffeb7eedb7e1bb4d147786e77202e159f0036", size = 5368, upload-time = "2025-08-22T10:14:17.387Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/35/b5/cf25da2218910f0d6cdf7f876a06bed118c4969eacaf60a887cbaef44f44/opentelemetry_semantic_conventions_ai-0.4.13-py3-none-any.whl", hash = "sha256:883a30a6bb5deaec0d646912b5f9f6dcbb9f6f72557b73d0f2560bf25d13e2d5", size = 6080, upload-time = "2025-08-22T10:14:16.477Z" }, +] + [[package]] name = "packaging" version = "25.0" @@ -2147,17 +2296,16 @@ wheels = [ [[package]] name = "protobuf" -version = "6.33.0" +version = "5.29.5" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/19/ff/64a6c8f420818bb873713988ca5492cba3a7946be57e027ac63495157d97/protobuf-6.33.0.tar.gz", hash = "sha256:140303d5c8d2037730c548f8c7b93b20bb1dc301be280c378b82b8894589c954", size = 443463, upload-time = "2025-10-15T20:39:52.159Z" } +sdist = { url = "https://files.pythonhosted.org/packages/43/29/d09e70352e4e88c9c7a198d5645d7277811448d76c23b00345670f7c8a38/protobuf-5.29.5.tar.gz", hash = "sha256:bc1463bafd4b0929216c35f437a8e28731a2b7fe3d98bb77a600efced5a15c84", size = 425226, upload-time = "2025-05-28T23:51:59.82Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/7e/ee/52b3fa8feb6db4a833dfea4943e175ce645144532e8a90f72571ad85df4e/protobuf-6.33.0-cp310-abi3-win32.whl", hash = "sha256:d6101ded078042a8f17959eccd9236fb7a9ca20d3b0098bbcb91533a5680d035", size = 425593, upload-time = "2025-10-15T20:39:40.29Z" }, - { url = "https://files.pythonhosted.org/packages/7b/c6/7a465f1825872c55e0341ff4a80198743f73b69ce5d43ab18043699d1d81/protobuf-6.33.0-cp310-abi3-win_amd64.whl", hash = "sha256:9a031d10f703f03768f2743a1c403af050b6ae1f3480e9c140f39c45f81b13ee", size = 436882, upload-time = "2025-10-15T20:39:42.841Z" }, - { url = "https://files.pythonhosted.org/packages/e1/a9/b6eee662a6951b9c3640e8e452ab3e09f117d99fc10baa32d1581a0d4099/protobuf-6.33.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:905b07a65f1a4b72412314082c7dbfae91a9e8b68a0cc1577515f8df58ecf455", size = 427521, upload-time = "2025-10-15T20:39:43.803Z" }, - { url = 
"https://files.pythonhosted.org/packages/10/35/16d31e0f92c6d2f0e77c2a3ba93185130ea13053dd16200a57434c882f2b/protobuf-6.33.0-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:e0697ece353e6239b90ee43a9231318302ad8353c70e6e45499fa52396debf90", size = 324445, upload-time = "2025-10-15T20:39:44.932Z" }, - { url = "https://files.pythonhosted.org/packages/e6/eb/2a981a13e35cda8b75b5585aaffae2eb904f8f351bdd3870769692acbd8a/protobuf-6.33.0-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:e0a1715e4f27355afd9570f3ea369735afc853a6c3951a6afe1f80d8569ad298", size = 339159, upload-time = "2025-10-15T20:39:46.186Z" }, - { url = "https://files.pythonhosted.org/packages/21/51/0b1cbad62074439b867b4e04cc09b93f6699d78fd191bed2bbb44562e077/protobuf-6.33.0-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:35be49fd3f4fefa4e6e2aacc35e8b837d6703c37a2168a55ac21e9b1bc7559ef", size = 323172, upload-time = "2025-10-15T20:39:47.465Z" }, - { url = "https://files.pythonhosted.org/packages/07/d1/0a28c21707807c6aacd5dc9c3704b2aa1effbf37adebd8caeaf68b17a636/protobuf-6.33.0-py3-none-any.whl", hash = "sha256:25c9e1963c6734448ea2d308cfa610e692b801304ba0908d7bfa564ac5132995", size = 170477, upload-time = "2025-10-15T20:39:51.311Z" }, + { url = "https://files.pythonhosted.org/packages/5f/11/6e40e9fc5bba02988a214c07cf324595789ca7820160bfd1f8be96e48539/protobuf-5.29.5-cp310-abi3-win32.whl", hash = "sha256:3f1c6468a2cfd102ff4703976138844f78ebd1fb45f49011afc5139e9e283079", size = 422963, upload-time = "2025-05-28T23:51:41.204Z" }, + { url = "https://files.pythonhosted.org/packages/81/7f/73cefb093e1a2a7c3ffd839e6f9fcafb7a427d300c7f8aef9c64405d8ac6/protobuf-5.29.5-cp310-abi3-win_amd64.whl", hash = "sha256:3f76e3a3675b4a4d867b52e4a5f5b78a2ef9565549d4037e06cf7b0942b1d3fc", size = 434818, upload-time = "2025-05-28T23:51:44.297Z" }, + { url = "https://files.pythonhosted.org/packages/dd/73/10e1661c21f139f2c6ad9b23040ff36fee624310dc28fba20d33fdae124c/protobuf-5.29.5-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e38c5add5a311f2a6eb0340716ef9b039c1dfa428b28f25a7838ac329204a671", size = 418091, upload-time = "2025-05-28T23:51:45.907Z" }, + { url = "https://files.pythonhosted.org/packages/6c/04/98f6f8cf5b07ab1294c13f34b4e69b3722bb609c5b701d6c169828f9f8aa/protobuf-5.29.5-cp38-abi3-manylinux2014_aarch64.whl", hash = "sha256:fa18533a299d7ab6c55a238bf8629311439995f2e7eca5caaff08663606e9015", size = 319824, upload-time = "2025-05-28T23:51:47.545Z" }, + { url = "https://files.pythonhosted.org/packages/85/e4/07c80521879c2d15f321465ac24c70efe2381378c00bf5e56a0f4fbac8cd/protobuf-5.29.5-cp38-abi3-manylinux2014_x86_64.whl", hash = "sha256:63848923da3325e1bf7e9003d680ce6e14b07e55d0473253a690c3a8b8fd6e61", size = 319942, upload-time = "2025-05-28T23:51:49.11Z" }, + { url = "https://files.pythonhosted.org/packages/7e/cc/7e77861000a0691aeea8f4566e5d3aa716f2b1dece4a24439437e41d3d25/protobuf-5.29.5-py3-none-any.whl", hash = "sha256:6cf42630262c59b2d8de33954443d94b746c952b01434fc58a417fdbd2e84bd5", size = 172823, upload-time = "2025-05-28T23:51:58.157Z" }, ] [[package]]