microsoft/agent-framework
A framework for building, orchestrating, and deploying AI agents and multi-agent workflows, with support for Python and .NET.
Welcome to Microsoft Agent Framework!
Microsoft Agent Framework is a comprehensive multi-language framework for building, orchestrating, and deploying AI agents, with both .NET and Python implementations. It provides everything from simple chat agents to complex multi-agent workflows with graph-based orchestration.
Watch the full Agent Framework introduction (30 min)
📋 Getting Started
📦 Installation
Python
pip install agent-framework --pre
This installs all sub-packages; see python/packages for the individual packages. The first install may take a minute on Windows.
.NET
dotnet add package Microsoft.Agents.AI
📚 Documentation
- Overview - High-level overview of the framework
- Quick Start - Get started with a simple agent
- Tutorials - Step-by-step tutorials
- User Guide - In-depth user guide for building agents and workflows
- Migration from Semantic Kernel - Guide to migrate from Semantic Kernel
- Migration from AutoGen - Guide to migrate from AutoGen
Still have questions? Join our weekly office hours or ask questions in our Discord channel to get help from the team and other users.
✨ Highlights
- Graph-based Workflows: Connect agents and deterministic functions using data flows with streaming, checkpointing, human-in-the-loop, and time-travel capabilities
Python workflows | .NET workflows
- AF Labs: Experimental packages for cutting-edge features including benchmarking, reinforcement learning, and research initiatives
Labs directory
- DevUI: Interactive developer UI for agent development, testing, and debugging workflows
DevUI package
See the DevUI in action (1 min)
- Python and C#/.NET Support: Full framework support for both Python and C#/.NET implementations with consistent APIs
Python packages | .NET source
- Observability: Built-in OpenTelemetry integration for distributed tracing, monitoring, and debugging
Python observability | .NET telemetry
- Multiple Agent Providers: Support for various LLM providers, with more being added continuously
Python examples | .NET examples
- Middleware: Flexible middleware system for request/response processing, exception handling, and custom pipelines
Python middleware | .NET middleware
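To make the graph-based workflow bullet above concrete, here is a minimal, framework-agnostic sketch of a checkpointed step graph: data flows from node to node, and state is snapshotted after each node so an interrupted run can resume. The node functions and runner are hypothetical illustrations of the idea, not the agent-framework Workflow API (see the workflow docs linked above for the real one).

```python
# Illustrative only: a tiny checkpointed step graph, NOT the
# agent-framework Workflow API. Node names are hypothetical.
import asyncio
import json

async def fetch_notes(state):
    state["notes"] = f"notes about {state['topic']}"
    return state

async def summarize(state):
    state["summary"] = state["notes"].upper()
    return state

STEPS = [("fetch", fetch_notes), ("summarize", summarize)]

async def run(state, checkpoints=None):
    # Snapshot state after each node so a crashed run can resume from
    # the last completed node instead of starting over.
    checkpoints = {} if checkpoints is None else checkpoints
    for name, fn in STEPS:
        if name in checkpoints:
            state = checkpoints[name]  # resume: reuse the saved snapshot
            continue
        state = await fn(state)
        checkpoints[name] = json.loads(json.dumps(state))  # deep copy
    return state, checkpoints

final_state, checkpoints = asyncio.run(run({"topic": "agents"}))
```

The framework's implementation is far more capable (streaming, human-in-the-loop, time-travel over checkpoints), but the core resume-from-snapshot mechanic is the same shape.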
💬 We want your feedback!
- For bugs, please file a GitHub issue.
Quickstart
Basic Agent - Python
Create a simple agent, using Microsoft Foundry, that writes a haiku about the Microsoft Agent Framework
```python
# pip install agent-framework --pre
# Use `az login` to authenticate with the Azure CLI.
import asyncio
import os

from agent_framework import Agent
from agent_framework.foundry import FoundryChatClient
from azure.identity import AzureCliCredential


async def main():
    # Initialize a chat agent with Microsoft Foundry. The endpoint,
    # deployment name, and API version can be set via environment variables,
    # or passed directly to the FoundryChatClient constructor.
    agent = Agent(
        client=FoundryChatClient(
            credential=AzureCliCredential(),
            project_endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"],
            model=os.environ["FOUNDRY_MODEL_DEPLOYMENT_NAME"],
        ),
        name="HaikuBot",
        instructions="You are an upbeat assistant that writes beautifully.",
    )
    print(await agent.run("Write a haiku about Microsoft Agent Framework."))


if __name__ == "__main__":
    asyncio.run(main())
```
Basic Agent - .NET
Create a simple Agent, using OpenAI Responses, that writes a haiku about the Microsoft Agent Framework
```csharp
// dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
using Microsoft.Agents.AI;
using OpenAI;
using OpenAI.Responses;

// Replace <your-openai-api-key> with your OpenAI API key.
var agent = new OpenAIClient("<your-openai-api-key>")
    .GetResponsesClient("gpt-4o-mini")
    .AsAIAgent(name: "HaikuBot", instructions: "You are an upbeat assistant that writes beautifully.");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));
```
Create a simple Agent, using Microsoft Foundry with token-based auth, that writes a haiku about the Microsoft Agent Framework
```csharp
// dotnet add package Microsoft.Agents.AI.AzureAI --prerelease
// dotnet add package Azure.Identity
// Use `az login` to authenticate with the Azure CLI.
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_AI_PROJECT_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

var agent = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential())
    .AsAIAgent(model: deploymentName, name: "HaikuBot", instructions: "You are an upbeat assistant that writes beautifully.");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));
```
More Examples & Samples
Python
- Getting Started: progressive tutorial from hello-world to hosting
- Agent Concepts: deep-dive samples by topic (tools, middleware, providers, etc.)
- Workflows: workflow creation and integration with agents
- Hosting: A2A, Azure Functions, Durable Task hosting
- End-to-End: full applications, evaluation, and demos
.NET
- Getting Started: progressive tutorial from hello agent to hosting
- Agent Concepts: basic agent creation and tool usage
- Agent Providers: samples showing different agent providers
- Workflows: advanced multi-agent patterns and workflow orchestration
- Hosting: A2A, Durable Agents, Durable Workflows
- End-to-End: full applications and demos
Troubleshooting
Authentication
| Problem | Cause | Fix |
| --- | --- | --- |
| Authentication errors when using Azure credentials | Not signed in to Azure CLI | Run `az login` before starting your app |
| API key errors | Wrong or missing API key | Verify the key and ensure it's for the correct resource/provider |
Tip: DefaultAzureCredential is convenient for development, but in production consider a specific credential (e.g., ManagedIdentityCredential) to avoid latency, unintended credential probing, and the potential security risks of fallback mechanisms.
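To see why this tip matters, here is a stdlib-only sketch of the probe-chain pattern that DefaultAzureCredential uses internally: each unavailable source must fail before the next one is tried, and those failed probes are where the extra latency and unintended fallbacks come from. The stub classes below are hypothetical stand-ins, not azure.identity types.

```python
# Hypothetical stubs illustrating credential chaining; NOT azure.identity.
class CredentialUnavailable(Exception):
    pass

class EnvironmentSource:
    def get_token(self):
        # Environment credentials are often absent, so this probe fails
        # and the chain moves on -- that probing is the hidden cost.
        raise CredentialUnavailable("no environment credentials")

class CliSource:
    def get_token(self):
        return "token-from-az-cli"

class ChainedCredential:
    def __init__(self, *sources):
        self.sources = sources

    def get_token(self):
        # Try each source in order until one succeeds.
        for source in self.sources:
            try:
                return source.get_token()
            except CredentialUnavailable:
                continue
        raise CredentialUnavailable("all credential sources failed")

token = ChainedCredential(EnvironmentSource(), CliSource()).get_token()
```

Using a single specific credential in production (as the tip recommends) skips the chain entirely, so you pay no probing cost and never fall back to an unintended identity.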
Environment Variables
The samples typically read configuration from environment variables. Common required variables:
| Variable | Used by | Purpose |
| --- | --- | --- |
| AZURE_OPENAI_ENDPOINT | Azure OpenAI samples | Your Azure OpenAI resource URL |
| AZURE_OPENAI_DEPLOYMENT_NAME | Azure OpenAI samples | Model deployment name (e.g. gpt-4o-mini) |
| AZURE_AI_PROJECT_ENDPOINT | Microsoft Foundry samples | Your Microsoft Foundry project endpoint |
| AZURE_AI_MODEL_DEPLOYMENT_NAME | Microsoft Foundry samples | Model deployment name |
| OPENAI_API_KEY | OpenAI (non-Azure) samples | Your OpenAI platform API key |
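Because a missing variable usually surfaces as an opaque KeyError deep inside an SDK call, it can help to check the configuration up front. This is an illustrative helper, not part of the framework; the demo value is set only so the snippet runs standalone.

```python
import os

def require_env(name, default=None):
    """Fail fast with a clear message instead of a KeyError inside an SDK."""
    value = os.environ.get(name, default)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set.")
    return value

# Demo value so the snippet runs standalone; in a real app the variable
# comes from your shell or deployment environment.
os.environ.setdefault("AZURE_AI_PROJECT_ENDPOINT", "https://example.services.ai.azure.com")

endpoint = require_env("AZURE_AI_PROJECT_ENDPOINT")
deployment = require_env("AZURE_AI_MODEL_DEPLOYMENT_NAME", "gpt-4o-mini")
```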
Contributor Resources
- Contributing Guide
- Python Development Guide
- Design Documents
- Architectural Decision Records
Important Notes
If you use the Microsoft Agent Framework to build applications that operate with third-party servers or agents, you do so at your own risk. We recommend reviewing all data being shared with third-party servers or agents and being cognizant of third-party practices for retention and location of data. It is your responsibility to manage whether your data will flow outside of your organization's Azure compliance and geographic boundaries and any related implications.