AutoGen: Multi-Agent Framework for Advanced AI Applications¶
AutoGen is an open-source framework from Microsoft that enables the creation of complex multi-agent systems built on large language models (LLMs). Unlike simple chatbots, AutoGen can orchestrate multiple AI agents that communicate and collaborate to solve complex tasks.
Key Concepts and Architecture¶
AutoGen is built on the concept of conversational agents, where each agent has its own role, instructions, and capabilities. The basic building block is the ConversableAgent class, from which the specialized agent types inherit:
- UserProxyAgent - represents a human or system that initiates conversations
- AssistantAgent - AI agent with access to LLM for generating responses
- GroupChatManager - orchestrates communication between multiple agents via a GroupChat
Installation and Basic Setup¶
To get started with AutoGen, you need to install the basic dependencies:
pip install pyautogen
# Or with extra dependencies
pip install "pyautogen[teachable]"
Next, you need to configure access to an LLM. AutoGen supports various providers, including OpenAI, Azure OpenAI, and local models:
import autogen

config_list = [
    {
        "model": "gpt-4-turbo",
        "api_key": "your-api-key",
        "api_type": "openai"
    }
]

llm_config = {
    "config_list": config_list,
    "temperature": 0.7,
    "timeout": 120
}
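Hardcoding API keys is risky in anything beyond a quick experiment. A common alternative is to read the key from an environment variable; the sketch below assumes the key is exported as OPENAI_API_KEY (adjust the variable name to your setup):

```python
import os

# Read the API key from the environment rather than hardcoding it;
# OPENAI_API_KEY is an assumed variable name for this sketch.
api_key = os.environ.get("OPENAI_API_KEY", "")

config_list = [
    {
        "model": "gpt-4-turbo",
        "api_key": api_key,
        "api_type": "openai",
    }
]

llm_config = {"config_list": config_list, "temperature": 0.7, "timeout": 120}
```

AutoGen also ships helpers such as `autogen.config_list_from_json` for loading a config list from a JSON file or environment variable, which keeps credentials out of the source entirely.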
Creating Your First Multi-Agent System¶
Let’s look at a practical example of two agents - a programmer and a code reviewer - that collaborate on writing and reviewing code:
# Creating the User Proxy agent
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    system_message="I initiate tasks and execute code.",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False
    },
    human_input_mode="NEVER"
)

# Creating an Assistant agent - Programmer
programmer = autogen.AssistantAgent(
    name="programmer",
    system_message="""You are an expert Python programmer.
Write clean, well-documented code according to specifications.
Always add comments and docstrings.""",
    llm_config=llm_config
)

# Creating an Assistant agent - Code Reviewer
reviewer = autogen.AssistantAgent(
    name="code_reviewer",
    system_message="""You are a senior code reviewer.
Check code for errors, best practices, and optimizations.
Provide constructive feedback.""",
    llm_config=llm_config
)
GroupChat for Multi-Agent Orchestration¶
When we have more than two agents, we use GroupChat to manage the conversation:
# Creating a GroupChat with multiple agents
groupchat = autogen.GroupChat(
    agents=[user_proxy, programmer, reviewer],
    messages=[],
    max_round=10,
    speaker_selection_method="round_robin"
)

# Creating the GroupChat manager
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config
)

# Starting the conversation
user_proxy.initiate_chat(
    manager,
    message="Create a Python function for email address validation using regex. Then perform a code review."
)
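For illustration, the kind of function the programmer agent is asked to produce might look like this (a hand-written sketch, not actual agent output):

```python
import re

# Simplified pattern: one "@", no whitespace, and a dotted domain.
# Full email validation per RFC 5322 is considerably more complex.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a valid email."""
    return bool(EMAIL_RE.match(address))
```

For example, `is_valid_email("user@example.com")` returns True, while `is_valid_email("not-an-email")` returns False. The reviewer agent would then be expected to point out exactly the kind of caveat noted in the comment.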
Advanced Features: Custom Tools and Function Calling¶
AutoGen supports integration of external tools and APIs through function calling. Agents can call specific functions as needed:
from typing import Annotated

def get_weather(city: Annotated[str, "City name"]) -> str:
    """Gets current weather for the specified city."""
    # Simulated API call - a real implementation would query a weather service
    return f"In {city} it's 22°C, sunny"

def search_web(query: Annotated[str, "Search query"]) -> str:
    """Searches for information on the web."""
    # Simulated search - a real implementation would call a search API
    return f"Results for: {query}"
# Registering function schemas with the agent
assistant_with_tools = autogen.AssistantAgent(
    name="assistant_with_tools",
    system_message="You are an assistant with access to external tools.",
    llm_config={
        **llm_config,
        "functions": [
            {
                "name": "get_weather",
                "description": "Gets weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"]
                }
            }
        ]
    }
)

# Registering function implementations with the executing agent
user_proxy.register_function(
    function_map={
        "get_weather": get_weather,
        "search_web": search_web
    }
)
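The JSON schema passed in llm_config mirrors the Python signature. As an illustration, a small helper (hypothetical, not part of AutoGen) can derive such a schema from the Annotated hints, assuming every parameter is a string as in get_weather:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

def build_schema(func):
    """Derive an OpenAI-style function schema from Annotated string hints."""
    hints = get_type_hints(func, include_extras=True)
    hints.pop("return", None)
    props = {}
    for name, hint in hints.items():
        desc = ""
        if get_origin(hint) is Annotated:
            _, *meta = get_args(hint)
            if meta:
                desc = meta[0]
        # Assumption: every parameter maps to a JSON string
        props[name] = {"type": "string", "description": desc}
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

# Redefined here so the example is self-contained
def get_weather(city: Annotated[str, "City name"]) -> str:
    """Gets current weather for the specified city."""
    return f"In {city} it's 22°C, sunny"

schema = build_schema(get_weather)
```

This keeps the schema and the implementation in sync from a single source of truth; newer AutoGen releases offer similar convenience via decorator-based tool registration.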
Teachable Agents: Agents That Learn¶
AutoGen also offers TeachableAgent, which can remember and learn from previous interactions:
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

teachable_agent = TeachableAgent(
    name="teachable_assistant",
    system_message="I am a learning assistant, I remember new information.",
    llm_config=llm_config,
    teach_config={
        "verbosity": 0,  # 0 for silent mode
        "reset_db": False,  # don't reset the database on restart
        "path_to_db_dir": "./tmp/notebook",
        "use_analyzer": True
    }
)
# The agent remembers new information
user_proxy.initiate_chat(
    teachable_agent,
    message="Remember: Our main server runs on port 8080"
)

# Later, the agent uses the learned information
user_proxy.initiate_chat(
    teachable_agent,
    message="On which port does our main server run?"
)
Best Practices and Production Tips¶
When deploying AutoGen in production environments, it’s important to follow several key principles:
- Code Security - always use Docker containers for running generated code
- Rate Limiting - implement limits on the number of API calls
- Error Handling - prepare for LLM and network failures
- Logging - log all conversations for debugging and analysis
- Cost Monitoring - track API call costs
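The rate-limiting and error-handling points above can be sketched as a simple retry wrapper with exponential backoff (a hypothetical helper, not an AutoGen API):

```python
import time

def with_retries(func, max_retries=3, base_delay=1.0):
    """Wrap func so failures are retried with exponential backoff."""
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries):
            try:
                return func(*args, **kwargs)
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after the last attempt
                # Sleep 1s, 2s, 4s, ... between attempts
                time.sleep(base_delay * (2 ** attempt))
    return wrapper
```

You might wrap the call that starts a conversation, e.g. `with_retries(lambda: user_proxy.initiate_chat(manager, message=...))()`, so transient LLM or network failures don't abort the whole run. Note that AutoGen's own config (shown below) also exposes retry knobs at the client level.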
# Example of a robust configuration for production
llm_config_production = {
    "config_list": config_list,
    "temperature": 0.1,  # lower temperature for more consistent results
    "timeout": 60,
    "retry_wait_time": 10,
    "max_retries": 3,
    "cache_seed": 42  # for reproducibility
}

# Safe code execution configuration
safe_execution_config = {
    "work_dir": "sandbox",
    "use_docker": True,
    "timeout": 30,
    "last_n_messages": 5  # limit context
}
Integration with Existing Systems¶
AutoGen can be easily integrated into existing applications through callback functions and custom message handlers:
import time

import autogen

class CustomAgent(autogen.AssistantAgent):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.message_history = []

    def receive(self, message, sender, request_reply=None, silent=False):
        # Custom logic for message processing; message may be a str or a dict
        content = message.get("content", "") if isinstance(message, dict) else message
        self.message_history.append({
            "from": sender.name,
            "content": content,
            "timestamp": time.time()
        })
        # Call the parent implementation to keep normal processing
        return super().receive(message, sender, request_reply, silent)
Summary¶
AutoGen represents a significant step forward in the field of multi-agent AI systems. It enables the creation of sophisticated applications where various specialized agents collaborate to solve complex tasks. The framework is ideal for automating code review, generating documentation, data analysis, or creating intelligent assistants. Thanks to its flexible architecture and support for various LLM providers, AutoGen is suitable for both experimentation and production deployment in enterprise environments.