Function Calling is a key technique that enables large language models (LLMs) and AI agents to execute specific functions and interact with external systems. This tutorial will guide you from the basics to advanced implementation techniques.
Introduction to Function Calling in Large Language Models¶
Function Calling (also known as Tool Use) represents a revolutionary approach to extending the capabilities of large language models. Instead of merely generating text, LLMs can now actively call external functions, APIs, or tools, enabling them to perform specific actions and obtain real-time data.
This technology transforms static chatbots into dynamic agents capable of interacting with the external world - from retrieving current information from databases to controlling IoT devices.
How Function Calling Works¶
The Function Calling process occurs in several steps:
- Function definition - We specify available functions including their parameters
- User request - LLM analyzes the query and decides if it needs to call a function
- Selection and calling - Model selects appropriate function and generates correct parameters
- Result processing - Application executes the function and returns result to model
- Final response - LLM formulates response based on obtained data
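Before wiring in a real API, the five steps above can be sketched with plain Python. Everything here (`fake_model`, `get_time`, the hard-coded decision logic) is a stand-in invented for illustration, not a real SDK call; the point is the shape of the round trip between the model and your code:

```python
import json

# Step 1: the functions the model is allowed to call (a stub "tool").
def get_time(timezone):
    return {"timezone": timezone, "time": "12:00"}

TOOLS = {"get_time": get_time}

def fake_model(query, tool_result=None):
    # Stand-in for the LLM. Steps 2-3: decide whether a tool is needed
    # and emit a call with JSON-encoded arguments (decision hard-coded here).
    if tool_result is not None:
        # Step 5: formulate the final answer from the tool's result.
        return {"content": f"The time is {tool_result['time']} ({tool_result['timezone']})"}
    if "time" in query:
        return {"tool_call": {"name": "get_time",
                              "arguments": json.dumps({"timezone": "UTC"})}}
    return {"content": "I can tell you the time."}

def run(query):
    decision = fake_model(query)
    if "tool_call" in decision:
        call = decision["tool_call"]
        args = json.loads(call["arguments"])   # parse the model-generated arguments
        result = TOOLS[call["name"]](**args)   # Step 4: our code executes the function
        return fake_model(query, tool_result=result)["content"]
    return decision["content"]

print(run("What time is it?"))  # The time is 12:00 (UTC)
```

Note that the function itself always runs in your application, never inside the model; the model only produces the name and arguments.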
Practical Implementation with OpenAI API¶
Let’s look at a concrete example of implementing Function Calling with the OpenAI API in Python:
import openai
import json
# Definition of functions that the LLM can call
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Gets current weather for specified city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name"
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature units"
                    }
                },
                "required": ["city"]
            }
        }
    }
]
def get_weather(city, units="celsius"):
    """Weather API call simulation"""
    # In a real application, this would call a weather API
    return {
        "city": city,
        "temperature": "22°C" if units == "celsius" else "72°F",
        "condition": "sunny",
        "humidity": "65%"
    }
Main Communication Loop¶
def chat_with_functions(user_message):
    # Initialize the conversation
    messages = [
        {"role": "system", "content": "You are a helpful assistant with access to functions."},
        {"role": "user", "content": user_message}
    ]
    # First request, with the function definitions attached
    response = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )
    response_message = response.choices[0].message
    messages.append(response_message)
    # Check whether the model wants to call a function
    if response_message.tool_calls:
        for tool_call in response_message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)
            # Dispatch to the appropriate function
            if function_name == "get_weather":
                function_response = get_weather(**function_args)
            else:
                function_response = {"error": f"Unknown function: {function_name}"}
            # Add the result to the conversation
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": json.dumps(function_response)
            })
        # Second request, now including the function results
        final_response = openai.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=messages
        )
        return final_response.choices[0].message.content
    return response_message.content
# Usage
result = chat_with_functions("What's the weather in Prague?")
print(result)
Advanced Techniques and Best Practices¶
Parameter Validation¶
It’s crucial to implement robust parameter validation, since the LLM may occasionally generate invalid values:
from pydantic import BaseModel, ValidationError, field_validator
from typing import Literal

class WeatherRequest(BaseModel):
    city: str
    units: Literal["celsius", "fahrenheit"] = "celsius"

    @field_validator("city")
    @classmethod
    def validate_city(cls, v):
        if len(v.strip()) < 2:
            raise ValueError("City must have at least 2 characters")
        return v.strip().title()

def safe_get_weather(tool_call):
    try:
        args = json.loads(tool_call.function.arguments)
        validated_args = WeatherRequest(**args)
        return get_weather(validated_args.city, validated_args.units)
    except (ValidationError, json.JSONDecodeError) as e:
        return {"error": f"Invalid parameters: {str(e)}"}
Error Handling and Fallbacks¶
def robust_function_call(tool_call, max_retries=3):
    for attempt in range(max_retries):
        try:
            function_name = tool_call.function.name
            if function_name == "get_weather":
                return safe_get_weather(tool_call)
            return {"error": f"Unknown function: {function_name}"}
        except Exception as e:
            if attempt == max_retries - 1:
                return {"error": f"Function failed after {max_retries} attempts: {str(e)}"}
            continue
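Retries alone are not a fallback: if every attempt fails, the agent still needs a well-formed result to hand back to the model instead of an exception that ends the conversation turn. A minimal sketch of retry-with-backoff plus a fallback value (the helper name, delay values, and the flaky example function are illustrative, not part of any SDK):

```python
import time

def call_with_fallback(fn, args, fallback, max_retries=3, base_delay=0.01):
    """Retry fn with exponential backoff; return fallback if all attempts fail."""
    for attempt in range(max_retries):
        try:
            return fn(**args)
        except Exception:
            if attempt == max_retries - 1:
                # Last resort: a well-formed "soft" result the model can explain
                return fallback
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Example: a flaky function that only succeeds on the third attempt.
attempts = {"n": 0}
def flaky_weather(city):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream timeout")
    return {"city": city, "condition": "sunny"}

result = call_with_fallback(flaky_weather, {"city": "Prague"},
                            fallback={"error": "weather service unavailable"})
print(result)  # {'city': 'Prague', 'condition': 'sunny'}
```

Returning an error dictionary as the fallback lets the model apologize or ask the user to retry, rather than crashing the whole exchange.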
Specific Use Case: Database Agent¶
Here is a practical example of an agent for working with databases:
database_tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_query",
            "description": "Executes SQL SELECT query on database",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "SQL SELECT query"
                    },
                    "table": {
                        "type": "string",
                        "description": "Table name for validation"
                    }
                },
                "required": ["query", "table"]
            }
        }
    }
]
def execute_query(query, table):
    """Safe SQL query execution with validation"""
    # Basic SQL injection prevention
    forbidden_keywords = ["DROP", "DELETE", "UPDATE", "INSERT", "ALTER"]
    if any(keyword in query.upper() for keyword in forbidden_keywords):
        return {"error": "Query contains forbidden operations"}
    # Simulate DB call
    if "users" in table.lower():
        return {
            "results": [
                {"id": 1, "name": "John Smith", "email": "[email protected]"},
                {"id": 2, "name": "Mary Johnson", "email": "[email protected]"}
            ],
            "count": 2
        }
    return {"results": [], "count": 0}
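A keyword blocklist like the one above is easy to bypass (casing tricks, comments) and can also reject legitimate queries. Where the agent only needs a few fixed lookups, it is safer to expose narrow functions that use parameterized queries rather than letting the model write raw SQL. A sketch using an in-memory SQLite database (the schema and data are invented for the demo; the key detail is the `?` placeholder, which keeps the value out of the SQL text entirely):

```python
import sqlite3

def safe_user_lookup(email):
    """Look up a user by email using a parameterized query."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'John Smith', '[email protected]')")
    # The value is bound by the driver, never interpolated into the string,
    # so input like "' OR '1'='1" cannot change the query's structure.
    rows = conn.execute(
        "SELECT id, name FROM users WHERE email = ?", (email,)
    ).fetchall()
    conn.close()
    return rows

print(safe_user_lookup("[email protected]"))  # [(1, 'John Smith')]
print(safe_user_lookup("' OR '1'='1"))    # [] -- injection attempt finds nothing
```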
Optimization and Monitoring¶
For production deployment, it’s important to implement monitoring and optimizations:
import time
import logging
from functools import wraps

def monitor_function_calls(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.time()
        function_name = func.__name__
        try:
            result = func(*args, **kwargs)
            execution_time = time.time() - start_time
            logging.info(f"Function {function_name} executed in {execution_time:.2f}s")
            return result
        except Exception as e:
            logging.error(f"Function {function_name} failed: {str(e)}")
            raise
    return wrapper

@monitor_function_calls
def get_weather_monitored(city, units="celsius"):
    # Same behavior as get_weather, with timing and error logging added
    return get_weather(city, units)
Security Aspects¶
Function Calling brings new security challenges that need to be addressed:
- Sandboxing - Isolation of functions from critical system operations
- Rate limiting - Limiting number of function calls per time unit
- Whitelisting - Explicit list of allowed operations
- Input validation - Thorough validation of all parameters
- Audit logging - Logging all function calls for analysis
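For example, rate limiting can be sketched as a sliding-window counter. This is a deliberately simple in-process toy (the class name and parameters are invented for the demo); a production deployment would typically back it with a shared store such as Redis so limits hold across processes:

```python
import time
from collections import deque

class RateLimiter:
    """Allows at most `max_calls` function calls per `window` seconds."""
    def __init__(self, max_calls, window):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # timestamps of recent calls

    def allow(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

limiter = RateLimiter(max_calls=2, window=60)
print([limiter.allow() for _ in range(3)])  # [True, True, False]
```

Checking `limiter.allow()` before dispatching each tool call caps how much damage a runaway model loop can do.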
Summary¶
Function Calling represents a fundamental evolution in LLM applications, enabling the creation of truly interactive AI agents. Successful implementation requires careful function design, robust error handling, and thorough security measures. With the growing adoption of this technology, we can expect increasingly sophisticated AI systems capable of complex interactions with the real world. The key to success is balancing functionality with security and maintaining control over what an AI agent can and cannot do.