Introduction to LangChain: Building AI Applications¶
LangChain is a powerful Python framework for developing applications that use Large Language Models (LLMs). This tutorial walks you through the core concepts and practical examples that will help you quickly start building your own AI solutions.
Installation and Basic Setup¶
To start working with LangChain, install the necessary dependencies (faiss-cpu and langchain-community are needed later for the RAG section):
pip install langchain langchain-openai langchain-community python-dotenv faiss-cpu
Create a .env file to store API keys:
OPENAI_API_KEY=your_openai_api_key_here
Basic Configuration¶
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
load_dotenv()
# LLM initialization
llm = ChatOpenAI(
model="gpt-3.5-turbo",
temperature=0.7,
api_key=os.getenv("OPENAI_API_KEY")
)
Working with Prompt Templates¶
Prompt templates let you build prompts dynamically from parameters, which is key for scalable applications:
from langchain.prompts import PromptTemplate, ChatPromptTemplate
# Simple template
simple_template = PromptTemplate(
input_variables=["product"],
template="Write a marketing description for {product}"
)
# Chat template with multiple roles
chat_template = ChatPromptTemplate.from_messages([
("system", "You are an expert in {domain}"),
("human", "Explain {concept} to me in simple terms")
])
# Usage
prompt = chat_template.format_messages(
domain="machine learning",
concept="gradient descent"
)
response = llm.invoke(prompt)
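The invoke call returns a message object; its text lives in the content attribute. The simple template from above can be used the same way; a quick sketch (the product name is arbitrary):
print(response.content)
# PromptTemplate.format produces a plain string prompt
simple_prompt = simple_template.format(product="smart watch")
print(llm.invoke(simple_prompt).content)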
Chains: Connecting Components¶
Chains are the heart of the LangChain architecture: they connect individual components into more complex workflows:
from langchain.chains import LLMChain, SimpleSequentialChain
# Basic chain
llm_chain = LLMChain(
llm=llm,
prompt=simple_template
)
result = llm_chain.run("smartphone")
print(result)
# Sequential chain - the output of the first chain becomes the input of the second
template1 = PromptTemplate(
input_variables=["concept"],
template="Explain the concept of {concept} in one sentence"
)
template2 = PromptTemplate(
input_variables=["explanation"],
template="Create a practical example for: {explanation}"
)
chain1 = LLMChain(llm=llm, prompt=template1)
chain2 = LLMChain(llm=llm, prompt=template2)
overall_chain = SimpleSequentialChain(
chains=[chain1, chain2],
verbose=True
)
result = overall_chain.run("blockchain")
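In newer LangChain releases the same flow can also be composed with the LCEL pipe operator instead of the legacy chain classes; a minimal sketch reusing the templates above (the variable names are mine):
from langchain_core.output_parsers import StrOutputParser
explain_chain = template1 | llm | StrOutputParser()
# A dict in a pipe is coerced to a RunnableParallel, mapping output to the next input variable
example_chain = {"explanation": explain_chain} | template2 | llm | StrOutputParser()
result = example_chain.invoke({"concept": "blockchain"})
print(result)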
Memory: Maintaining Conversation Context¶
For chatbots and interactive applications, it's crucial to maintain the context of previous messages:
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
# Memory initialization
memory = ConversationBufferMemory()
# Conversational chain with memory
conversation = ConversationChain(
llm=llm,
memory=memory,
verbose=True
)
# Dialogue
response1 = conversation.predict(input="What is my name?")
print(response1)  # the model has no context yet, so it cannot know
response2 = conversation.predict(input="My name is Paul")
print(response2)
response3 = conversation.predict(input="What is my name?")
print(response3)  # thanks to memory, the LLM now remembers the name Paul
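ConversationBufferMemory stores the entire history, so token costs grow with long dialogues. A common alternative is ConversationBufferWindowMemory, which keeps only the last k exchanges; a minimal sketch:
from langchain.memory import ConversationBufferWindowMemory
# Keep only the last 3 exchanges in the prompt
window_memory = ConversationBufferWindowMemory(k=3)
conversation_windowed = ConversationChain(llm=llm, memory=window_memory)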
RAG: Retrieval-Augmented Generation¶
RAG enables an LLM to work with external data. Let's implement a document querying system:
Data Preparation and Embeddings¶
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
# Load document
loader = TextLoader("document.txt", encoding="utf-8")
documents = loader.load()
# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200
)
chunks = text_splitter.split_documents(documents)
# Create embeddings
embeddings = OpenAIEmbeddings()
# Create vector store
vectorstore = FAISS.from_documents(chunks, embeddings)
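Before wiring the store into a chain, you can sanity-check retrieval directly; a quick sketch (the query is just an example):
docs = vectorstore.similarity_search("advantages of microservices", k=3)
for doc in docs:
    print(doc.page_content[:100])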
QA System with RAG¶
from langchain.chains import RetrievalQA
# QA chain with retrieval
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
return_source_documents=True
)
# Query
query = "What are the main advantages of microservices?"
result = qa_chain({"query": query})
print("Answer:", result["result"])
print("\nSource documents:")
for doc in result["source_documents"]:
    print(f"- {doc.page_content[:100]}...")
Agents: Autonomous Decision Making¶
Agents can dynamically decide which tools to use based on user input:
from langchain.agents import AgentType, initialize_agent, Tool
def get_weather(city):
    """Returns mock weather data (a real app would call a weather API here)"""
    return f"In {city} it's sunny today, 22°C"

def calculate(expression):
    """Simple calculator. Note: eval is unsafe for untrusted input; use a proper math parser in production"""
    try:
        result = eval(expression)
        return f"Result: {result}"
    except Exception:
        return "Error in calculation"
# Define tools
tools = [
Tool(
name="Weather",
func=get_weather,
description="Gets weather information for specified city"
),
Tool(
name="Calculator",
func=calculate,
description="Performs mathematical calculation"
)
]
# Initialize agent
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
# Use agent
response = agent.run("What's the weather in Prague and what is 15 * 7?")
print(response)
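Tools can also be defined with the @tool decorator, which derives the tool's name and description from the function and its docstring; an equivalent sketch for the weather tool:
from langchain.tools import tool

@tool
def weather(city: str) -> str:
    """Gets weather information for the specified city."""
    return f"In {city} it's sunny today, 22°C"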
Practical Tips for Production Use¶
Error Handling and Retry Logic¶
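LLM API calls can fail transiently (rate limits, timeouts, network errors), so it pays to wrap them in exponential backoff; the tenacity library makes this straightforward: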
from tenacity import retry, stop_after_attempt, wait_random_exponential
@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
def llm_with_retry(prompt):
    try:
        return llm.invoke(prompt)
    except Exception as e:
        print(f"Error: {e}")
        raise
Cost Optimization¶
- Use cache for repeated queries
- Optimize prompt lengths
- Implement rate limiting
- Monitor token consumption (see the sketch below)
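A minimal sketch of the first and last points, using LangChain's built-in in-memory cache and the OpenAI token-counting callback (a production system would typically use a persistent cache such as Redis):
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache
from langchain.callbacks import get_openai_callback

# Cache identical prompts in memory - repeated calls skip the API
set_llm_cache(InMemoryCache())

# Track token usage and cost for a block of calls
with get_openai_callback() as cb:
    llm.invoke("Explain caching in one sentence")
    llm.invoke("Explain caching in one sentence")  # served from the cache
print(f"Tokens used: {cb.total_tokens}, cost: ${cb.total_cost:.4f}")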
Summary¶
LangChain provides a robust framework for building advanced AI applications. Key components such as prompt templates, chains, memory, and RAG make it possible to build scalable solutions. For production deployment, don't forget error handling, monitoring, and cost optimization. This tutorial covered the fundamentals; now you can experiment with your own use cases and gradually expand the functionality of your applications.