As large language models (LLMs) grow more capable, RAG (Retrieval Augmented Generation) has become a key technique for improving the factual accuracy of their answers.
By retrieving up-to-date data and external documents, a model can answer more fact-based questions and is less likely to hallucinate.
LangChain's LangGraph can wire the LLM, RAG, and tool calls (Tools) together into a single agent workflow graph, which greatly improves the dynamic behavior of a Q&A system.
This post walks through a complete example of building a "RAG + Agent" Q&A system with LangChain. The code can be reused as-is, so you can get an intelligent application running quickly.
Project structure
```
llm_env.py    # initializes the LLM
rag_agent.py  # main logic combining RAG and the Agent
```
Initialize the LLM
First, llm_env.py initializes an LLM model object that the whole pipeline will use:
```python
from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o-mini", model_provider="openai")
```
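As a quick sanity check (not part of the original file), you can invoke the model once before building anything else. This assumes the OPENAI_API_KEY environment variable is already set and that llm_env.py lives in the llm_set package, matching the import used later in this post:

```python
from llm_set import llm_env

# One-off call to confirm the model and API key are working.
print(llm_env.llm.invoke("Say hello in one short sentence.").content)
```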
Building the RAG + Agent system
Import dependencies
```python
import os
import sys
import time

sys.path.append(os.getcwd())

from llm_set import llm_env
from langchain.embeddings import OpenAIEmbeddings
from langchain_postgres import PGVector
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langgraph.graph import MessagesState, StateGraph
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.graph import END
from langgraph.checkpoint.postgres import PostgresSaver
```
Initialize the LLM and embeddings
```python
llm = llm_env.llm
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
```
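If you want to confirm the embedding model responds before wiring it into the vector store, a quick check like the following works (illustrative only; assumes OPENAI_API_KEY is set). text-embedding-3-large returns 3072-dimensional vectors:

```python
vec = embeddings.embed_query("hello world")
print(len(vec))  # 3072 for text-embedding-3-large
```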
Initialize the vector store
```python
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_rag_agent_docs",
    connection="postgresql+psycopg2://postgres:123456@localhost:5433/langchainvector",
)
```
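PGVector expects a Postgres instance with the pgvector extension installed. If the store fails to initialize, a connectivity check like the sketch below can help; it is purely illustrative, reuses the same connection string, and assumes SQLAlchemy plus the psycopg2 driver named in that string are installed:

```python
import sqlalchemy

engine = sqlalchemy.create_engine(
    "postgresql+psycopg2://postgres:123456@localhost:5433/langchainvector"
)
with engine.connect() as conn:
    # Lists installed extensions; "vector" must be among them for PGVector to work.
    print(conn.execute(sqlalchemy.text("SELECT extname FROM pg_extension")).fetchall())
```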
Load the web page
url = "https://www.cnblogs.com/chenyishi/p/18926783" loader = WebBaseLoader( web_paths=(url,), ) docs = loader.load() for doc in docs: doc.metadata["source"] = url
文本分割 & 入库
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=50)
all_splits = text_splitter.split_documents(docs)

# Only ingest if this URL has not been indexed before.
existing = vector_store.similarity_search(url, k=1, filter={"source": url})
if not existing:
    _ = vector_store.add_documents(documents=all_splits)
    print("Document vectorization complete")
```
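To get a feel for what chunk_size=200 and chunk_overlap=50 do, here is a tiny illustration on throwaway text (not from the post): each resulting chunk is at most 200 characters and consecutive chunks share up to 50 characters.

```python
demo = Document(page_content="A" * 450, metadata={"source": "demo"})
chunks = text_splitter.split_documents([demo])
print([len(c.page_content) for c in chunks])  # every length <= 200
```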
Define the RAG retrieval tool
Using the @tool decorator, define a document-retrieval tool that the agent can call dynamically:
```python
@tool(response_format="content_and_artifact")
def retrieve(query: str) -> tuple[str, list[Document]]:
    """Retrieve relevant documents from the vector store."""
    retrieved_docs = vector_store.similarity_search(query, k=2)
    if not retrieved_docs:
        return "No relevant documents found.", []
    return "\n\n".join(
        f"Source: {doc.metadata}\nContent: {doc.page_content}"
        for doc in retrieved_docs
    ), retrieved_docs
```
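With response_format="content_and_artifact", the first element of the returned tuple is the text the LLM sees, while the second is kept as the ToolMessage artifact. You can try the tool outside the graph; in recent langchain-core versions, invoking it with a plain string returns only the content half (the artifact is attached only when the tool runs as part of a tool call):

```python
# Direct invocation for debugging; prints the formatted sources and snippets.
print(retrieve.invoke("What does the linked blog post cover?"))
```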
Define the Agent graph nodes
LLM tool-calling node
```python
def query_or_respond(state: MessagesState):
    # The model either answers directly or emits a tool call for `retrieve`.
    llm_with_tools = llm.bind_tools([retrieve])
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}
```
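Called on its own, this node returns an AIMessage that either answers directly or carries a tool_calls entry pointing at retrieve. A small illustrative check (not part of the original post):

```python
out = query_or_respond({"messages": [HumanMessage(content="What is this blog post about?")]})
print(out["messages"][0].tool_calls)  # non-empty when the model decides to retrieve
```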
Tool node
```python
tools = ToolNode([retrieve])
```
Response-generation node
```python
def generate(state: MessagesState):
    # Collect the tool messages produced by the most recent retrieval.
    recent_tool_messages = []
    for message in reversed(state["messages"]):
        if message.type == "tool":
            recent_tool_messages.append(message)
        else:
            break
    tool_messages = recent_tool_messages[::-1]

    # Use the retrieved content as the system prompt.
    system_message_content = "\n\n".join(msg.content for msg in tool_messages)

    # Keep human/system turns plus AI turns that were not tool calls.
    conversation_messages = [
        message
        for message in state["messages"]
        if message.type in ("human", "system")
        or (message.type == "ai" and not message.tool_calls)
    ]
    prompt = [SystemMessage(system_message_content)] + conversation_messages

    response = llm.invoke(prompt)
    return {"messages": [response]}
```
Assemble the Agent graph
```python
graph_builder = StateGraph(MessagesState)

graph_builder.add_node(query_or_respond)
graph_builder.add_node(tools)
graph_builder.add_node(generate)

graph_builder.set_entry_point("query_or_respond")
graph_builder.add_conditional_edges(
    "query_or_respond",
    tools_condition,
    path_map={END: END, "tools": "tools"},
)
graph_builder.add_edge("tools", "generate")
graph_builder.add_edge("generate", END)
```
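tools_condition routes on the most recent AI message: if it contains tool calls, the graph moves to the "tools" node; otherwise the run ends. A quick way to see the routing outside the graph (illustrative, assumes a recent langgraph version):

```python
from langchain_core.messages import AIMessage

print(tools_condition({"messages": [AIMessage(content="plain answer")]}))  # "__end__"
print(tools_condition({"messages": [
    AIMessage(content="", tool_calls=[{"name": "retrieve", "args": {"query": "x"}, "id": "1"}])
]}))  # "tools"
```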
启用 Checkpoint & 运行流程
Database-backed checkpointer
DB_URI = "postgresql://postgres:123456@localhost:5433/langchaindemo?sslmode=disable" with PostgresSaver.from_conn_string(DB_URI) as checkpointer: checkpointer.setup() graph = graph_builder.compile(checkpointer=checkpointer)
Start the interactive loop
```python
    # Still inside the `with PostgresSaver...` block, so the checkpointer stays open.
    input_thread_id = input("Enter a thread_id: ")
    time_str = time.strftime("%Y%m%d", time.localtime())
    config = {"configurable": {"thread_id": f"rag-{time_str}-demo-{input_thread_id}"}}

    print("Ask a question, or type 'exit' to quit.")
    while True:
        query = input("You: ")
        if query.strip().lower() == "exit":
            break

        response = graph.invoke({"messages": [HumanMessage(content=query)]}, config=config)
        print(response)
```
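graph.invoke returns the full state dict. If you only want the assistant's final reply, pull out the last message instead of printing everything (a small illustrative tweak):

```python
# Inside the while-loop, instead of print(response):
answer = response["messages"][-1].content
print("AI:", answer)
```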
Summary
This post showed how to use LangChain + LangGraph to combine:
- an LLM
- embedding-based retrieval (RAG)
- an agent that calls tools on demand
- graph-based workflow orchestration
- checkpoint persistence
into an intelligent Q&A system. By pairing the retrieval tool with the agent mechanism, the LLM can decide for itself when to retrieve, which strengthens how it grounds answers in source material, reduces hallucinations, and makes the approach practical to deploy.