Using an AI coding assistant?
- Install the LangChain Docs MCP server to give your agent access to the latest LangChain documentation and examples.
- Install LangChain Skills to improve your agent's performance on LangChain ecosystem tasks.
Prerequisites
For these examples, you will need to:
- Install the LangChain packages
- Set up a Claude (Anthropic) account and obtain an API key
- Set the `ANTHROPIC_API_KEY` environment variable in your terminal
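The prerequisites above can be handled from a terminal. A minimal setup sketch (the package names assume the standard `langchain` and `langchain-anthropic` distributions; the key shown is a placeholder, not a real credential):

```shell
# Install LangChain and the Anthropic model integration
pip install -U langchain langchain-anthropic

# Make your Anthropic API key available to the examples below
export ANTHROPIC_API_KEY="sk-ant-..."  # placeholder; substitute your own key
```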
Build a basic agent
Start by creating a simple agent that can answer questions and call tools. The agent uses Claude Sonnet 4.6 as its language model, a basic weather function as its tool, and a short prompt to guide its behavior.

```python
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4-6",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
```
To learn how to trace your agent with LangSmith, see the LangSmith documentation.
Build a real-world agent
Next, build a practical weather forecasting agent that demonstrates key production concepts:
- A detailed system prompt for better agent behavior
- Tools that integrate external data
- Model configuration for consistent responses
- Structured output for predictable results
- Conversational memory for chat-like interactions
- Creating and running the agent to test the fully featured result
Define the system prompt
The system prompt defines your agent's role and behavior. Keep it specific and actionable:

```python
SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.

You have access to two tools:
- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""
```
Create tools
Tools let the model interact with external systems by calling functions you define.
Tools can rely on runtime context and can also interact with agent memory. Note how the `get_user_location` tool below uses runtime context:

```python
from dataclasses import dataclass
from langchain.tools import tool, ToolRuntime

@tool
def get_weather_for_location(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

@dataclass
class Context:
    """Custom runtime context schema."""
    user_id: str

@tool
def get_user_location(runtime: ToolRuntime[Context]) -> str:
    """Retrieve user information based on user ID."""
    user_id = runtime.context.user_id
    return "Florida" if user_id == "1" else "SF"
```
Configure the model
Configure your language model with the right parameters for your use case. Initialization parameters vary by model and provider; see their reference pages for details.

```python
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-6",
    temperature=0.5,
    timeout=10,
    max_tokens=1000,
)
```
Define a response format
Optionally, define a structured response format if you need the agent's responses to match a specific schema.

```python
from dataclasses import dataclass

# We use a dataclass here, but Pydantic models are also supported.
@dataclass
class ResponseFormat:
    """Response schema for the agent."""
    # A punny response (always required)
    punny_response: str
    # Any interesting information about the weather if available
    weather_conditions: str | None = None
```
Create and run the agent
Now assemble all the components into your agent and run it!

```python
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents.structured_output import ToolStrategy

# Set up memory so the agent can hold a multi-turn conversation
checkpointer = InMemorySaver()

agent = create_agent(
    model=model,
    system_prompt=SYSTEM_PROMPT,
    tools=[get_user_location, get_weather_for_location],
    context_schema=Context,
    response_format=ToolStrategy(ResponseFormat),
    checkpointer=checkpointer,
)

# `thread_id` is a unique identifier for a given conversation.
config = {"configurable": {"thread_id": "1"}}

response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
    config=config,
    context=Context(user_id="1"),
)
print(response["structured_response"])
# ResponseFormat(
#     punny_response="Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
#     weather_conditions="It's always sunny in Florida!"
# )

# Note that we can continue the conversation using the same `thread_id`.
response = agent.invoke(
    {"messages": [{"role": "user", "content": "thank you!"}]},
    config=config,
    context=Context(user_id="1"),
)
print(response["structured_response"])
# ResponseFormat(
#     punny_response="You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
#     weather_conditions=None
# )
```
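The checkpointer keys saved conversation state by `thread_id`, which is why the second `invoke` above continues the first conversation while a different `thread_id` would start fresh. A toy illustration of that idea in plain Python (a stand-in for the concept, not LangChain's actual checkpointer implementation):

```python
from collections import defaultdict

# Toy stand-in for a checkpointer: message history keyed by thread_id.
memory: dict[str, list[str]] = defaultdict(list)

def invoke(thread_id: str, user_message: str) -> list[str]:
    """Append the message to this thread's history and return the full history."""
    memory[thread_id].append(user_message)
    return memory[thread_id]

invoke("1", "what is the weather outside?")
history = invoke("1", "thank you!")  # same thread -> sees both messages
other = invoke("2", "hello")         # new thread -> fresh history

print(history)  # ['what is the weather outside?', 'thank you!']
print(other)    # ['hello']
```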
Full example code
```python
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain.tools import tool, ToolRuntime
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents.structured_output import ToolStrategy

# Define system prompt
SYSTEM_PROMPT = """You are an expert weather forecaster, who speaks in puns.

You have access to two tools:
- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location."""

# Define context schema
@dataclass
class Context:
    """Custom runtime context schema."""
    user_id: str

# Define tools
@tool
def get_weather_for_location(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

@tool
def get_user_location(runtime: ToolRuntime[Context]) -> str:
    """Retrieve user information based on user ID."""
    user_id = runtime.context.user_id
    return "Florida" if user_id == "1" else "SF"

# Configure model
model = init_chat_model(
    "claude-sonnet-4-6",
    temperature=0,
)

# Define response format
@dataclass
class ResponseFormat:
    """Response schema for the agent."""
    # A punny response (always required)
    punny_response: str
    # Any interesting information about the weather if available
    weather_conditions: str | None = None

# Set up memory
checkpointer = InMemorySaver()

# Create agent
agent = create_agent(
    model=model,
    system_prompt=SYSTEM_PROMPT,
    tools=[get_user_location, get_weather_for_location],
    context_schema=Context,
    response_format=ToolStrategy(ResponseFormat),
    checkpointer=checkpointer,
)

# Run agent
# `thread_id` is a unique identifier for a given conversation.
config = {"configurable": {"thread_id": "1"}}

response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather outside?"}]},
    config=config,
    context=Context(user_id="1"),
)
print(response["structured_response"])
# ResponseFormat(
#     punny_response="Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
#     weather_conditions="It's always sunny in Florida!"
# )

# Note that we can continue the conversation using the same `thread_id`.
response = agent.invoke(
    {"messages": [{"role": "user", "content": "thank you!"}]},
    config=config,
    context=Context(user_id="1"),
)
print(response["structured_response"])
# ResponseFormat(
#     punny_response="You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
#     weather_conditions=None
# )
```
You now have an agent that can:
- Understand context and remember the conversation
- Use multiple tools intelligently
- Provide structured responses in a consistent format
- Handle user-specific information through context
- Maintain conversation state across interactions
Connect these docs to Claude, VSCode, and more via MCP for real-time answers.

