create_agent provides a production-ready agent implementation.
An LLM agent runs tools in a loop to accomplish a goal.
The agent runs until a stopping condition is met, which occurs when the model emits its final output or an iteration limit is reached.
create_agent builds a graph-based agent runtime using LangGraph. A graph consists of nodes (steps) and edges (connections) that define how the agent processes information. The agent moves through this graph, executing nodes such as the model node (which calls the model), the tool node (which executes tools), or middleware nodes. Learn more in the Graph API documentation.

Core components
Model
The model is the agent's reasoning engine. It can be specified in several ways, with support for both static and dynamic model selection.

Static model
A static model is configured once when the agent is created and remains unchanged for the duration of execution. This is the most common and straightforward approach. To initialize a static model, you can pass a ChatOpenAI instance. See Chat models for other available chat model classes.
Model instances give you full control over configuration parameters such as temperature, max_tokens, timeouts, base_url, and other provider-specific settings. See the reference for available parameters and methods.
Dynamic model
A dynamic model is selected at runtime based on the current state and context. This enables sophisticated routing logic and cost optimization. To use a dynamic model, create middleware with the @wrap_model_call decorator that modifies the model in the request:
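As a concrete illustration of the routing logic such middleware can implement, here is a self-contained sketch in plain Python. The model names, the character threshold, and the choose_model helper are all hypothetical; this is not LangChain's actual @wrap_model_call request API, only the decision logic that would live inside it.

```python
# Illustrative model-routing logic: pick a cheaper model for short
# conversations and a stronger one once the context grows.
# Model names and the threshold are hypothetical.

def choose_model(messages: list[dict]) -> str:
    """Return a model identifier based on the current conversation."""
    # Rough proxy for context size: total characters across all messages.
    total_chars = sum(len(m.get("content", "")) for m in messages)
    if total_chars > 4_000:
        return "gpt-4o"        # long context: use the stronger model
    return "gpt-4o-mini"       # short exchange: the cheaper model suffices

# Usage: a short conversation routes to the cheaper model.
model_name = choose_model([{"role": "user", "content": "hi"}])
```

Inside real middleware, the chosen identifier would be used to swap the model on the request before handing it to the rest of the pipeline.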
Tools
Tools give the agent the ability to take action. Agents go beyond simple model-plus-tools binding by facilitating:
- Sequences of multiple tool calls, triggered by a single prompt
- Parallel tool calls when appropriate
- Dynamic tool selection based on previous results
- Tool retry logic and error handling
- State persistence across tool calls
Static tools
Static tools are defined when the agent is created and remain unchanged throughout execution. This is the most common and straightforward approach. To define an agent with static tools, pass a list of tools to the agent.

Dynamic tools
With dynamic tools, the agent modifies the set of available tools at runtime rather than fixing it upfront. Not every tool suits every situation: too many tools can overwhelm the model (exceeding its context) and increase errors, while too few limit its capabilities. Dynamic tool selection lets the agent adapt its available tool set based on authentication state, user permissions, feature flags, or conversation stage. There are two approaches, depending on whether all tools are known ahead of runtime:
- Filtering pre-registered tools
- Runtime tool registration
When all possible tools are known at agent creation time, you can pre-register them and dynamically filter which tools are exposed to the model based on state, permissions, or context. This approach is best when:
- All possible tools are known at compile/startup time
- You want to filter based on permissions, feature flags, or conversation state
- Tools are static but their availability is dynamic

Tools can be filtered based on:
- State
- Store
- Runtime Context

For example, enable advanced tools only after certain conversation milestones have been reached:
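The filtering decision itself is ordinary logic over the agent's state. Below is a minimal, self-contained sketch of that logic in plain Python; the tool names, the milestone_reached state key, and the select_tools helper are hypothetical, and in LangChain this filtering would live inside middleware rather than a free function.

```python
# Illustrative tool-filtering logic: all tools are pre-registered, but only
# a subset is exposed to the model on each step. Names are hypothetical.

def select_tools(all_tools: dict[str, object], state: dict) -> list[object]:
    """Filter the pre-registered tools based on conversation state."""
    exposed = [all_tools["search"], all_tools["check_inventory"]]
    # Advanced tools become available only after a milestone is reached.
    if state.get("milestone_reached"):
        exposed.append(all_tools["advanced_analytics"])
    return exposed

# Stand-ins for real tool objects.
all_tools = {
    "search": "search",
    "check_inventory": "check_inventory",
    "advanced_analytics": "advanced_analytics",
}

basic = select_tools(all_tools, {})                          # 2 tools exposed
unlocked = select_tools(all_tools, {"milestone_reached": True})  # 3 tools
```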
Tool error handling
To customize how tool errors are handled, use the @wrap_tool_call decorator to create middleware:
The agent returns a ToolMessage with the custom error message when a tool fails:
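To make the error-handling pattern concrete, here is a self-contained sketch of retry-then-report logic in plain Python. The call_tool_with_retries helper and the flaky example tool are hypothetical; in LangChain, equivalent logic would sit inside @wrap_tool_call middleware and the final error string would be wrapped in a ToolMessage.

```python
# Illustrative error-handling pattern: retry a flaky tool a few times,
# then surface a readable error message instead of raising.

def call_tool_with_retries(tool, args: dict, max_retries: int = 2):
    """Run a tool, retrying on failure; return an error string as a last resort."""
    for attempt in range(max_retries + 1):
        try:
            return tool(**args)
        except Exception as exc:
            if attempt == max_retries:
                # In LangChain this text would become a ToolMessage,
                # letting the model see and react to the failure.
                return f"Tool failed after {max_retries + 1} attempts: {exc}"

# A tool that fails once, then succeeds.
calls = {"n": 0}
def flaky(query: str) -> str:
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient network error")
    return f"results for {query}"

result = call_tool_with_retries(flaky, {"query": "headphones"})
```

Returning a message instead of raising keeps the agent loop alive, so the model can retry differently or explain the failure to the user.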
Tool use in the ReAct loop
Agents follow the ReAct (“Reasoning + Acting”) pattern, alternating between brief reasoning steps with targeted tool calls and feeding the resulting observations into subsequent decisions until they can deliver a final answer.
Example of ReAct loop
Prompt: Identify the current most popular wireless headphones and verify availability.
- Reasoning: “Popularity is time-sensitive, I need to use the provided search tool.”
- Acting: Call search_products("wireless headphones")
- Reasoning: “I need to confirm availability for the top-ranked item before answering.”
- Acting: Call check_inventory("WH-1000XM5")
- Reasoning: “I have the most popular model and its stock status. I can now answer the user’s question.”
- Acting: Produce final answer
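The walkthrough above can be sketched as a tiny, self-contained ReAct loop. Everything here is stubbed: the "model" is a hand-written decision function and the tools return canned strings, so the example only demonstrates the reason-act-observe control flow, not real model or tool behavior.

```python
# Minimal sketch of the ReAct loop with stubbed model and tools.
# All names and canned responses are hypothetical.

def search_products(query: str) -> str:
    return "WH-1000XM5"                       # canned "most popular" result

def check_inventory(sku: str) -> str:
    return f"{sku}: in stock"

TOOLS = {"search_products": search_products, "check_inventory": check_inventory}

def model(history: list[str]):
    """Stubbed reasoning step: decide the next action from what we know so far."""
    if not any("WH-1000XM5" in h for h in history):
        return ("call", "search_products")     # first, find the product
    if not any("in stock" in h for h in history):
        return ("call", "check_inventory")     # then, verify availability
    return ("answer", None)                    # enough information to answer

def react_loop(prompt: str, max_steps: int = 5) -> str:
    history = [prompt]
    for _ in range(max_steps):                 # stop condition: iteration limit
        action, tool_name = model(history)     # Reasoning
        if action == "answer":                 # stop condition: final output
            return f"Most popular: WH-1000XM5 ({history[-1]})"
        arg = ("wireless headphones" if tool_name == "search_products"
               else "WH-1000XM5")
        history.append(TOOLS[tool_name](arg))  # Acting, then observe the result
    return "iteration limit reached"

answer = react_loop("Identify the most popular wireless headphones and verify availability.")
```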
System prompt
You can shape how your agent approaches tasks by providing a prompt. The system_prompt parameter can be provided as a string:
When no system_prompt is provided, the agent infers its task from the messages directly.
The system_prompt parameter accepts either a str or a SystemMessage. Using a SystemMessage gives you more control over the prompt structure, which is useful for provider-specific features like Anthropic’s prompt caching:
The cache_control field with {"type": "ephemeral"} tells Anthropic to cache that content block, reducing latency and costs for repeated requests that use the same system prompt.
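The content-block shape looks like the following sketch. Only the block structure is shown; the surrounding SystemMessage and agent wiring are omitted, and the instruction text is a placeholder.

```python
# Shape of a system-message content block using Anthropic prompt caching.
# The long, stable instruction text is a placeholder.

system_content = [
    {
        "type": "text",
        "text": "You are a helpful assistant. <long, stable instructions>",
        # Marks this block as cacheable on Anthropic's side.
        "cache_control": {"type": "ephemeral"},
    }
]
```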
Dynamic system prompt
For more advanced use cases where you need to modify the system prompt based on runtime context or agent state, you can use middleware. The @dynamic_prompt decorator creates middleware that generates system prompts based on the model request:
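The prompt-building logic itself is plain string construction over runtime context, as in this self-contained sketch. The build_system_prompt helper and the user_role context field are hypothetical; in LangChain this function body would live inside middleware created with @dynamic_prompt.

```python
# Illustrative dynamic-prompt logic: derive the system prompt from
# runtime context. The context fields are hypothetical.

def build_system_prompt(context: dict) -> str:
    base = "You are a helpful assistant."
    if context.get("user_role") == "expert":
        return base + " Use precise, technical language."
    return base + " Explain concepts simply and avoid jargon."

expert_prompt = build_system_prompt({"user_role": "expert"})
novice_prompt = build_system_prompt({})
```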
Name
Set an optional name for the agent. This is used as the node identifier when adding the agent as a subgraph in multi-agent systems:
Invocation
You can invoke an agent by passing an update to its State. All agents include a sequence of messages in their state; to invoke the agent, pass a new message. Agents can be run with both stream and invoke.
Advanced concepts
Structured output
In some situations, you may want the agent to return an output in a specific format. LangChain provides strategies for structured output via the response_format parameter.
ToolStrategy
ToolStrategy uses artificial tool calling to generate structured output. This works with any model that supports tool calling. ToolStrategy should be used when provider-native structured output (via ProviderStrategy) is not available or reliable.
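The core idea behind tool-based structured output is that the schema is presented to the model as if it were a tool, and the model's "tool call" arguments are then validated against that schema. The following self-contained sketch shows only that validation step; the parse_structured_call helper is hypothetical, and ToolStrategy handles the real wiring in LangChain.

```python
# Illustrative core of tool-based structured output: validate a model
# "tool call" payload against a schema's declared fields.

from typing import TypedDict

class ContactInfo(TypedDict):
    name: str
    email: str

def parse_structured_call(args: dict) -> ContactInfo:
    """Check that the payload supplies every field the schema declares."""
    missing = set(ContactInfo.__annotations__) - set(args)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {"name": str(args["name"]), "email": str(args["email"])}

contact = parse_structured_call({"name": "Ada", "email": "ada@example.com"})
```

When validation fails, a real implementation can feed the error back to the model so it can retry with corrected arguments.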
ProviderStrategy
ProviderStrategy uses the model provider’s native structured output generation. This is more reliable but only works with providers that support native structured output:
As of langchain 1.0, simply passing a schema (e.g., response_format=ContactInfo) will default to ProviderStrategy if the model supports native structured output. It will fall back to ToolStrategy otherwise.

Memory
Agents maintain conversation history automatically through the message state. You can also configure the agent to use a custom state schema to remember additional information during the conversation. Information stored in the state can be thought of as the agent's short-term memory. Custom state schemas must extend AgentState as a TypedDict.
There are two ways to define custom state:
- Via middleware (preferred)
- Via state_schema on create_agent
Defining state via middleware
Use middleware to define custom state when the state needs to be accessed by specific middleware hooks and the tools attached to that middleware.

Defining state via state_schema
Use the state_schema parameter as a shortcut to define custom state that is only used in tools.
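A custom state schema is an ordinary TypedDict extension, as in this self-contained sketch. The real AgentState comes from LangChain; a minimal stub stands in for it here so the example runs on its own, and the user_name field is hypothetical.

```python
# Sketch of a custom state schema as a TypedDict.
# AgentState is stubbed; in LangChain you would extend the real one.

from typing import TypedDict

class AgentState(TypedDict):          # stub for LangChain's AgentState
    messages: list

class CustomState(AgentState):
    user_name: str                    # extra short-term memory field

state: CustomState = {"messages": [], "user_name": "Ada"}
```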
As of langchain 1.0, custom state schemas must be TypedDict types. Pydantic models and dataclasses are no longer supported. See the v1 migration guide for more details.

Defining custom state via middleware is preferred over defining it via state_schema on create_agent because it allows you to keep state extensions conceptually scoped to the relevant middleware and tools. state_schema is still supported on create_agent for backwards compatibility.

Streaming
We’ve seen how the agent can be called with invoke to get a final response. If the agent executes multiple steps, this may take a while. To show intermediate progress, we can stream back messages as they occur.
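The difference between the two modes is the shape of the consumption loop, as in this self-contained sketch. The run_agent_stream generator and its chunk format are hypothetical stand-ins; in LangChain you would iterate over the agent's stream method instead of calling invoke.

```python
# Illustrative streaming pattern: yield intermediate messages as they are
# produced, instead of returning only the final answer.

def run_agent_stream(steps: list[str]):
    """Stubbed agent run that yields progress chunks, then the final answer."""
    for step in steps:
        yield {"type": "progress", "content": step}
    yield {"type": "final", "content": "done"}

# Consume chunks as they arrive (e.g., to update a UI incrementally).
chunks = list(run_agent_stream(["calling search tool", "checking inventory"]))
```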
Middleware
Middleware provides powerful extensibility for customizing agent behavior at different stages of execution. You can use middleware to:
- Process state before the model is called (e.g., message trimming, context injection)
- Modify or validate the model’s response (e.g., guardrails, content filtering)
- Handle tool execution errors with custom logic
- Implement dynamic model selection based on state or context
- Add custom logging, monitoring, or analytics
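Middleware of all these kinds shares one underlying shape: a wrapper that runs code before and after an inner handler. This self-contained sketch shows that onion-style composition with a logging wrapper; the with_logging helper is hypothetical, and LangChain's decorators compose real model and tool handlers in an analogous way.

```python
# Illustrative middleware pattern: wrap a handler so extra behavior runs
# before and after it, without the handler knowing.

def with_logging(handler, log: list):
    def wrapped(request):
        log.append(f"before: {request}")   # pre-processing hook
        response = handler(request)        # delegate to the inner handler
        log.append(f"after: {response}")   # post-processing hook
        return response
    return wrapped

log: list[str] = []
model_call = with_logging(lambda req: req.upper(), log)  # stubbed "model call"
result = model_call("hello")
```

Because wrappers take and return handlers, several middleware can be stacked, each seeing the request on the way in and the response on the way out.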