Context engineering is the practice of building dynamic systems that provide the right information and tools, in the right format, so that an AI application can accomplish a task. Context can be characterized along two key dimensions:
  1. By mutability:
    • Static context: Immutable data that doesn’t change during execution (e.g., user metadata, database connections, tools)
    • Dynamic context: Mutable data that evolves as the application runs (e.g., conversation history, intermediate results, tool call observations)
  2. By lifetime:
    • Runtime context: Data scoped to a single run or invocation
    • Cross-conversation context: Data that persists across multiple conversations or sessions
Runtime context refers to local context: data and dependencies your code needs to run. It does not refer to:
  • The LLM context, which is the data passed into the LLM’s prompt.
  • The “context window”, which is the maximum number of tokens that can be passed to the LLM.
Runtime context is a form of dependency injection and can be used to optimize the LLM context. It lets you provide dependencies (like database connections, user IDs, or API clients) to your tools and nodes at runtime rather than hardcoding them. For example, you can use user metadata in the runtime context to fetch user preferences and feed them into the context window.
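The dependency-injection idea can be sketched in plain Python, independent of any framework (the names `RuntimeContext`, `fetch_preferences`, and `build_prompt` are illustrative, not LangGraph APIs):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RuntimeContext:
    """Static dependencies injected at invocation time."""
    user_id: str
    db: dict  # stands in for a real database connection


def fetch_preferences(ctx: RuntimeContext) -> str:
    # Read from injected dependencies instead of hardcoded globals.
    return ctx.db.get(ctx.user_id, "no preferences on file")


def build_prompt(ctx: RuntimeContext, question: str) -> str:
    # Feed data derived from runtime context into the LLM context.
    prefs = fetch_preferences(ctx)
    return f"User preferences: {prefs}\nQuestion: {question}"


ctx = RuntimeContext(user_id="u1", db={"u1": "metric units, concise answers"})
print(build_prompt(ctx, "How tall is Everest?"))
```

Because dependencies arrive as arguments rather than globals, the same code runs unchanged with a test double or a different user.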
LangGraph provides three ways to manage context, combining the mutability and lifetime dimensions:

| Context type | Description | Mutability | Lifetime | Access method |
| --- | --- | --- | --- | --- |
| Static runtime context | User metadata, tools, and database connections passed to the app at startup via the `context` argument | Static | Single run | `context` argument of `invoke`/`stream` |
| Dynamic runtime context (state) | Data that can change during a single run | Dynamic | Single run | LangGraph state object |
| Cross-conversation dynamic context (store) | Persistent data shared across multiple conversations or sessions | Dynamic | Cross-conversation | LangGraph store |

Static runtime context

Static runtime context represents immutable data, such as user metadata, tools, and database connections, passed to `invoke`/`stream` via the `context` argument when you start the application. This data does not change during execution.
from dataclasses import dataclass


@dataclass
class ContextSchema:
    user_name: str

graph.invoke(
    {"messages": [{"role": "user", "content": "hi!"}]},
    context={"user_name": "John Smith"}
)
from dataclasses import dataclass
from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest


@dataclass
class ContextSchema:
    user_name: str

@dynamic_prompt
def personalized_prompt(request: ModelRequest) -> str:
    user_name = request.runtime.context.user_name
    return f"You are a helpful assistant. Address the user as {user_name}."

agent = create_agent(
    model="claude-sonnet-4-6",
    tools=[get_weather],
    middleware=[personalized_prompt],
    context_schema=ContextSchema
)

agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    context=ContextSchema(user_name="John Smith")
)
See Agents for details.
The Runtime object can be used to access static context and other utilities, such as the active store and the stream writer. See the Runtime documentation for details.

Dynamic runtime context

Dynamic runtime context represents data that can change during a single run and is managed through the LangGraph state object. This includes conversation history, intermediate results, and values derived from tool or LLM outputs. In LangGraph, the state object acts as short-term memory.
The example below shows how to integrate state into an agent prompt. State can also be accessed by the agent's tools, which can read or update it as needed. See the tool-calling guide for details.
from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest
from langchain.agents import AgentState


class CustomState(AgentState):
    user_name: str

@dynamic_prompt
def personalized_prompt(request: ModelRequest) -> str:
    user_name = request.state.get("user_name", "User")
    return f"You are a helpful assistant. User's name is {user_name}"

agent = create_agent(
    model="claude-sonnet-4-6",
    tools=[...],
    state_schema=CustomState,
    middleware=[personalized_prompt],
)

agent.invoke({
    "messages": [{"role": "user", "content": "hi!"}],
    "user_name": "John Smith"
})
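The read-and-update pattern that tools follow can be sketched framework-free (names here are illustrative; in LangGraph, state updates returned by tools are merged by the runtime rather than applied by hand):

```python
from typing import Any


def save_user_name(state: dict[str, Any], name: str) -> dict[str, Any]:
    # A "tool" returns a state update instead of mutating globals;
    # the runtime merges the update into the run's state.
    return {"user_name": name}


def greet(state: dict[str, Any]) -> str:
    # A later step reads the value written earlier in the same run.
    return f"Hello, {state.get('user_name', 'User')}!"


state: dict[str, Any] = {"messages": []}
state.update(save_user_name(state, "John Smith"))
print(greet(state))  # Hello, John Smith!
```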
To persist state across runs, turn on memory; see the memory guide for details. This is a powerful feature that lets you persist the agent's state across multiple invocations. Otherwise, state is scoped to a single run.
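What memory buys you can be sketched in plain Python (a minimal stand-in for a checkpointer, not the LangGraph API): state survives across invocations that share a thread ID, while different threads start fresh.

```python
# Thread-scoped persistence: one saved state per thread_id.
checkpoints: dict[str, dict] = {}


def invoke(thread_id: str, message: str) -> list[str]:
    # Load (or create) the state for this thread, append, and return history.
    state = checkpoints.setdefault(thread_id, {"messages": []})
    state["messages"].append(message)
    return state["messages"]


invoke("thread-1", "hi!")
history = invoke("thread-1", "what did I say?")  # both messages present
fresh = invoke("thread-2", "hello")              # separate thread, fresh state
```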

Cross-conversation dynamic context

Cross-conversation dynamic context represents persistent, mutable data that spans multiple conversations or sessions and is managed through the LangGraph store. This includes user profiles, preferences, and past interactions. The LangGraph store acts as long-term memory across runs and can be used to read or update persistent facts (e.g., user profiles, preferences, prior interactions).

More information