Middleware designed for Anthropic's Claude models. Learn more about middleware.

| Middleware | Description |
| --- | --- |
| Prompt caching | Reduces costs by caching repeated prompt prefixes |
| Bash tool | Runs Claude's native bash tool with local command execution |
| Text editor | Provides Claude's text editor tool for file editing |
| Memory | Provides Claude's memory tool for persistent agent memory |
| File search | Search tools for state-based filesystems |

Middleware vs. tools

langchain-anthropic offers two ways to use Claude's native tools:
  • Middleware (this page): production-ready implementations with built-in execution, state management, and security policies
  • Tools (via bind_tools): low-level building blocks where you supply the execution logic yourself

When to use which

| Use case | Recommended approach | Why |
| --- | --- | --- |
| Production agents with bash | Middleware | Persistent sessions, Docker isolation, output redaction |
| State-based file editing | Middleware | Built-in LangGraph state persistence |
| Filesystem file editing | Middleware | Writes to disk with path validation |
| Custom execution logic | Tools | Full control over execution |
| Quick prototyping | Tools | Simpler, with your own callback |
| Non-agent usage via bind_tools | Tools | Middleware requires create_agent |

Feature comparison

| Feature | Middleware | Tools |
| --- | --- | --- |
| Works with create_agent | ✅ | ✅ |
| Works with bind_tools | ❌ | ✅ |
| Built-in state management | ✅ | ❌ |
| Custom execution callback | ❌ | ✅ |
Using middleware (the batteries-included approach):
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import DockerExecutionPolicy

# Production-ready, with Docker isolation, session management, and more
agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root="/workspace",
            execution_policy=DockerExecutionPolicy(image="python:3.11"),
            startup_commands=["pip install pandas"],
        ),
    ],
)
Using tools (bring your own execution logic):
import subprocess

from anthropic.types.beta import BetaToolBash20250124Param
from langchain_anthropic import ChatAnthropic
from langchain.agents import create_agent
from langchain.tools import tool

tool_spec = BetaToolBash20250124Param(
    name="bash",
    type="bash_20250124",
    strict=True,
)

@tool(extras={"provider_tool_definition": tool_spec})
def bash(*, command: str, restart: bool = False, **kw):
    """Execute a bash command."""
    if restart:
        return "Bash session restarted"
    try:
        result = subprocess.run(
            command,
            shell=True,
            capture_output=True,
            text=True,
            timeout=30,
        )
        return result.stdout + result.stderr
    except Exception as e:
        return f"Error: {e}"


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[bash],
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "List files in this directory"}]}
)
print(result["messages"][-1].content)

Prompt caching

Reduce costs and latency by caching static or repeated prompt content (such as system prompts, tool definitions, and conversation history) on Anthropic's servers. The middleware implements a conversational caching strategy that places a cache breakpoint after the latest message, so the entire conversation history (including the latest user message) can be cached and reused on subsequent API calls. Prompt caching is a good fit when:
  • Your application has a long, static system prompt that does not change between requests
  • Your agent has many tool definitions that stay the same across invocations
  • Earlier message history in a conversation is reused across multiple turns
  • You run a high-traffic deployment where API cost and latency matter
Learn more about Anthropic's prompt caching strategies and limitations.
API reference: AnthropicPromptCachingMiddleware
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    system_prompt="<Your long system prompt here>",
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
)
  • type (string, default: "ephemeral"): The cache type. Only 'ephemeral' is currently supported.
  • ttl (string, default: "5m"): Time-to-live for cached content. Valid values: '5m' or '1h'.
  • min_messages_to_cache (number, default: 0): Minimum number of messages required before caching begins.
  • unsupported_model_behavior (string, default: "warn"): Behavior when used with a non-Anthropic model. Options: 'ignore', 'warn', or 'raise'.
The middleware caches everything in each request up to and including the latest message. On subsequent requests within the TTL window (5 minutes or 1 hour), previously processed content is served from the cache instead of being reprocessed, significantly reducing costs and latency. How it works:
  1. First request: the system prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
  2. Second request: the cached content (system prompt, tools, and the first message) is served from the cache. Only the new message "What's my name?" and the model's reply from the first request need to be processed
  3. This pattern continues on every turn, with each request reusing the cached conversation history
Prompt caching reduces API costs by caching tokens, but it does not provide conversation memory. To persist conversation history across invocations, use a checkpointer such as MemorySaver:
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


LONG_PROMPT = """
Please be a helpful assistant.

<Lots more context ...>
"""

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    system_prompt=LONG_PROMPT,
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
    checkpointer=MemorySaver(),  # Persists conversation history
)

# Use a thread_id to maintain conversation state
config: RunnableConfig = {"configurable": {"thread_id": "user-123"}}

# First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob"
agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]}, config=config)

# Second invocation: Reuses cached system prompt, tools, and previous messages
# The checkpointer maintains conversation history, so the agent remembers "Bob"
result = agent.invoke({"messages": [HumanMessage("What's my name?")]}, config=config)
print(result["messages"][-1].content)
Your name is Bob! You told me that when you introduced yourself at the start of our conversation.
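To confirm that caching is actually taking effect, you can inspect the usage metadata on the returned AI message. The helper below is a minimal sketch; it assumes the usage metadata exposes an input_token_details entry with cache_read and cache_creation counts (the shape used by recent LangChain releases — verify against your installed version):

```python
# Sketch: summarize cache activity from a usage_metadata dict.
# Assumption: "input_token_details" carries "cache_read" and
# "cache_creation" counts, as in recent LangChain releases.

def cache_summary(usage_metadata: dict) -> str:
    details = usage_metadata.get("input_token_details", {})
    creation = details.get("cache_creation", 0)
    read = details.get("cache_read", 0)
    return f"cache_creation={creation}, cache_read={read}"

# Hand-written metadata shaped like a second, cache-hitting request:
sample = {
    "input_tokens": 1520,
    "output_tokens": 40,
    "input_token_details": {"cache_read": 1480, "cache_creation": 0},
}
print(cache_summary(sample))  # cache_creation=0, cache_read=1480
```

In a real run you would pass result["messages"][-1].usage_metadata to this helper; a large cache_read count on the second invocation indicates the cached prefix was reused.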

Bash tool

Runs Claude's native bash_20250124 tool with local command execution. The bash tool middleware is a good fit when you want to:
  • Execute Claude's built-in bash tool locally
  • Take advantage of Claude's optimized bash tool interface
  • Build agents that need a persistent shell session with Anthropic models
The middleware wraps ShellToolMiddleware and exposes it as Claude's native bash tool.
API reference: ClaudeBashToolMiddleware
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root="/workspace",
        ),
    ],
)
ClaudeBashToolMiddleware accepts all ShellToolMiddleware parameters, including:
  • workspace_root (str | Path | None): Root directory for the shell session
  • startup_commands (tuple[str, ...] | list[str] | str | None): Commands to run when the session starts
  • execution_policy (BaseExecutionPolicy | None): Execution policy (HostExecutionPolicy, DockerExecutionPolicy, or CodexSandboxExecutionPolicy)
  • redaction_rules (tuple[RedactionRule, ...] | list[RedactionRule] | None): Rules for sanitizing command output
See Shell tool for full configuration details.
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import DockerExecutionPolicy

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="agent-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root=workspace,
            startup_commands=["echo 'Session initialized'"],
            execution_policy=DockerExecutionPolicy(
                image="python:3.11-slim",
            ),
        ),
    ],
)

# Claude can now use its native bash tool
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What version of Python is installed?"}]}
)
print(result["messages"][-1].content)
Python 3.11.14 is installed.

Text editor

Provides Claude's text editor tool (text_editor_20250728) for file creation and editing. The text editor middleware is a good fit for:
  • File-based agent workflows
  • Code editing and refactoring tasks
  • Multi-file project work
  • Agents that need persistent file storage
Two variants are available: state-based (files stored in LangGraph state) and filesystem-based (files stored on disk).
API reference: StateClaudeTextEditorMiddleware, FilesystemClaudeTextEditorMiddleware
State-based text editor
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[StateClaudeTextEditorMiddleware()],
)
Filesystem-based text editor
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeTextEditorMiddleware(
            root_path="/workspace",
        ),
    ],
)
Claude's text editor tool supports the following commands:
  • view - View file contents or list a directory
  • create - Create a new file
  • str_replace - Replace a string in a file
  • insert - Insert text at a given line number
  • delete - Delete a file
  • rename - Rename/move a file
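To make the command list concrete, here is an illustrative sketch of the tool inputs Claude emits for a few of these commands. The field names (path, file_text, old_str, new_str, insert_line) follow Anthropic's published text editor schema and are an assumption here, not the middleware's exact wire format:

```python
# Illustrative tool inputs for the text editor commands above.
# Field names follow Anthropic's text editor schema (assumed).

create_call = {
    "command": "create",
    "path": "/project/hello.py",
    "file_text": 'print("Hello, World!")\n',
}

str_replace_call = {
    "command": "str_replace",
    "path": "/project/hello.py",
    "old_str": 'print("Hello, World!")',
    "new_str": 'print("Hello, middleware!")',
}

insert_call = {
    "command": "insert",
    "path": "/project/hello.py",
    "insert_line": 1,  # insert after line 1
    "new_str": "# greeting script\n",
}

for call in (create_call, str_replace_call, insert_call):
    print(call["command"], "->", call["path"])
```

The middleware receives these inputs, applies them to state or disk depending on the variant, and returns the result to Claude as a tool message.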
StateClaudeTextEditorMiddleware (state-based)
  • allowed_path_prefixes (Sequence[str] | None): Optional list of allowed path prefixes. If provided, only paths beginning with one of these prefixes are allowed.
FilesystemClaudeTextEditorMiddleware (filesystem-based)
  • root_path (str, required): Root directory for file operations
  • allowed_prefixes (list[str] | None): Optional list of allowed virtual path prefixes (default: ["/"])
  • max_file_size_mb (int, default: 10): Maximum file size in MB
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(
            allowed_path_prefixes=["/project"],
        ),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now create and edit files (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /project/hello.py with a simple hello world program"}]},
    config=config,
)
print(result["messages"][-1].content)
I've created a simple "Hello, World!" program at `/project/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed.
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent


# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="editor-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeTextEditorMiddleware(
            root_path=workspace,
            allowed_prefixes=["/src"],
            max_file_size_mb=10,
        ),
    ],
)

# Claude can now create and edit files (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /src/hello.py with a simple hello world program"}]}
)
print(result["messages"][-1].content)
I've created a simple "Hello, World!" program at `/src/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed.

Memory

Provides Claude's memory tool (memory_20250818) for persistent agent memory across conversation turns. The memory middleware is a good fit for:
  • Long-running agent conversations
  • Maintaining context across interruptions
  • Task progress tracking
  • Persistent agent state management
Claude's memory tool uses a /memories directory and automatically injects a system prompt that encourages the agent to check and update its memory.
API reference: StateClaudeMemoryMiddleware, FilesystemClaudeMemoryMiddleware
State-based memory
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[StateClaudeMemoryMiddleware()],
)
Filesystem-based memory
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent

agent_fs = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(
            root_path="/workspace",
        ),
    ],
)
StateClaudeMemoryMiddleware (state-based)
  • allowed_path_prefixes (Sequence[str] | None): Optional list of allowed path prefixes. Defaults to ["/memories"].
  • system_prompt (str): System prompt to inject. Defaults to Anthropic's recommended memory prompt, which encourages the agent to check and update its memory.
FilesystemClaudeMemoryMiddleware (filesystem-based)
  • root_path (str, required): Root directory for file operations
  • allowed_prefixes (list[str] | None): Optional list of allowed virtual path prefixes. Defaults to ["/memories"].
  • max_file_size_mb (int, default: 10): Maximum file size in MB
  • system_prompt (str): System prompt to inject
The agent will automatically:
  1. Check the /memories directory at startup
  2. Record progress and ideas during execution
  3. Update memory files as work proceeds
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[StateClaudeMemoryMiddleware()],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now use memory to track progress (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]},
    config=config,
)
print(result["messages"][-1].content)
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations.
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent


# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="memory-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(
            root_path=workspace,
        ),
    ],
)

# Claude can now use memory to track progress (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]}
)
print(result["messages"][-1].content)
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations.

File search

Provides Glob and Grep search tools for files stored in LangGraph state. The file search middleware is a good fit for:
  • Searching a state-based virtual filesystem
  • Working alongside the text editor and memory tools
  • Finding files by pattern
  • Content search with regular expressions
API reference: StateFileSearchMiddleware
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeTextEditorMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(),
        StateFileSearchMiddleware(),  # Search text editor files
    ],
)
  • state_key (str, default: "text_editor_files"): The state key containing the files to search. Use "text_editor_files" to search text editor files, or "memory_files" to search memory files.
The middleware adds Glob and Grep search tools that operate on state-based files.
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeTextEditorMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(),
        StateFileSearchMiddleware(state_key="text_editor_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Create some files using the text editor tool
result = agent.invoke(
    {"messages": [HumanMessage("Create a Python project with main.py, utils/helpers.py, and tests/test_main.py")]},
    config=config,
)

# The agent creates files, which are stored in state
print("Files created:", list(result["text_editor_files"].keys()))

# Second invocation: Search the files we just created
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Find all Python files in the project")]},
    config=config,
)
print(result["messages"][-1].content)
Files created: ['/project/main.py', '/project/utils/helpers.py', '/project/utils/__init__.py', '/project/tests/test_main.py', '/project/tests/__init__.py', '/project/README.md']
I found 5 Python files in the project:

1. `/project/main.py` - Main application file
2. `/project/utils/__init__.py` - Utils package initialization
3. `/project/utils/helpers.py` - Helper utilities
4. `/project/tests/__init__.py` - Tests package initialization
5. `/project/tests/test_main.py` - Main test file

Would you like me to view the contents of any of these files?
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeMemoryMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver


agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeMemoryMiddleware(),
        StateFileSearchMiddleware(state_key="memory_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Record some memories
result = agent.invoke(
    {"messages": [HumanMessage("Remember that the project deadline is March 15th and code review deadline is March 10th")]},
    config=config,
)

# The agent creates memory files, which are stored in state
print("Memory files created:", list(result["memory_files"].keys()))

# Second invocation: Search the memories we just recorded
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Search my memories for project deadlines")]},
    config=config,
)
print(result["messages"][-1].content)
Memory files created: ['/memories/project_info.md']
I found your project deadlines in my memory! Here's what I have recorded:

## Important Deadlines
- **Code Review Deadline:** March 10th
- **Project Deadline:** March 15th

## Notes
- Code review must be completed 5 days before final project deadline
- Need to ensure all code is ready for review by March 10th

Is there anything specific about these deadlines you'd like to know or update?