First request: The system prompt, tools, and the user message "Hi, my name is Bob" are sent to the API and cached
Second request: The cached content (system prompt, tools, and the first message) is read from the cache. Only the new message "What's my name?" and the model's reply from the first request need to be processed
This pattern continues on every conversation turn, with each request reusing the cached conversation history
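The cache grows incrementally: each reply joins the cached prefix, so later turns only pay full price for the newest messages. A toy accounting of that flow (the token counts are invented for illustration; the real per-request numbers come back in the API's usage metadata):

```python
# Toy model of incremental prompt caching across conversation turns.
# Token counts below are made up for illustration only.
system_and_tools = 2000  # system prompt + tool definitions, cached after turn 1
cached = 0               # tokens covered by the cache when the request is sent

for user_msg, user_tok, reply_tok in [
    ("Hi, my name is Bob", 10, 12),
    ("What's my name?", 8, 9),
]:
    # Only content not yet in the cache is processed at full price.
    new = (system_and_tools if cached == 0 else 0) + user_tok
    print(f"{user_msg!r}: {cached} tokens read from cache, {new} newly processed")
    # After the response, the reply joins the cached prefix for the next turn.
    cached += new + reply_tok
```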
Prompt caching lowers API costs by caching tokens, but it does not provide conversation memory. To persist conversation history across invocations, use a checkpointer such as MemorySaver.
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import AnthropicPromptCachingMiddleware
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver

LONG_PROMPT = """Please be a helpful assistant.

<Lots more context ...>"""

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    system_prompt=LONG_PROMPT,
    middleware=[AnthropicPromptCachingMiddleware(ttl="5m")],
    checkpointer=MemorySaver(),  # Persists conversation history
)

# Use a thread_id to maintain conversation state
config: RunnableConfig = {"configurable": {"thread_id": "user-123"}}

# First invocation: Creates cache with system prompt, tools, and "Hi, my name is Bob"
agent.invoke({"messages": [HumanMessage("Hi, my name is Bob")]}, config=config)

# Second invocation: Reuses cached system prompt, tools, and previous messages
# The checkpointer maintains conversation history, so the agent remembers "Bob"
result = agent.invoke({"messages": [HumanMessage("What's my name?")]}, config=config)
print(result["messages"][-1].content)
Your name is Bob! You told me that when you introduced yourself at the start of our conversation.
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import ClaudeBashToolMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import DockerExecutionPolicy

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="agent-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        ClaudeBashToolMiddleware(
            workspace_root=workspace,
            startup_commands=["echo 'Session initialized'"],
            execution_policy=DockerExecutionPolicy(
                image="python:3.11-slim",
            ),
        ),
    ],
)

# Claude can now use its native bash tool
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What version of Python is installed?"}]}
)
print(result["messages"][-1].content)
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeTextEditorMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(
            allowed_path_prefixes=["/project"],
        ),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now create and edit files (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /project/hello.py with a simple hello world program"}]},
    config=config,
)
print(result["messages"][-1].content)
I've created a simple "Hello, World!" program at `/project/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed.
Full example: Filesystem-based text editor
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeTextEditorMiddleware
from langchain.agents import create_agent

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="editor-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeTextEditorMiddleware(
            root_path=workspace,
            allowed_prefixes=["/src"],
            max_file_size_mb=10,
        ),
    ],
)

# Claude can now create and edit files (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Create a file at /src/hello.py with a simple hello world program"}]}
)
print(result["messages"][-1].content)
I've created a simple "Hello, World!" program at `/src/hello.py`. The program uses Python's `print()` function to display "Hello, World!" to the console when executed.
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import StateClaudeMemoryMiddleware
from langchain.agents import create_agent
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[StateClaudeMemoryMiddleware()],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# Claude can now use memory to track progress (stored in LangGraph state)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]},
    config=config,
)
print(result["messages"][-1].content)
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations.
Full example: Filesystem-based memory
The agent will automatically:
- Check the /memories directory at startup
- Record progress and ideas during execution
- Update memory files as work proceeds
import tempfile

from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import FilesystemClaudeMemoryMiddleware
from langchain.agents import create_agent

# Create a temporary workspace directory for this demo.
# In production, use a persistent directory path.
workspace = tempfile.mkdtemp(prefix="memory-workspace-")

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        FilesystemClaudeMemoryMiddleware(
            root_path=workspace,
        ),
    ],
)

# Claude can now use memory to track progress (stored on disk)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue, then confirm what you stored."}]}
)
print(result["messages"][-1].content)
Perfect! I've stored your favorite color as **blue** in my memory system. The information is saved in my user preferences file where I can access it in future conversations.
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeTextEditorMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeTextEditorMiddleware(),
        StateFileSearchMiddleware(state_key="text_editor_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Create some files using the text editor tool
result = agent.invoke(
    {"messages": [HumanMessage("Create a Python project with main.py, utils/helpers.py, and tests/test_main.py")]},
    config=config,
)

# The agent creates files, which are stored in state
print("Files created:", list(result["text_editor_files"].keys()))

# Second invocation: Search the files we just created
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Find all Python files in the project")]},
    config=config,
)
print(result["messages"][-1].content)
I found 5 Python files in the project:

1. `/project/main.py` - Main application file
2. `/project/utils/__init__.py` - Utils package initialization
3. `/project/utils/helpers.py` - Helper utilities
4. `/project/tests/__init__.py` - Tests package initialization
5. `/project/tests/test_main.py` - Main test file

Would you like me to view the contents of any of these files?
Full example: Searching memory files
from langchain_anthropic import ChatAnthropic
from langchain_anthropic.middleware import (
    StateClaudeMemoryMiddleware,
    StateFileSearchMiddleware,
)
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver

agent = create_agent(
    model=ChatAnthropic(model="claude-sonnet-4-6"),
    tools=[],
    middleware=[
        StateClaudeMemoryMiddleware(),
        StateFileSearchMiddleware(state_key="memory_files"),
    ],
    checkpointer=MemorySaver(),
)

# Use a thread_id to persist state across invocations
config: RunnableConfig = {"configurable": {"thread_id": "my-session"}}

# First invocation: Record some memories
result = agent.invoke(
    {"messages": [HumanMessage("Remember that the project deadline is March 15th and code review deadline is March 10th")]},
    config=config,
)

# The agent creates memory files, which are stored in state
print("Memory files created:", list(result["memory_files"].keys()))

# Second invocation: Search the memories we just recorded
# State is automatically persisted via the checkpointer
result = agent.invoke(
    {"messages": [HumanMessage("Search my memories for project deadlines")]},
    config=config,
)
print(result["messages"][-1].content)
I found your project deadlines in my memory! Here's what I have recorded:

## Important Deadlines

- **Code Review Deadline:** March 10th
- **Project Deadline:** March 15th

## Notes

- Code review must be completed 5 days before final project deadline
- Need to ensure all code is ready for review by March 10th

Is there anything specific about these deadlines you'd like to know or update?