API reference
For detailed documentation of all features and configuration options, see the ChatAnthropic API reference.
AWS Bedrock and Google VertexAI
Note that some Anthropic models are also available via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models through those services. For Anthropic models on AWS Bedrock that use the same API as ChatAnthropic, use ChatAnthropicBedrock from langchain-aws.
Overview
Integration details
| Class | Package | Serializable | JS/TS support | Downloads | Latest version |
|---|---|---|---|---|---|
| ChatAnthropic | langchain-anthropic | beta | ✅ (npm) | | |
Model features
Setup
To access Anthropic (Claude) models, you'll need to install the langchain-anthropic integration package and get a Claude API key.
Installation
pip install -U langchain-anthropic
Credentials
Head to the Claude console to sign up and generate a Claude API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
import getpass
import os
if "ANTHROPIC_API_KEY" not in os.environ:
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")
(Optional) To enable automated tracing of your model calls, set your LangSmith API key:
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
os.environ["LANGSMITH_TRACING"] = "true"
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-haiku-4-5-20251001",
# temperature=,
# max_tokens=,
# timeout=,
# max_retries=,
# ...
)
For a full list of supported parameters, see the ChatAnthropic API reference.
Invocation
messages = [
(
"system",
"You are a helpful translator. Translate the user sentence to French.",
),
(
"human",
"I love programming.",
),
]
ai_msg = model.invoke(messages)
print(ai_msg.text)
J'adore la programmation.
Streaming
for chunk in model.stream(messages):
print(chunk.text, end="")
AIMessageChunk(content="J", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="'", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="a", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="ime", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=" la", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=" programm", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content="ation", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
AIMessageChunk(content=".", id="run-272ff5f9-8485-402c-b90d-eac8babc5b25")
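As a plain-Python illustration, the streamed fragments above reassemble into the full response text by simple concatenation (the fragment list is inlined from the output above):

```python
# Text fragments from the streamed AIMessageChunk objects above
chunks = ["J", "'", "a", "ime", " la", " programm", "ation", "."]

# Concatenating the fragments yields the complete response text
full_text = "".join(chunks)
print(full_text)  # → J'aime la programmation.
```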
Chunks can also be aggregated into a single message:
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
AIMessageChunk(content="J'aime la programmation.", id="run-b34faef0-882f-4869-a19c-ed2b856e6361")
Async
await model.ainvoke(messages)
# stream
async for chunk in model.astream(messages):
    print(chunk.text, end="")
# batch
await model.abatch([messages])
AIMessage(
content="J'aime la programmation.",
response_metadata={
"id": "msg_01Trik66aiQ9Z1higrD5XFx3",
"model": "claude-sonnet-4-6",
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {"input_tokens": 25, "output_tokens": 11},
},
id="run-5886ac5f-3c2e-49f5-8a44-b1e92808c929-0",
usage_metadata={
"input_tokens": 25,
"output_tokens": 11,
"total_tokens": 36,
},
)
Content blocks
The content of a single Anthropic AIMessage can be a single string or a list of content blocks when using tools, extended thinking, and other features.
For example, when an Anthropic model calls a tool, the tool call is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls):
from langchain_anthropic import ChatAnthropic
from typing_extensions import Annotated
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
def get_weather(
location: Annotated[str, ..., "Location as city and state."]
) -> str:
"""Get the weather at a location."""
return "It's sunny."
model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("Which city is hotter today: LA or NY?")
response.content
[{'text': "I'll help you compare the temperatures of Los Angeles and New York by checking their current weather. I'll retrieve the weather for both cities.",
'type': 'text'},
{'id': 'toolu_01CkMaXrgmsNjTso7so94RJq',
'input': {'location': 'Los Angeles, CA'},
'name': 'get_weather',
'type': 'tool_use'},
{'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf',
'input': {'location': 'New York, NY'},
'name': 'get_weather',
'type': 'tool_use'}]
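For illustration, tool_use blocks can be pulled out of a raw content list like the one above with ordinary filtering (a plain-Python sketch with the data inlined):

```python
# Content blocks shaped like the response above
content = [
    {"text": "I'll check the weather in both cities.", "type": "text"},
    {"id": "toolu_01CkMaXrgmsNjTso7so94RJq",
     "input": {"location": "Los Angeles, CA"},
     "name": "get_weather", "type": "tool_use"},
    {"id": "toolu_01SKaTBk9wHjsBTw5mrPVSQf",
     "input": {"location": "New York, NY"},
     "name": "get_weather", "type": "tool_use"},
]

# Keep only the tool_use blocks
tool_uses = [block for block in content if block.get("type") == "tool_use"]
print([block["input"]["location"] for block in tool_uses])
# → ['Los Angeles, CA', 'New York, NY']
```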
content_blocks will render the content in LangChain's standard format, which is consistent across other model providers. Learn more about content blocks.
response.content_blocks
The tool_calls attribute provides access to the tool calls specifically, in a standard format:
response.tool_calls
[{'name': 'get_weather',
  'args': {'location': 'Los Angeles, CA'},
  'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},
 {'name': 'get_weather',
  'args': {'location': 'New York, NY'},
  'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]
Tools
Anthropic's tool use feature lets you define external functions that Claude can call during a conversation. This enables dynamic information retrieval, computation, and interaction with external systems. For details on binding tools to a model instance, see ChatAnthropic.bind_tools.
For information on Claude's built-in tools (code execution, web browsing, the Files API, etc.), see Built-in tools.
from pydantic import BaseModel, Field
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
model_with_tools = model.bind_tools([GetWeather, GetPopulation])
ai_msg = model_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
[
{
"name": "GetWeather",
"args": {"location": "Los Angeles, CA"},
"id": "toolu_01KzpPEAgzura7hpBqwHbWdo",
},
{
"name": "GetWeather",
"args": {"location": "New York, NY"},
"id": "toolu_01JtgbVGVJbiSwtZk3Uycezx",
},
{
"name": "GetPopulation",
"args": {"location": "Los Angeles, CA"},
"id": "toolu_01429aygngesudV9nTbCKGuw",
},
{
"name": "GetPopulation",
"args": {"location": "New York, NY"},
"id": "toolu_01JPktyd44tVMeBcPPnFSEJG",
},
]
Strict tool use
Strict tool use requires:
- Claude Sonnet 4.5 or Opus 4.1
- langchain-anthropic>=1.1.0
Without strict mode, tool inputs may contain errors such as:
- Type mismatches: passengers: "2" instead of passengers: 2
- Missing required fields: omitting fields the function expects
- Invalid enum values: values outside the allowed set
- Schema violations: nested objects that don't match the expected structure
With strict mode:
- Tool inputs strictly follow your input_schema
- Field types and required fields are guaranteed
- Error handling for malformed inputs is eliminated
- The tool name used always comes from the provided tools
| Use strict tool mode for | Use standard tool calling for |
|---|---|
| Agentic workflows where reliability is critical | Simple single-turn tool calls |
| Tools with many parameters or nested objects | Prototyping and experimentation |
| Functions requiring specific types (e.g. int vs. str) | |
To enable it, specify strict=True when calling bind_tools:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
def get_weather(location: str) -> str:
"""Get the weather at a location."""
return "It's sunny."
model_with_tools = model.bind_tools([get_weather], strict=True)
Example: a type-safe booking system
Consider a booking system where passengers must be an integer:
from langchain_anthropic import ChatAnthropic
from typing import Literal
model = ChatAnthropic(model="claude-sonnet-4-6")
def book_flight(
destination: str,
departure_date: str,
passengers: int,
cabin_class: Literal["economy", "business", "first"]
) -> str:
"""Book a flight to a destination.
Args:
destination: The destination city
departure_date: Date in YYYY-MM-DD format
passengers: Number of passengers (must be an integer)
cabin_class: The cabin class for the flight
"""
return f"Booked {passengers} passengers to {destination}"
model_with_tools = model.bind_tools(
[book_flight],
strict=True,
tool_choice="any",
)
response = model_with_tools.invoke("Book 2 passengers to Tokyo, business class, 2025-01-15")
# With strict=True, passengers is guaranteed to be int, not "2" or "two"
print(response.tool_calls[0]["args"]["passengers"])
2
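What strict mode guarantees can be spot-checked on the returned tool call, since it is plain data. A sketch with an inlined tool_call dict shaped like the response above (the id is a placeholder):

```python
tool_call = {
    "name": "book_flight",
    "args": {
        "destination": "Tokyo",
        "departure_date": "2025-01-15",
        "passengers": 2,            # guaranteed int, never "2" or "two"
        "cabin_class": "business",  # guaranteed member of the Literal set
    },
    "id": "toolu_01_example",
}

# Validate the guarantees strict mode provides
assert isinstance(tool_call["args"]["passengers"], int)
assert tool_call["args"]["cabin_class"] in {"economy", "business", "first"}
print("tool call conforms to the schema")
```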
Input examples
For complex tools, you can provide usage examples to help Claude understand how to use them correctly. This is done by setting input_examples in the tool's extras parameter:
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
@tool(
extras={
"input_examples": [
{
"query": "weather report",
"location": "San Francisco",
"format": "detailed"
},
{
"query": "temperature",
"location": "New York",
"format": "brief"
}
]
}
)
def search_weather_data(query: str, location: str, format: str = "brief") -> str:
"""Search weather database with specific query and format preferences.
Args:
query: The type of weather information to retrieve
location: City or region to search
format: Output format, either 'brief' or 'detailed'
"""
return f"{format.title()} {query} for {location}: Data found"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_tools = model.bind_tools([search_weather_data])
response = model_with_tools.invoke(
"Get me a detailed weather report for Seattle"
)
The extras parameter also supports:
Fine-grained tool streaming
Anthropic supports fine-grained tool streaming, a beta feature that reduces latency when streaming tool calls with large arguments. Rather than buffering an entire argument value before transmission, fine-grained streaming sends the data as soon as it is available. For large tool arguments, this can reduce initial latency from around 15 seconds to about 3 seconds.
Fine-grained streaming may return invalid or incomplete JSON input, particularly if the response hits max_tokens before completing. Implement appropriate error handling for incomplete JSON data.
To enable it, specify the fine-grained-tool-streaming-2025-05-14 beta header:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
betas=["fine-grained-tool-streaming-2025-05-14"],
)
def write_document(title: str, content: str) -> str:
"""Write a document with the given title and content."""
return f"Document '{title}' written successfully"
model_with_tools = model.bind_tools([write_document])
# Stream tool calls with reduced latency
for chunk in model_with_tools.stream(
"Write a detailed technical document about the benefits of streaming APIs"
):
print(chunk.content)
Tool arguments arrive as input_json_delta blocks in chunk.content. You can accumulate these to build up the complete tool arguments:
import json
accumulated_json = ""
for chunk in model_with_tools.stream("Write a document about AI"):
for block in chunk.content:
if isinstance(block, dict) and block.get("type") == "input_json_delta":
accumulated_json += block.get("partial_json", "")
try:
# Try to parse accumulated JSON
parsed = json.loads(accumulated_json)
print(f"Complete args: {parsed}")
except json.JSONDecodeError:
# JSON is still incomplete, continue accumulating
pass
Complete args: {'title': 'Artificial Intelligence: An Overview', 'content': '# Artificial Intelligence: An Overview...
Programmatic tool calling
Programmatic tool calling requires:
- Claude Sonnet 4.5 or Opus 4.5
- langchain-anthropic>=1.3.0
- The advanced-tool-use-2025-11-20 beta header to enable programmatic tool calling
- Including the code execution built-in tool in your tool set
- Specifying extras={"allowed_callers": ["code_execution_20250825"]} on tools you want to be callable programmatically
You can specify reuse_last_container at initialization to automatically reuse the code execution container from a previous model response. Below is a complete example using create_agent:
from langchain.agents import create_agent
from langchain.tools import tool
from langchain_anthropic import ChatAnthropic
@tool(extras={"allowed_callers": ["code_execution_20250825"]})
def get_weather(location: str) -> str:
"""Get the weather at a location."""
return "It's sunny."
tools = [
{"type": "code_execution_20250825", "name": "code_execution"},
get_weather,
]
model = ChatAnthropic(
model="claude-sonnet-4-5",
betas=["advanced-tool-use-2025-11-20"],
reuse_last_container=True,
)
agent = create_agent(model, tools=tools)
input_query = {
"role": "user",
"content": "What's the weather in Boston?",
}
result = agent.invoke({"messages": [input_query]})
Multimodal
Claude supports image and PDF inputs as content blocks, both in Anthropic's native format (see the docs for vision and PDF support) and in LangChain's standard format.
Supported input methods
| Method | Images | PDFs |
|---|---|---|
| Base64 inline data | ✅ | ✅ |
| HTTP/HTTPS URL | ✅ | ✅ |
| Files API | ✅ | ✅ |
The Files API can also be used to upload files to a container for use with Claude's built-in code execution tool. See the code execution section for details.
Image input
Provide image inputs alongside text using a HumanMessage with list content format:
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage
model = ChatAnthropic(model="claude-sonnet-4-6")
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the image at the URL."},
{
"type": "image",
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
]
)
response = model.invoke([message])
PDF input
Provide PDF file inputs alongside text:
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage
model = ChatAnthropic(model="claude-sonnet-4-6")
message = HumanMessage(
content=[
{"type": "text", "text": "Summarize this document."},
{
"type": "file",
"url": "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
"mime_type": "application/pdf",
},
]
)
response = model.invoke([message])
Extended thinking
Some Claude models support an extended thinking feature, which outputs the step-by-step reasoning process that led to the final answer. See the Claude docs for compatible models. To use extended thinking, specify the thinking parameter when initializing ChatAnthropic. It can also be passed in as a parameter at invocation time if needed.
For Claude Sonnet and earlier models, you need to specify a token budget. For Claude Opus 4.6+, adaptive thinking is available, which determines the budget automatically.
import json
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
max_tokens=5000,
thinking={"type": "enabled", "budget_tokens": 2000},
)
response = model.invoke("What is the cube root of 50.653?")
print(json.dumps(response.content_blocks, indent=2))
[
{
"type": "reasoning",
"reasoning": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.",
"extras": {"signature": "ErUBCkYIBxgCIkB0UjV..."}
},
{
"type": "text",
"text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number.",
}
]
The Claude Messages API handles thinking differently between Claude Sonnet 3.7 and Claude 4 models. See the Claude docs for more information.
Effort
Some Claude models support an effort feature for controlling how many tokens Claude uses when responding. This is useful for balancing response quality against latency and cost.
Model support:
- Generally available: Claude Opus 4.6 and Claude Opus 4.5
- The max effort level is only supported by Claude Opus 4.6
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-opus-4-5-20251101",
effort="medium", # Options: "max", "high", "medium", "low"
)
response = model.invoke("Analyze the trade-offs between microservices and monolithic architectures")
Setting effort to "high" behaves exactly the same as omitting the parameter entirely.
Citations
Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents provided by the user. When a document or search_result content block with "citations": {"enabled": True} is included in a query, Claude may generate citations in its response.
Simple example
In this example we pass a plain-text document. Under the hood, Claude automatically chunks the input text into sentences, which are used when generating citations:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
messages = [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "text",
"media_type": "text/plain",
"data": "The grass is green. The sky is blue.",
},
"title": "My Document",
"context": "This is a trustworthy document.",
"citations": {"enabled": True},
},
{"type": "text", "text": "What color is the grass and sky?"},
],
}
]
response = model.invoke(messages)
response.content
[{'text': 'Based on the document, ', 'type': 'text'},
{'text': 'the grass is green',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The grass is green. ',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 0,
'end_char_index': 20}]},
{'text': ', and ', 'type': 'text'},
{'text': 'the sky is blue',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The sky is blue.',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 20,
'end_char_index': 36}]},
{'text': '.', 'type': 'text'}]
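Because the citations are plain data attached to the content blocks, they can be collected with ordinary list processing. A minimal sketch (extract_citations is a hypothetical helper, not part of langchain-anthropic; the block dicts are shaped like the output above):

```python
def extract_citations(content: list) -> list[dict]:
    """Collect citation entries from a list of content block dicts."""
    citations = []
    for block in content:
        if isinstance(block, dict):
            citations.extend(block.get("citations") or [])
    return citations

# Content blocks shaped like the response above
content = [
    {"text": "Based on the document, ", "type": "text"},
    {"text": "the grass is green", "type": "text",
     "citations": [{"type": "char_location",
                    "cited_text": "The grass is green. ",
                    "document_index": 0,
                    "document_title": "My Document",
                    "start_char_index": 0,
                    "end_char_index": 20}]},
]
print([c["cited_text"] for c in extract_citations(content)])
# → ['The grass is green. ']
```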
In tool results (agentic RAG)
Claude supports search_result content blocks, representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to Claude directly (as in the example above) or within tool results. This lets Claude cite elements of its response using the results of tool calls. To pass search results back in response to tool calls, define a tool that returns a list of search_result content blocks in Anthropic's native format. For example:
def retrieval_tool(query: str) -> list[dict]:
"""Access my knowledge base."""
# Run a search (e.g., with a LangChain vector store)
results = vector_store.similarity_search(query=query, k=2)
# Package results into search_result blocks
return [
{
"type": "search_result",
# Customize fields as desired, using document metadata or otherwise
"title": "My Document Title",
"source": "Source description or provenance",
"citations": {"enabled": True},
"content": [{"type": "text", "text": doc.page_content}],
}
for doc in results
]
End-to-end example with LangGraph
Here we demonstrate an end-to-end example in which we populate a LangChain vector store with sample documents and equip Claude with a tool that queries them. The tool here takes a search query and a category string literal, but any valid tool signature can be used. This example requires langchain-openai and numpy:
pip install langchain-openai numpy
from typing import Literal
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
# Set up vector store
# Ensure you set your OPENAI_API_KEY environment variable
embeddings = init_embeddings("openai:text-embedding-3-small")
vector_store = InMemoryVectorStore(embeddings)
document_1 = Document(
id="1",
page_content=(
"To request vacation days, submit a leave request form through the "
"HR portal. Approval will be sent by email."
),
metadata={
"category": "HR Policy",
"doc_title": "Leave Policy",
"provenance": "Leave Policy - page 1",
},
)
document_2 = Document(
id="2",
page_content="Managers will review vacation requests within 3 business days.",
metadata={
"category": "HR Policy",
"doc_title": "Leave Policy",
"provenance": "Leave Policy - page 2",
},
)
document_3 = Document(
id="3",
page_content=(
"Employees with over 6 months tenure are eligible for 20 paid vacation days "
"per year."
),
metadata={
"category": "Benefits Policy",
"doc_title": "Benefits Guide 2025",
"provenance": "Benefits Policy - page 1",
},
)
documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)
# Define tool
async def retrieval_tool(
query: str, category: Literal["HR Policy", "Benefits Policy"]
) -> list[dict]:
"""Access my knowledge base."""
def _filter_function(doc: Document) -> bool:
return doc.metadata.get("category") == category
results = vector_store.similarity_search(
query=query, k=2, filter=_filter_function
)
return [
{
"type": "search_result",
"title": doc.metadata["doc_title"],
"source": doc.metadata["provenance"],
"citations": {"enabled": True},
"content": [{"type": "text", "text": doc.page_content}],
}
for doc in results
]
# Create agent
model = init_chat_model("claude-haiku-4-5-20251001")
checkpointer = InMemorySaver()
agent = create_agent(model, [retrieval_tool], checkpointer=checkpointer)
# Invoke on a query
config = {"configurable": {"thread_id": "session_1"}}
input_message = {
"role": "user",
"content": "How do I request vacation days?",
}
async for step in agent.astream(
{"messages": [input_message]},
config,
stream_mode="values",
):
step["messages"][-1].pretty_print()
Using with text splitters
Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits. See the example below, where we split LangChain's README.md (a Markdown document) and pass it to Claude as context.
This example requires langchain-text-splitters:
pip install langchain-text-splitters
import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter
def format_to_anthropic_documents(documents: list[str]):
return {
"type": "document",
"source": {
"type": "content",
"content": [{"type": "text", "text": document} for document in documents],
},
"citations": {"enabled": True},
}
# Pull readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
# Split into chunks
splitter = MarkdownTextSplitter(
chunk_overlap=0,
chunk_size=50,
)
documents = splitter.split_text(readme)
# Construct message
message = {
"role": "user",
"content": [
format_to_anthropic_documents(documents),
{"type": "text", "text": "Give me a link to LangChain's tutorials."},
],
}
# Query model
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
response = model.invoke([message])
Prompt caching
Anthropic supports caching of elements of your prompts, including messages, tool definitions, tool results, images, and documents. This lets you reuse large documents, instructions, few-shot examples, and other data to reduce latency and costs. To enable caching on an element of a prompt, mark its associated content block with the cache_control key, as in the examples below.
Only certain Claude models support prompt caching. See the Claude docs for details.
Messages
import requests
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
# Pull LangChain readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
messages = [
{
"role": "system",
"content": [
{
"type": "text",
"text": "You are a technology expert.",
},
{
"type": "text",
"text": f"{readme}",
"cache_control": {"type": "ephemeral"},
},
],
},
{
"role": "user",
"content": "What's LangChain, according to its README?",
},
]
response_1 = model.invoke(messages)
response_2 = model.invoke(messages)
usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]
print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1458}
Second:
{'cache_read': 1458, 'cache_creation': 0}
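Because these cache statistics are plain dictionaries, the effect of caching can be quantified directly. A small sketch (cache_hit_ratio is a hypothetical helper, not part of langchain-anthropic; the inputs are the values from the two invocations above):

```python
def cache_hit_ratio(input_token_details: dict) -> float:
    """Fraction of cacheable input tokens that were served from the cache."""
    read = input_token_details.get("cache_read", 0)
    created = input_token_details.get("cache_creation", 0)
    total = read + created
    return read / total if total else 0.0

# Values from the two invocations above
print(cache_hit_ratio({"cache_read": 0, "cache_creation": 1458}))  # → 0.0
print(cache_hit_ratio({"cache_read": 1458, "cache_creation": 0}))  # → 1.0
```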
You can also cache the full sequence of messages by passing cache_control as an invocation parameter:
response = model.invoke(
messages,
cache_control={"type": "ephemeral"},
)
Extended caching
The cache lifetime defaults to 5 minutes. If this is too short, you can apply a one-hour cache by enabling the "extended-cache-ttl-2025-04-11" beta header and specifying "cache_control": {"type": "ephemeral", "ttl": "1h"} on the message:
model = ChatAnthropic(
model="claude-sonnet-4-6",
betas=["extended-cache-ttl-2025-04-11"],
)
messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": f"{long_text}",
"cache_control": {"type": "ephemeral", "ttl": "1h"},
},
],
}
]
Details of the cached token counts are included in the InputTokenDetails of the response's usage_metadata:
response = model.invoke(messages)
response.usage_metadata
{
"input_tokens": 1500,
"output_tokens": 200,
"total_tokens": 1700,
"input_token_details": {
"cache_read": 0,
"cache_creation": 1000,
"ephemeral_1h_input_tokens": 750,
"ephemeral_5m_input_tokens": 250,
}
}
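Note that in this breakdown the two ephemeral buckets partition the newly written cache tokens. A quick consistency check on the numbers above:

```python
details = {
    "cache_read": 0,
    "cache_creation": 1000,
    "ephemeral_1h_input_tokens": 750,
    "ephemeral_5m_input_tokens": 250,
}

# The 1h and 5m buckets together account for all newly cached tokens
assert (details["ephemeral_1h_input_tokens"]
        + details["ephemeral_5m_input_tokens"]) == details["cache_creation"]
print("TTL buckets sum to cache_creation")
```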
Caching tools
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
# For demonstration purposes, we artificially expand the
# tool description.
description = (
"Get the weather at a location. "
f"By the way, check out this readme: {readme}"
)
@tool(description=description, extras={"cache_control": {"type": "ephemeral"}})
def get_weather(location: str) -> str:
return "It's sunny."
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_tools = model.bind_tools([get_weather])
query = "What's the weather in San Francisco?"
response_1 = model_with_tools.invoke(query)
response_2 = model_with_tools.invoke(query)
usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]
print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1809}
Second:
{'cache_read': 1809, 'cache_creation': 0}
Incremental caching in conversational applications
Prompt caching can be used in multi-turn conversations to maintain context from earlier messages without redundant processing. We can enable incremental caching by marking the final message with cache_control; Claude will then automatically use the longest previously cached prefix for follow-up messages.
Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the final content block in each user message with cache_control:
import requests
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph, add_messages
from typing_extensions import Annotated, TypedDict
model = ChatAnthropic(model="claude-sonnet-4-6")
# Pull LangChain readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
def messages_reducer(left: list, right: list) -> list:
# Update last user message
for i in range(len(right) - 1, -1, -1):
if right[i].type == "human":
right[i].content[-1]["cache_control"] = {"type": "ephemeral"}
break
return add_messages(left, right)
class State(TypedDict):
messages: Annotated[list, messages_reducer]
workflow = StateGraph(state_schema=State)
# Define the function that calls the model
def call_model(state: State):
response = model.invoke(state["messages"])
return {"messages": [response]}
# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
from langchain.messages import HumanMessage
config = {"configurable": {"thread_id": "abc123"}}
query = "Hi! I'm Bob."
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?
{'cache_read': 0, 'cache_creation': 0, 'ephemeral_5m_input_tokens': 0, 'ephemeral_1h_input_tokens': 0}
query = f"Check out this readme: {readme}"
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:
LangChain is:
- A framework for developing LLM-powered applications
- Helps chain together components and integrations to simplify AI application development
- Provides a standard interface for models, embeddings, vector stores, etc.
Key features/benefits:
- Real-time data augmentation (connect LLMs to diverse data sources)
- Model interoperability (swap models easily as needed)
- Large ecosystem of integrations
The LangChain ecosystem includes:
- LangSmith - For evaluations and observability
- LangGraph - For building complex agents with customizable architecture
- LangSmith - For deployment and scaling of agents
The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.
Is there anything specific about LangChain you'd like to know more about, Bob?
{'cache_read': 0, 'cache_creation': 1846, 'ephemeral_5m_input_tokens': 1846, 'ephemeral_1h_input_tokens': 0}
query = "What was my name again?"
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
Your name is Bob. You introduced yourself at the beginning of our conversation.
{'cache_read': 1846, 'cache_creation': 278, 'ephemeral_5m_input_tokens': 278, 'ephemeral_1h_input_tokens': 0}
Each turn, the reducer marks the final user message with the cache_control key, so the cached prefix grows automatically as the conversation continues.
Token counting
You can count tokens in messages before sending them to the model using get_num_tokens_from_messages(). This uses Anthropic's official token counting API.
Counting message tokens
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage, SystemMessage
model = ChatAnthropic(model="claude-sonnet-4-6")
messages = [
SystemMessage(content="You are a scientist"),
HumanMessage(content="Hello, Claude"),
]
token_count = model.get_num_tokens_from_messages(messages)
print(token_count)
14
Counting tokens with tools
Token counts can also be computed when binding tools:
from langchain.tools import tool
@tool(parse_docstring=True)
def get_weather(location: str) -> str:
"""Get the current weather in a given location
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "Sunny"
messages = [
HumanMessage(content="What's the weather like in San Francisco?"),
]
token_count = model.get_num_tokens_from_messages(messages, tools=[get_weather])
print(token_count)
586
Context management
Anthropic supports context management features that automatically manage the model's context window to optimize performance and cost. See the Claude docs for more details and configuration options.
Clearing tool uses
Clear tool results from context to reduce token usage while maintaining conversation flow. Context management is supported as of langchain-anthropic>=0.3.21. You must specify the context-management-2025-06-27 beta header to apply context management to model invocations:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
betas=["context-management-2025-06-27"],
context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
model_with_tools = model.bind_tools([{"type": "web_search_20250305", "name": "web_search"}])
response = model_with_tools.invoke("Search for recent developments in AI")
Automatic compaction
Claude Opus 4.6 supports automatic server-side compaction, which intelligently compresses conversation history as the context window approaches its limit. This allows longer conversations without manual context management. Automatic compaction requires:
- Claude Opus 4.6
- langchain-anthropic>=1.3.0
- The compact-2026-01-12 beta header
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-opus-4-6",
betas=["compact-2026-01-12"],
max_tokens=4096,
context_management={
"edits": [
{
"type": "compact_20260112",
"trigger": {"type": "input_tokens", "value": 50000},
}
]
},
)
When compaction occurs, ChatAnthropic will return compaction blocks representing the state of the prompt. These should be preserved in the message history passed back to the model in multi-turn applications.
Extended context window
Some models support a 1-million-token context window, currently in beta for organizations in usage tier 4 and organizations with custom rate limits. To enable the extended context window, specify the context-1m-2025-08-07 beta header:
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage
model = ChatAnthropic(
model="claude-sonnet-4-6",
betas=["context-1m-2025-08-07"],
)
long_document = """
This is a very long document that would benefit from the extended 1M
context window...
[imagine this continues for hundreds of thousands of tokens]
"""
messages = [
HumanMessage(f"""
Please analyze this document and provide a summary:
{long_document}
What are the key themes and main conclusions?
""")
]
response = model.invoke(messages)
Structured output
Structured output requires:
- Claude Sonnet 4.5 or Opus 4.1
- langchain-anthropic>=1.1.0
Single model invocation
Use the with_structured_output method to generate structured model responses. Specify method="json_schema" to enable Anthropic's native structured output feature; otherwise, the method defaults to function calling:
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel, Field
model = ChatAnthropic(model="claude-sonnet-4-6")
class Movie(BaseModel):
"""A movie with details."""
title: str = Field(..., description="The title of the movie")
year: int = Field(..., description="The year the movie was released")
director: str = Field(..., description="The director of the movie")
rating: float = Field(..., description="The movie's rating out of 10")
model_with_structure = model.with_structured_output(Movie, method="json_schema")
response = model_with_structure.invoke("Provide details about the movie Inception")
response
Movie(title='Inception', year=2010, director='Christopher Nolan', rating=8.8)
Agent response format
Specify response_format using ProviderStrategy to enable Anthropic's structured output feature when generating the final response:
from langchain.agents import create_agent
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel
class Weather(BaseModel):
temperature: float
condition: str
def weather_tool(location: str) -> str:
"""Get the weather at a location."""
return "Sunny and 75 degrees F."
agent = create_agent(
model="anthropic:claude-sonnet-4-5",
tools=[weather_tool],
response_format=ProviderStrategy(Weather),
)
result = agent.invoke({
"messages": [{"role": "user", "content": "What's the weather in SF?"}]
})
result["structured_response"]
Weather(temperature=75.0, condition='Sunny')
Built-in tools
Anthropic supports a variety of built-in client-side and server-side tools. Server-side tools (e.g. web search) are passed to the model and executed by Anthropic. Client-side tools (e.g. the bash tool) require you to implement execution logic in your application and return the results to the model. In either case, you make tools available to the chat model with bind_tools on the model instance.
Importantly, client-side tools require you to implement the execution logic. See the relevant sections below for examples.
Middleware vs. tools: for client-side tools (e.g. bash, text editor, memory) you can optionally use middleware, which provides production-ready implementations with built-in execution, state management, and security policies. Use middleware when you want an all-in-one solution; use tools (as described below) when you need custom execution logic or want to use bind_tools directly.
Beta tools: if you bind beta tools to your chat model, LangChain automatically adds the required beta headers for you.
Bash tool
Claude supports a client-side bash tool that lets it execute shell commands in a persistent bash session. This enables system operations, script execution, and command-line automation.
Important: you must provide the execution environment. LangChain handles the API integration (sending and receiving tool calls), but you are responsible for:
- Setting up a sandboxed compute environment (Docker, a VM, etc.)
- Implementing command execution and output capture
- Passing results back to Claude in an agent loop
Requires Claude 4 models or Claude Sonnet 3.7.
The examples below show three approaches: binding the tool with Anthropic's types in a manual tool loop, using create_agent, and passing a plain dictionary tool spec. With Anthropic's types:
import subprocess
from anthropic.types.beta import BetaToolBash20250124Param
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage, ToolMessage
from langchain.tools import tool
tool_spec = BetaToolBash20250124Param(
name="bash",
type="bash_20250124",
)
@tool(extras={"provider_tool_definition": tool_spec})
def bash(*, command: str, restart: bool = False, **kw):
"""Execute a bash command."""
if restart:
return "Bash session restarted"
try:
result = subprocess.run(
command,
shell=True,
capture_output=True,
text=True,
timeout=30,
)
return result.stdout + result.stderr
except Exception as e:
return f"Error: {e}"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_bash = model.bind_tools([bash])
# Initial request
messages = [HumanMessage("List all files in the current directory")]
response = model_with_bash.invoke(messages)
print(response.content_blocks)
# Tool execution loop
while response.tool_calls:
# Execute each tool call
tool_messages = []
for tool_call in response.tool_calls:
result = bash.invoke(tool_call)
tool_messages.append(result)
# Continue conversation with tool results
messages = [*messages, response, *tool_messages]
response = model_with_bash.invoke(messages)
print(response.content_blocks)
import subprocess
from anthropic.types.beta import BetaToolBash20250124Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
tool_spec = BetaToolBash20250124Param(
name="bash",
type="bash_20250124",
)
@tool(extras={"provider_tool_definition": tool_spec})
def bash(*, command: str, restart: bool = False, **kw):
"""Execute a bash command."""
if restart:
return "Bash session restarted"
result = subprocess.run(
command,
shell=True,
capture_output=True,
text=True,
)
return result.stdout + result.stderr
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[bash],
)
result = agent.invoke({"messages": [{"role": "user", "content": "List files"}]})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
bash_tool = {
"type": "bash_20250124",
"name": "bash",
}
model_with_bash = model.bind_tools([bash_tool])
response = model_with_bash.invoke(
"List all Python files in the current directory"
)
# You must handle execution of the bash command in response.tool_calls via a tool execution loop
create_agent handles the tool-execution loop automatically. response.tool_calls will contain the bash command Claude wants to run. You must execute it in your environment and pass the result back.
[{'type': 'text',
'text': "I'll list the Python files in the current directory for you."},
{'type': 'tool_call',
'name': 'bash',
'args': {'command': 'ls -la *.py'},
'id': 'toolu_01ABC123...'}]
- command (required): the bash command to execute
- restart (optional): set to true to restart the bash session
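These two parameters map directly onto a local handler. A minimal sketch of executing the args payload of a bash tool call (the helper name is hypothetical; a production version needs the sandboxing described above):

```python
import subprocess

def run_bash_tool_call(args: dict) -> str:
    """Execute the args of a Claude bash tool call locally (hypothetical helper)."""
    if args.get("restart"):
        # A real implementation would tear down and recreate the session
        return "Bash session restarted"
    result = subprocess.run(
        args["command"], shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

print(run_bash_tool_call({"command": "echo hello"}))  # hello
```

The returned string becomes the content of the ToolMessage you pass back to Claude in the agent loop.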
For an "out-of-the-box" implementation, consider ClaudeBashToolMiddleware, which provides persistent sessions, Docker isolation, output redaction, and startup/shutdown commands.
Code execution
Claude can execute code in a sandboxed environment using the server-side code execution tool. Anthropic's 2025-08-25 code execution tool is supported as of langchain-anthropic>=1.0.3; the legacy 2025-05-22 tool is supported as of langchain-anthropic>=0.3.14. The code sandbox has no internet access, so you can only use packages preinstalled in the environment. See the Claude documentation for more information.
- Anthropic types
- create_agent
- Dict
from anthropic.types.beta import BetaCodeExecutionTool20250825Param
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
# (Optional) Enable the param below to automatically
# pass back in container IDs from previous response
reuse_last_container=True,
)
code_tool = BetaCodeExecutionTool20250825Param(
name="code_execution",
type="code_execution_20250825",
)
model_with_tools = model.bind_tools([code_tool])
response = model_with_tools.invoke(
"Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
from anthropic.types.beta import BetaCodeExecutionTool20250825Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
code_tool = BetaCodeExecutionTool20250825Param(
name="code_execution",
type="code_execution_20250825",
)
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[code_tool],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "Calculate mean and std of [1,2,3,4,5]"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
)
code_tool = {"type": "code_execution_20250825", "name": "code_execution"}
model_with_tools = model.bind_tools([code_tool])
response = model_with_tools.invoke(
"Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
Use with the Files API
With the Files API, Claude can write code that accesses files for data analysis and other purposes. See the example below. Note that Claude may generate files during code execution; you can access these files using the Files API:
import anthropic
from anthropic.types.beta import BetaCodeExecutionTool20250825Param
from langchain_anthropic import ChatAnthropic
client = anthropic.Anthropic()
file = client.beta.files.upload(
file=open("/path/to/sample_data.csv", "rb")
)
file_id = file.id
# Run inference
model = ChatAnthropic(
model="claude-sonnet-4-6",
)
code_tool = BetaCodeExecutionTool20250825Param(
name="code_execution",
type="code_execution_20250825",
)
model_with_tools = model.bind_tools([code_tool])
input_message = {
"role": "user",
"content": [
{
"type": "text",
"text": "Please plot these data and tell me what you see.",
},
{
"type": "container_upload",
"file_id": file_id,
},
]
}
response = model_with_tools.invoke([input_message])
# Take all file outputs for demonstration purposes
file_ids = []
for block in response.content:
if block["type"] == "bash_code_execution_tool_result":
file_ids.extend(
content["file_id"]
for content in block.get("content", {}).get("content", [])
if "file_id" in content
)
for i, file_id in enumerate(file_ids):
file_content = client.beta.files.download(file_id)
file_content.write_to_file(f"/path/to/file_{i}.png")
Available tool versions:
- code_execution_20250522 (legacy)
- code_execution_20250825 (recommended)
Computer use
Claude supports client-side computer use, allowing interaction with a desktop environment via screenshots, mouse control, and keyboard input. Important: you must provide the execution environment. LangChain handles the API integration (sending and receiving tool calls), but you are responsible for:
- Setting up a sandboxed compute environment (Linux VM, Docker container, etc.)
- Implementing a virtual display (e.g. Xvfb)
- Executing Claude's tool calls (screenshots, mouse clicks, keyboard input)
- Passing results back to Claude in the agent loop
Requirements:
- Claude Opus 4.5, Claude 4, or Claude Sonnet 3.7
- Anthropic types
- create_agent
- Dict
import base64
from typing import Literal
from anthropic.types.beta import BetaToolComputerUse20250124Param
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage, ToolMessage
from langchain.tools import tool
DISPLAY_WIDTH = 1024
DISPLAY_HEIGHT = 768
tool_spec = BetaToolComputerUse20250124Param(
name="computer",
type="computer_20250124",
display_width_px=DISPLAY_WIDTH,
display_height_px=DISPLAY_HEIGHT,
display_number=1,
)
@tool(extras={"provider_tool_definition": tool_spec})
def computer(
*,
action: Literal[
"key", "type", "mouse_move", "left_click", "left_click_drag",
"right_click", "middle_click", "double_click", "screenshot",
"cursor_position", "scroll"
],
coordinate: list[int] | None = None,
text: str | None = None,
**kw
):
"""Control the computer display."""
if action == "screenshot":
# Take screenshot and return base64-encoded image
# Implementation depends on your display setup (e.g., Xvfb, pyautogui)
return {"type": "image", "data": "base64_screenshot_data..."}
elif action == "left_click" and coordinate:
# Execute click at coordinate
return f"Clicked at {coordinate}"
elif action == "type" and text:
# Type text
return f"Typed: {text}"
# ... implement other actions
return f"Executed {action}"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_computer = model.bind_tools([computer])
# Initial request
messages = [HumanMessage("Take a screenshot to see what's on the screen")]
response = model_with_computer.invoke(messages)
print(response.content_blocks)
# Tool execution loop
while response.tool_calls:
tool_messages = []
for tool_call in response.tool_calls:
result = computer.invoke(tool_call["args"])
tool_messages.append(
ToolMessage(content=str(result), tool_call_id=tool_call["id"])
)
messages = [*messages, response, *tool_messages]
response = model_with_computer.invoke(messages)
print(response.content_blocks)
from typing import Literal
from anthropic.types.beta import BetaToolComputerUse20250124Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
tool_spec = BetaToolComputerUse20250124Param(
name="computer",
type="computer_20250124",
display_width_px=1024,
display_height_px=768,
)
@tool(extras={"provider_tool_definition": tool_spec})
def computer(
*,
action: Literal[
"key", "type", "mouse_move", "left_click", "left_click_drag",
"right_click", "middle_click", "double_click", "screenshot",
"cursor_position", "scroll"
],
coordinate: list[int] | None = None,
text: str | None = None,
**kw
):
"""Control the computer display."""
if action == "screenshot":
return {"type": "image", "data": "base64_screenshot_data..."}
elif action == "left_click" and coordinate:
return f"Clicked at {coordinate}"
elif action == "type" and text:
return f"Typed: {text}"
return f"Executed {action}"
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[computer],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "Take a screenshot"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
computer_tool = {
"type": "computer_20250124",
"name": "computer",
"display_width_px": 1024,
"display_height_px": 768,
"display_number": 1,
}
model_with_computer = model.bind_tools([computer_tool])
response = model_with_computer.invoke(
"Take a screenshot to see what's on the screen"
)
# You must handle execution of the computer actions in response.tool_calls via a tool execution loop
create_agent handles the tool-execution loop automatically. response.tool_calls will contain the computer action Claude wants to perform. You must execute it in your environment and pass the result back.
[{'type': 'text',
'text': "I'll take a screenshot to see what's currently on the screen."},
{'type': 'tool_call',
'name': 'computer',
'args': {'action': 'screenshot'},
'id': 'toolu_01RNsqAE7dDZujELtacNeYv9'}]
Available tool versions:
- computer_20250124 (for Claude 4 and Claude Sonnet 3.7)
- computer_20251124 (for Claude Opus 4.5)
Remote MCP
Claude can make model-generated calls to remote MCP servers using the server-side MCP connector tool. Remote MCP is supported as of langchain-anthropic>=0.3.14.
- Anthropic types
- create_agent
- Dict
from anthropic.types.beta import BetaMCPToolsetParam
from langchain_anthropic import ChatAnthropic
mcp_servers = [
{
"type": "url",
"url": "https://docs.langchain.com/mcp",
"name": "LangChain Docs",
}
]
model = ChatAnthropic(
model="claude-sonnet-4-6",
mcp_servers=mcp_servers,
)
mcp_tool = BetaMCPToolsetParam(
type="mcp_toolset",
mcp_server_name="LangChain Docs",
)
response = model.invoke(
"What are LangChain content blocks?",
tools=[mcp_tool],
)
from anthropic.types.beta import BetaMCPToolsetParam
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
mcp_servers = [
{
"type": "url",
"url": "https://docs.langchain.com/mcp",
"name": "LangChain Docs",
}
]
mcp_tool = BetaMCPToolsetParam(
type="mcp_toolset",
mcp_server_name="LangChain Docs",
)
agent = create_agent(
model=ChatAnthropic(
model="claude-sonnet-4-6",
mcp_servers=mcp_servers,
),
tools=[mcp_tool],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "What are LangChain content blocks?"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
mcp_servers = [
{
"type": "url",
"url": "https://docs.langchain.com/mcp",
"name": "LangChain Docs",
# "tool_configuration": { # optional configuration
# "enabled": True,
# "allowed_tools": ["ask_question"],
# },
# "authorization_token": "PLACEHOLDER", # optional authorization if needed
}
]
model = ChatAnthropic(
model="claude-sonnet-4-6",
mcp_servers=mcp_servers,
)
response = model.invoke(
"What are LangChain content blocks?",
tools=[{"type": "mcp_toolset", "mcp_server_name": "LangChain Docs"}],
)
response.content_blocks
Text editor
Claude supports a client-side text editor tool that can be used to view and modify local text files. See the documentation here for details.
- Anthropic types
- create_agent
- Dict
from typing import Literal
from anthropic.types.beta import BetaToolTextEditor20250728Param
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage, ToolMessage
from langchain.tools import tool
tool_spec = BetaToolTextEditor20250728Param(
name="str_replace_based_edit_tool",
type="text_editor_20250728",
)
# Simple in-memory file storage for demonstration
files: dict[str, str] = {
"/workspace/primes.py": "def is_prime(n):\n if n < 2\n return False\n return True"
}
@tool(extras={"provider_tool_definition": tool_spec})
def str_replace_based_edit_tool(
*,
command: Literal["view", "create", "str_replace", "insert", "undo_edit"],
path: str,
file_text: str | None = None,
old_str: str | None = None,
new_str: str | None = None,
insert_line: int | None = None,
view_range: list[int] | None = None,
**kw
):
"""View and edit text files."""
if command == "view":
if path not in files:
return f"Error: File {path} not found"
content = files[path]
if view_range:
lines = content.splitlines()
start, end = view_range[0] - 1, view_range[1]
return "\n".join(lines[start:end])
return content
elif command == "create":
files[path] = file_text or ""
return f"Created {path}"
elif command == "str_replace" and old_str is not None:
if path not in files:
return f"Error: File {path} not found"
files[path] = files[path].replace(old_str, new_str or "", 1)
return f"Replaced in {path}"
# ... implement other commands
return f"Executed {command} on {path}"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_tools = model.bind_tools([str_replace_based_edit_tool])
# Initial request
messages = [HumanMessage("There's a syntax error in my primes.py file. Can you fix it?")]
response = model_with_tools.invoke(messages)
print(response.content_blocks)
# Tool execution loop
while response.tool_calls:
tool_messages = []
for tool_call in response.tool_calls:
result = str_replace_based_edit_tool.invoke(tool_call["args"])
tool_messages.append(
ToolMessage(content=result, tool_call_id=tool_call["id"])
)
messages = [*messages, response, *tool_messages]
response = model_with_tools.invoke(messages)
print(response.content_blocks)
from typing import Literal
from anthropic.types.beta import BetaToolTextEditor20250728Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
# Simple in-memory file storage
files: dict[str, str] = {
"/workspace/primes.py": "def is_prime(n):\n if n < 2\n return False\n return True"
}
tool_spec = BetaToolTextEditor20250728Param(
name="str_replace_based_edit_tool",
type="text_editor_20250728",
)
@tool(extras={"provider_tool_definition": tool_spec})
def str_replace_based_edit_tool(
*,
command: Literal["view", "create", "str_replace", "insert", "undo_edit"],
path: str,
file_text: str | None = None,
old_str: str | None = None,
new_str: str | None = None,
**kw
):
"""View and edit text files."""
if command == "view":
return files.get(path, f"Error: File {path} not found")
elif command == "create":
files[path] = file_text or ""
return f"Created {path}"
elif command == "str_replace" and old_str is not None:
if path not in files:
return f"Error: File {path} not found"
files[path] = files[path].replace(old_str, new_str or "", 1)
return f"Replaced in {path}"
return f"Executed {command} on {path}"
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[str_replace_based_edit_tool],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "Fix the syntax error in /workspace/primes.py"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
editor_tool = {"type": "text_editor_20250728", "name": "str_replace_based_edit_tool"}
model_with_tools = model.bind_tools([editor_tool])
response = model_with_tools.invoke(
"There's a syntax error in my primes.py file. Can you help me fix it?"
)
# You must handle execution of the text editor commands in response.tool_calls via a tool execution loop
create_agent handles the tool-execution loop automatically.
[{'name': 'str_replace_based_edit_tool',
'args': {'command': 'view', 'path': '/root'},
'id': 'toolu_011BG5RbqnfBYkD8qQonS9k9',
'type': 'tool_call'}]
Available tool versions:
- text_editor_20250124 (legacy)
- text_editor_20250728 (recommended)
For an "out-of-the-box" implementation, consider StateClaudeTextEditorMiddleware or FilesystemClaudeTextEditorMiddleware, which provide LangGraph state integration or filesystem persistence, path validation, and other features.
Web fetch
Claude can use the server-side web fetch tool to retrieve the full content of specified web pages and PDF documents and ground its responses with citations.
- Anthropic types
- create_agent
- Dict
from anthropic.types.beta import BetaWebFetchTool20250910Param
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
fetch_tool = BetaWebFetchTool20250910Param(
name="web_fetch",
type="web_fetch_20250910",
max_uses=3,
)
model_with_tools = model.bind_tools([fetch_tool])
response = model_with_tools.invoke(
"Please analyze the content at https://docs.langchain.com/"
)
from anthropic.types.beta import BetaWebFetchTool20250910Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
fetch_tool = BetaWebFetchTool20250910Param(
name="web_fetch",
type="web_fetch_20250910",
max_uses=3,
)
agent = create_agent(
model=ChatAnthropic(model="claude-haiku-4-5-20251001"),
tools=[fetch_tool],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "Analyze https://docs.langchain.com/"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
fetch_tool = {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 3}
model_with_tools = model.bind_tools([fetch_tool])
response = model_with_tools.invoke(
"Please analyze the content at https://docs.langchain.com/"
)
Web search
Claude can run searches and ground its responses with citations using the server-side web search tool. The web search tool is supported as of langchain-anthropic>=0.3.13.
- Anthropic types
- create_agent
- Dict
from anthropic.types.beta import BetaWebSearchTool20250305Param
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
search_tool = BetaWebSearchTool20250305Param(
name="web_search",
type="web_search_20250305",
max_uses=3,
)
model_with_tools = model.bind_tools([search_tool])
response = model_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
from anthropic.types.beta import BetaWebSearchTool20250305Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
search_tool = BetaWebSearchTool20250305Param(
name="web_search",
type="web_search_20250305",
max_uses=3,
)
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[search_tool],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "How do I update a web app to TypeScript 5.5?"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-6")
search_tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
model_with_tools = model.bind_tools([search_tool])
response = model_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
Memory tool
Claude supports a memory tool for client-side storage and retrieval of context across conversation threads. See the documentation here for details. Anthropic's built-in memory tool is supported as of langchain-anthropic>=0.3.21.
- Anthropic types
- create_agent
- Dict
from typing import Literal
from anthropic.types.beta import BetaMemoryTool20250818Param
from langchain_anthropic import ChatAnthropic
from langchain.messages import HumanMessage, ToolMessage
from langchain.tools import tool
tool_spec = BetaMemoryTool20250818Param(
name="memory",
type="memory_20250818",
)
# Simple in-memory storage for demonstration purposes
memory_store: dict[str, str] = {
"/memories/interests": "User enjoys Python programming and hiking"
}
@tool(extras={"provider_tool_definition": tool_spec})
def memory(
*,
command: Literal["view", "create", "str_replace", "insert", "delete", "rename"],
path: str,
content: str | None = None,
old_str: str | None = None,
new_str: str | None = None,
insert_line: int | None = None,
new_path: str | None = None,
**kw,
):
"""Manage persistent memory across conversations."""
if command == "view":
if path == "/memories":
# List all memories
return "\n".join(memory_store.keys()) or "No memories stored"
return memory_store.get(path, f"No memory at {path}")
elif command == "create":
memory_store[path] = content or ""
return f"Created memory at {path}"
elif command == "str_replace" and old_str is not None:
if path in memory_store:
memory_store[path] = memory_store[path].replace(old_str, new_str or "", 1)
return f"Updated {path}"
elif command == "delete":
memory_store.pop(path, None)
return f"Deleted {path}"
# ... implement other commands
return f"Executed {command} on {path}"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_tools = model.bind_tools([memory])
# Initial request
messages = [HumanMessage("What are my interests?")]
response = model_with_tools.invoke(messages)
print(response.content_blocks)
# Tool execution loop
while response.tool_calls:
tool_messages = []
for tool_call in response.tool_calls:
result = memory.invoke(tool_call["args"])
tool_messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))
messages = [*messages, response, *tool_messages]
response = model_with_tools.invoke(messages)
print(response.content_blocks)
[{'type': 'text',
'text': "I'll check my memory to see what information I have about your interests."},
{'type': 'tool_call',
'name': 'memory',
'args': {'command': 'view', 'path': '/memories'},
'id': 'toolu_01XeP9sxx44rcZHFNqXSaKqh'}]
from typing import Literal
from anthropic.types.beta import BetaMemoryTool20250818Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
# Simple in-memory storage
memory_store: dict[str, str] = {
"/memories/interests": "User enjoys Python programming and hiking"
}
tool_spec = BetaMemoryTool20250818Param(
name="memory",
type="memory_20250818",
)
@tool(extras={"provider_tool_definition": tool_spec})
def memory(
*,
command: Literal["view", "create", "str_replace", "insert", "delete", "rename"],
path: str,
content: str | None = None,
old_str: str | None = None,
new_str: str | None = None,
**kw
):
"""Manage persistent memory across conversations."""
if command == "view":
if path == "/memories":
return "\n".join(memory_store.keys()) or "No memories stored"
return memory_store.get(path, f"No memory at {path}")
elif command == "create":
memory_store[path] = content or ""
return f"Created memory at {path}"
elif command == "str_replace" and old_str is not None:
if path in memory_store:
memory_store[path] = memory_store[path].replace(old_str, new_str or "", 1)
return f"Updated {path}"
elif command == "delete":
memory_store.pop(path, None)
return f"Deleted {path}"
return f"Executed {command} on {path}"
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[memory],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "What are my interests?"}]
})
for message in result["messages"]:
message.pretty_print()
create_agent handles the tool-execution loop automatically.
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-6",
)
model_with_tools = model.bind_tools([{"type": "memory_20250818", "name": "memory"}])
response = model_with_tools.invoke("What are my interests?")
response.content_blocks
# You must handle execution of the memory commands in response.tool_calls via a tool execution loop
[{'type': 'text',
'text': "I'll check my memory to see what information I have about your interests."},
{'type': 'tool_call',
'name': 'memory',
'args': {'command': 'view', 'path': '/memories'},
'id': 'toolu_01XeP9sxx44rcZHFNqXSaKqh'}]
For an "out-of-the-box" implementation, consider StateClaudeMemoryMiddleware or FilesystemClaudeMemoryMiddleware, which provide LangGraph state integration or filesystem persistence, automatic system-prompt injection, and other features.
Tool search
Claude supports server-side tool search, which enables dynamic tool discovery and loading. Instead of loading every tool definition into the context window upfront, Claude can search your tool catalog and load only the tools it needs. This is useful when:
- Your system has more than 10 tools
- Tool definitions consume a significant number of tokens
- You see tool-selection accuracy problems on large tool sets
Two variants are available:
- Regex (tool_search_tool_regex_20251119): Claude constructs regular-expression patterns to search for tools
- BM25 (tool_search_tool_bm25_20251119): Claude searches for tools with natural-language queries
Specify defer_loading via the extras parameter on LangChain tools:
- Anthropic types
- create_agent
- Dict
from anthropic.types.beta import BetaToolSearchToolRegex20251119Param
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
@tool(extras={"defer_loading": True})
def get_weather(location: str, unit: str = "fahrenheit") -> str:
"""Get the current weather for a location.
Args:
location: City name
unit: Temperature unit (celsius or fahrenheit)
"""
return f"Weather in {location}: Sunny"
@tool(extras={"defer_loading": True})
def search_files(query: str) -> str:
"""Search through files in the workspace.
Args:
query: Search query
"""
return f"Found files matching '{query}'"
model = ChatAnthropic(model="claude-sonnet-4-6")
tool_search = BetaToolSearchToolRegex20251119Param(
name="tool_search_tool_regex",
type="tool_search_tool_regex_20251119",
)
model_with_tools = model.bind_tools([
tool_search,
get_weather,
search_files,
])
response = model_with_tools.invoke("What's the weather in San Francisco?")
from anthropic.types.beta import BetaToolSearchToolRegex20251119Param
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
tool_search = BetaToolSearchToolRegex20251119Param(
name="tool_search_tool_regex",
type="tool_search_tool_regex_20251119",
)
@tool(extras={"defer_loading": True})
def get_weather(location: str, unit: str = "fahrenheit") -> str:
"""Get the current weather for a location.
Args:
location: City name
unit: Temperature unit (celsius or fahrenheit)
"""
return f"Weather in {location}: Sunny"
@tool(extras={"defer_loading": True})
def search_files(query: str) -> str:
"""Search through files in the workspace.
Args:
query: Search query
"""
return f"Found files matching '{query}'"
agent = create_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
tools=[
tool_search,
get_weather,
search_files,
],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]
})
for message in result["messages"]:
message.pretty_print()
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
@tool(extras={"defer_loading": True})
def get_weather(location: str, unit: str = "fahrenheit") -> str:
"""Get the current weather for a location.
Args:
location: City name
unit: Temperature unit (celsius or fahrenheit)
"""
return f"Weather in {location}: Sunny"
@tool(extras={"defer_loading": True})
def search_files(query: str) -> str:
"""Search through files in the workspace.
Args:
query: Search query
"""
return f"Found files matching '{query}'"
model = ChatAnthropic(model="claude-sonnet-4-6")
model_with_tools = model.bind_tools([
{"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
get_weather,
search_files,
])
response = model_with_tools.invoke("What's the weather in San Francisco?")
- Tools with defer_loading: True are loaded only when Claude discovers them via search
- Keep your 3-5 most frequently used tools non-deferred for best performance
- Both variants search tool names, descriptions, argument names, and argument descriptions
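Conceptually, the discovery step is a search over those fields. A minimal local sketch of the regex variant's behavior (the catalog and helper are illustrative, not part of the Anthropic API):

```python
import re

# Illustrative catalog of deferred tools: name -> description
TOOL_CATALOG = {
    "get_weather": "Get the current weather for a location.",
    "search_files": "Search through files in the workspace.",
    "send_email": "Send an email to a recipient.",
}

def discover_tools(pattern: str) -> list[str]:
    """Return names of tools whose name or description matches the regex."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [
        name for name, desc in TOOL_CATALOG.items()
        if rx.search(name) or rx.search(desc)
    ]

print(discover_tools(r"weather"))  # ['get_weather']
```

In the real feature, Claude issues patterns like this server-side and only the matching tools' full definitions are loaded into context.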
Response metadata
ai_msg = model.invoke(messages)
ai_msg.response_metadata
{
"id": "msg_013xU6FHEGEq76aP4RgFerVT",
"model": "claude-sonnet-4-6",
"stop_reason": "end_turn",
"stop_sequence": None,
"usage": {"input_tokens": 25, "output_tokens": 11},
}
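These fields can be inspected directly; for instance, stop_reason tells you whether generation ended naturally or was cut off. A small sketch using the metadata values from the example output above:

```python
# response_metadata values copied from the example output above
metadata = {
    "id": "msg_013xU6FHEGEq76aP4RgFerVT",
    "model": "claude-sonnet-4-6",
    "stop_reason": "end_turn",
    "stop_sequence": None,
    "usage": {"input_tokens": 25, "output_tokens": 11},
}

if metadata["stop_reason"] == "max_tokens":
    print("Response was truncated; consider raising max_tokens")
else:
    usage = metadata["usage"]
    print(usage["input_tokens"] + usage["output_tokens"])  # 36
```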
Token usage metadata
ai_msg = model.invoke(messages)
ai_msg.usage_metadata
{"input_tokens": 25, "output_tokens": 11, "total_tokens": 36}
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full.usage_metadata
{"input_tokens": 25, "output_tokens": 11, "total_tokens": 36}
You can disable this by setting stream_usage=False when instantiating ChatAnthropic.
API reference
For detailed documentation of all features and configuration options, see the ChatAnthropic API reference.