create_deep_agent has the following core configuration options:
create_deep_agent(
model: str | BaseChatModel | None = None,
tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
*,
system_prompt: str | SystemMessage | None = None,
middleware: Sequence[AgentMiddleware] = (),
subagents: Sequence[SubAgent | CompiledSubAgent | AsyncSubAgent] | None = None,
skills: list[str] | None = None,
memory: list[str] | None = None,
response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
backend: BackendProtocol | BackendFactory | None = None,
interrupt_on: dict[str, bool | InterruptOnConfig] | None = None,
...
) -> CompiledStateGraph
See the create_deep_agent API reference.
Model
Pass a model string in provider:model format, or an initialized model instance. Defaults to anthropic:claude-sonnet-4-6. See supported models for all providers, and recommended models for tested recommendations.
Use the provider:model format (e.g. openai:gpt-5) to switch models quickly.
- OpenAI
- Anthropic
- Azure
- Google Gemini
- AWS Bedrock
- HuggingFace
- Other
👉 Read the OpenAI chat model integration docs
pip install -U "langchain[openai]"
import os
from deepagents import create_deep_agent
os.environ["OPENAI_API_KEY"] = "sk-..."
agent = create_deep_agent(model="openai:gpt-5.4")
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
👉 Read the Anthropic chat model integration docs
pip install -U "langchain[anthropic]"
import os
from deepagents import create_deep_agent
os.environ["ANTHROPIC_API_KEY"] = "sk-..."
agent = create_deep_agent(model="anthropic:claude-sonnet-4-6")
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
👉 Read the Azure chat model integration docs
pip install -U "langchain[openai]"
import os
from deepagents import create_deep_agent
os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"
agent = create_deep_agent(model="azure_openai:gpt-5.4")
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
👉 Read the Google GenAI chat model integration docs
pip install -U "langchain[google-genai]"
import os
from deepagents import create_deep_agent
os.environ["GOOGLE_API_KEY"] = "..."
agent = create_deep_agent(model="google_genai:gemini-3.1-pro-preview")
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
👉 Read the AWS Bedrock chat model integration docs
pip install -U "langchain[aws]"
from deepagents import create_deep_agent
# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html
agent = create_deep_agent(
model="anthropic.claude-sonnet-4-6",
model_provider="bedrock_converse",
)
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
👉 Read the HuggingFace chat model integration docs
pip install -U "langchain[huggingface]"
import os
from deepagents import create_deep_agent
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."
agent = create_deep_agent(
model="microsoft/Phi-3-mini-4k-instruct",
model_provider="huggingface",
temperature=0.7,
max_tokens=1024,
)
# This calls init_chat_model for the given model with default parameters
# To set model-specific parameters, call init_chat_model directly
Pass any supported model string, or an already-initialized model instance:
from deepagents import create_deep_agent
agent = create_deep_agent(model="provider:model-name")
Connection resilience
LangChain chat models automatically retry failed API requests with exponential backoff. By default, models retry up to 6 times on network errors, rate limits (429), and server errors (5xx). Client errors such as 401 (unauthorized) or 404 are not retried. You can tune this behavior for your environment with the max_retries parameter when creating the model:
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent
agent = create_deep_agent(
model=init_chat_model(
model="google_genai:gemini-3.1-pro-preview",
        max_retries=10,  # Increase for unreliable networks (default: 6)
        timeout=120,  # Increase the timeout for slow connections
),
)
For long-running agent tasks on unreliable networks, consider increasing max_retries to 10–15 and pairing it with a checkpointer so progress is preserved across failures.
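A minimal sketch of that pairing (the thread_id and MemorySaver here are illustrative; use a durable checkpointer in production):
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import MemorySaver
from deepagents import create_deep_agent

agent = create_deep_agent(
    model=init_chat_model(
        model="google_genai:gemini-3.1-pro-preview",
        max_retries=12,  # Within the 10-15 range suggested for unreliable networks
        timeout=120,
    ),
    checkpointer=MemorySaver(),  # Keeps per-thread progress across transient failures
)
# Invoking with a thread_id lets the run pick up from the last checkpoint
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize your capabilities"}]},
    config={"configurable": {"thread_id": "resilient-run-1"}},
)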
Tools
In addition to the built-in tools for planning, file management, and subagent spawning, you can provide custom tools:
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
def internet_search(
query: str,
max_results: int = 5,
topic: Literal["general", "news", "finance"] = "general",
include_raw_content: bool = False,
):
"""运行网络搜索"""
return tavily_client.search(
query,
max_results=max_results,
include_raw_content=include_raw_content,
topic=topic,
)
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
tools=[internet_search]
)
System prompt
Deep Agents ship with a built-in system prompt. The default system prompt contains detailed instructions for using the built-in planning tools, filesystem tools, and subagents. When middleware adds special tools (such as the filesystem tools), it appends them to the system prompt. Each deep agent should also include a custom system prompt for its specific use case:
from deepagents import create_deep_agent
research_instructions = """\
You are an expert researcher. Your job is to conduct thorough research, \
and then write a polished report.\
"""
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
system_prompt=research_instructions,
)
Middleware
By default, Deep Agents have access to the following middleware:
- TodoListMiddleware: tracks and manages a todo list to organize the agent's tasks and work
- FilesystemMiddleware: handles filesystem operations such as reading, writing, and navigating directories
- SubAgentMiddleware: spawns and coordinates subagents to delegate tasks to specialized agents
- SummarizationMiddleware: compresses message history to stay within context limits as conversations grow
- AnthropicPromptCachingMiddleware: automatically reduces redundant token processing when using Anthropic models
- PatchToolCallsMiddleware: automatically repairs message history when tool calls are interrupted or canceled before their results arrive
- MemoryMiddleware: persists and retrieves conversation context across sessions when the memory parameter is provided
- SkillsMiddleware: enables custom skills when the skills parameter is provided
- HumanInTheLoopMiddleware: pauses at specified points for human approval or input when the interrupt_on parameter is provided
Prebuilt middleware
LangChain provides additional prebuilt middleware that lets you add capabilities such as retries, fallbacks, or PII detection. See prebuilt middleware for more. The deepagents library also provides create_summarization_tool_middleware, which lets the agent trigger summarization at opportune moments (for example, between tasks) rather than at fixed token intervals. For more details, see summarization.
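For illustration, a sketch of wiring that in. The import path and the zero-argument call to create_summarization_tool_middleware are assumptions here; check the summarization docs for the actual signature:
from deepagents import create_deep_agent, create_summarization_tool_middleware

# Assumption: the factory can be called with defaults and returns middleware that
# exposes a summarization tool the agent can invoke between tasks.
agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    middleware=[create_summarization_tool_middleware()],
)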
Provider-specific middleware
For provider-specific middleware optimized for particular LLM providers, see official integrations and community integrations.
Custom middleware
You can provide additional middleware to extend functionality, add tools, or implement custom hooks:
from langchain.tools import tool
from langchain.agents.middleware import wrap_tool_call
from deepagents import create_deep_agent
@tool
def get_weather(city: str) -> str:
"""获取城市的天气。"""
return f"The weather in {city} is sunny."
call_count = [0]  # Use a list so the nested function can modify it
@wrap_tool_call
def log_tool_calls(request, handler):
"""拦截并记录每个工具调用 - 演示横切关注点。"""
call_count[0] += 1
tool_name = request.name if hasattr(request, 'name') else str(request)
print(f"[Middleware] Tool call #{call_count[0]}: {tool_name}")
print(f"[Middleware] Arguments: {request.args if hasattr(request, 'args') else 'N/A'}")
    # Execute the tool call
result = handler(request)
    # Log the result
print(f"[Middleware] Tool call #{call_count[0]} completed")
return result
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
tools=[get_weather],
middleware=[log_tool_calls],
)
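For illustration, a quick invocation (the prompt is arbitrary) shows the middleware firing on every tool call, so the [Middleware] lines print alongside the normal run:
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
print(result["messages"][-1].content)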
Don't mutate attributes after initialization
If you need to track values across hook invocations (for example, counters or accumulated data), use graph state. Graph state is thread-scoped by design, so updates are safe under concurrency.
Do this:
class CustomMiddleware(AgentMiddleware):
    def __init__(self):
        pass
    def before_agent(self, state, runtime):
        return {"x": state.get("x", 0) + 1}  # Update graph state instead
Don't do this:
class CustomMiddleware(AgentMiddleware):
    def __init__(self):
        self.x = 1
    def before_agent(self, state, runtime):
        self.x += 1  # Mutation causes race conditions
Mutating in place (for example, modifying self.x in before_agent or changing other shared values inside hooks) can lead to subtle bugs and race conditions, because many operations run concurrently: subagents, parallel tools, and parallel invocations on different threads. For full details on extending state with custom attributes, see custom middleware - custom state schema.
If you must use mutation in custom middleware, think through what happens when subagents, parallel tools, or concurrent agent invocations run at the same time.
Subagents
To isolate detailed work and avoid context bloat, use subagents:
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
def internet_search(
query: str,
max_results: int = 5,
topic: Literal["general", "news", "finance"] = "general",
include_raw_content: bool = False,
):
"""运行网络搜索"""
return tavily_client.search(
query,
max_results=max_results,
include_raw_content=include_raw_content,
topic=topic,
)
research_subagent = {
"name": "research-agent",
"description": "用于更深入地研究问题",
"system_prompt": "你是一位出色的研究员",
"tools": [internet_search],
"model": "openai:gpt-5.2", # 可选覆盖,默认使用主代理模型
}
subagents = [research_subagent]
agent = create_deep_agent(
model="claude-sonnet-4-6",
subagents=subagents
)
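A usage sketch (the prompt is illustrative): the main agent can now delegate research questions to research-agent, keeping the detailed search output out of its own context:
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Research recent developments in battery recycling and summarize the key findings"}]}
)
print(result["messages"][-1].content)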
Backends
Deep agent tools can use a virtual filesystem to store, access, and edit files. By default, Deep Agents use the StateBackend.
If you are using skills or memory, you must add the expected skill or memory files to the backend before creating the agent.
- StateBackend
- FilesystemBackend
- LocalShellBackend
- StoreBackend
- CompositeBackend
An ephemeral filesystem backend stored in langgraph state. This filesystem persists only within a single thread.
# By default, we provide a StateBackend
agent = create_deep_agent(model="google_genai:gemini-3.1-pro-preview")
# Under the hood, it looks like this
from deepagents.backends import StateBackend
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=StateBackend()
)
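A small usage sketch: with the StateBackend, files the agent writes live in the thread's graph state and come back under the files key of the result (assuming the agent decides to write a file for this prompt):
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Write a short haiku to /haiku.txt"}]}
)
print(result.get("files", {}).keys())  # e.g. dict_keys(['/haiku.txt'])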
The local machine's filesystem.
This backend grants the agent direct filesystem read/write access.
Use with caution, and only in appropriate environments.
For more information, see FilesystemBackend.
from deepagents.backends import FilesystemBackend
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=FilesystemBackend(root_dir=".", virtual_mode=True)
)
A filesystem with shell execution directly on the host machine. Provides the filesystem tools plus an execute tool for running commands. This backend grants the agent direct filesystem read/write access as well as unrestricted shell execution on the host.
Use with extreme caution, and only in appropriate environments.
For more information, see LocalShellBackend.
from deepagents.backends import LocalShellBackend
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=LocalShellBackend(root_dir=".", env={"PATH": "/usr/bin:/bin"})
)
A filesystem backed by long-term storage that persists across threads.
from langgraph.store.memory import InMemoryStore
from deepagents.backends import StoreBackend
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=StoreBackend(
namespace=lambda ctx: (ctx.runtime.context.user_id,),
),
store=InMemoryStore() # Good for local dev; omit for LangSmith Deployment
)
When deploying to LangSmith Deployment, omit the store parameter. The platform automatically provisions a store for your agent.
The namespace parameter controls data isolation. For multi-user deployments, always set a namespace factory to isolate data per user or tenant.
A flexible backend that lets you route different paths in the filesystem to different backends.
from deepagents import create_deep_agent
from deepagents.backends import CompositeBackend, StateBackend, StoreBackend
from langgraph.store.memory import InMemoryStore
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=CompositeBackend(
default=StateBackend(),
routes={
"/memories/": StoreBackend(),
}
),
store=InMemoryStore() # Store passed to create_deep_agent, not backend
)
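With this routing, anything the agent writes under /memories/ lands in the store and survives across threads, while every other path stays in the thread's state. A quick sketch (the prompt is hypothetical, for illustration):
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Save my preferred writing tone to /memories/style.md and draft notes to /scratch/notes.md"}]},
    config={"configurable": {"thread_id": "composite-demo"}},
)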
Sandboxes
Sandboxes are specialized backends that run agent code in an isolated environment, with their own filesystem and an execute tool for shell commands.
Use a sandbox backend when you want the deep agent to write files, install dependencies, and run commands without changing anything on your local machine.
Configure a sandbox by passing a sandbox backend to backend when creating the deep agent:
- Modal
- Runloop
- Daytona
- LangSmith
pip install langchain-modal
import modal
from deepagents import create_deep_agent
from langchain_anthropic import ChatAnthropic
from langchain_modal import ModalSandbox
app = modal.App.lookup("your-app")
modal_sandbox = modal.Sandbox.create(app=app)
backend = ModalSandbox(sandbox=modal_sandbox)
agent = create_deep_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
system_prompt="You are a Python coding assistant with sandbox access.",
backend=backend,
)
try:
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Create a small Python package and run pytest",
}
]
}
)
finally:
modal_sandbox.terminate()
pip install langchain-runloop
import os
from deepagents import create_deep_agent
from langchain_anthropic import ChatAnthropic
from langchain_runloop import RunloopSandbox
from runloop_api_client import RunloopSDK
client = RunloopSDK(bearer_token=os.environ["RUNLOOP_API_KEY"])
devbox = client.devbox.create()
backend = RunloopSandbox(devbox=devbox)
agent = create_deep_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
system_prompt="You are a Python coding assistant with sandbox access.",
backend=backend,
)
try:
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Create a small Python package and run pytest",
}
]
}
)
finally:
devbox.shutdown()
pip install langchain-daytona
from daytona import Daytona
from deepagents import create_deep_agent
from langchain_anthropic import ChatAnthropic
from langchain_daytona import DaytonaSandbox
sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)
agent = create_deep_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
system_prompt="You are a Python coding assistant with sandbox access.",
backend=backend,
)
try:
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Create a small Python package and run pytest",
}
]
}
)
finally:
sandbox.stop()
LangSmith sandboxes are currently in private beta.
pip install "langsmith[sandbox]"
from deepagents import create_deep_agent
from deepagents.backends import LangSmithSandbox
from langchain_anthropic import ChatAnthropic
from langsmith.sandbox import SandboxClient
client = SandboxClient()
ls_sandbox = client.create_sandbox(template_name="my-template")
backend = LangSmithSandbox(sandbox=ls_sandbox)
agent = create_deep_agent(
model=ChatAnthropic(model="claude-sonnet-4-6"),
system_prompt="You are a Python coding assistant with sandbox access.",
backend=backend,
)
try:
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Create a small Python package and run pytest",
}
]
}
)
finally:
client.delete_sandbox(ls_sandbox.name)
Human-in-the-loop
Some tool operations are sensitive and require human approval before execution. You can configure approval per tool:
from langchain.tools import tool
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver
@tool
def delete_file(path: str) -> str:
"""Delete a file from the filesystem."""
return f"Deleted {path}"
@tool
def read_file(path: str) -> str:
"""Read a file from the filesystem."""
return f"Contents of {path}"
@tool
def send_email(to: str, subject: str, body: str) -> str:
"""Send an email."""
return f"Sent email to {to}"
# Checkpointer is REQUIRED for human-in-the-loop
checkpointer = MemorySaver()
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
tools=[delete_file, read_file, send_email],
interrupt_on={
"delete_file": True, # Default: approve, edit, reject
"read_file": False, # No interrupts needed
"send_email": {"allowed_decisions": ["approve", "reject"]}, # No editing
},
checkpointer=checkpointer # Required!
)
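A sketch of the approval flow. The thread_id is illustrative, and the exact decision payload accepted by Command(resume=...) may differ from what is shown here; consult the human-in-the-loop docs for the precise schema:
from langgraph.types import Command

config = {"configurable": {"thread_id": "hitl-demo"}}

# The run pauses before executing delete_file and surfaces the pending call
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Delete /tmp/old_report.txt"}]},
    config=config,
)
print(result["__interrupt__"])  # Details of the tool call awaiting review

# Resume on the same thread with a decision (here: approve)
result = agent.invoke(Command(resume=[{"type": "approve"}]), config=config)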
Skills
You can use skills to give a deep agent new capabilities and specialized knowledge. While tools tend to cover low-level functionality (such as raw filesystem actions or planning), skills can contain detailed instructions on how to accomplish a task, reference information, and other assets such as templates. The agent only loads these files when it determines a skill is useful for the current prompt. This progressive disclosure reduces the number of tokens and the amount of context the agent has to consider at startup. For example skills, see the Deep Agent example skills. To add skills to your deep agent, pass them as an argument to create_deep_agent:
- StateBackend
- StoreBackend
- FilesystemBackend
from urllib.request import urlopen
from deepagents import create_deep_agent
from deepagents.backends.utils import create_file_data
from langgraph.checkpoint.memory import MemorySaver
checkpointer = MemorySaver()
skill_url = "https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/examples/skills/langgraph-docs/SKILL.md"
with urlopen(skill_url) as response:
skill_content = response.read().decode('utf-8')
skills_files = {
"/skills/langgraph-docs/SKILL.md": create_file_data(skill_content)
}
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
skills=["/skills/"],
checkpointer=checkpointer,
)
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "What is langgraph?",
}
],
# Seed the default StateBackend's in-state filesystem (virtual paths must start with "/").
"files": skills_files
},
config={"configurable": {"thread_id": "12345"}},
)
from urllib.request import urlopen
from deepagents import create_deep_agent
from deepagents.backends import StoreBackend
from deepagents.backends.utils import create_file_data
from langgraph.store.memory import InMemoryStore
store = InMemoryStore()
skill_url = "https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/libs/cli/examples/skills/langgraph-docs/SKILL.md"
with urlopen(skill_url) as response:
skill_content = response.read().decode('utf-8')
store.put(
namespace=("filesystem",),
key="/skills/langgraph-docs/SKILL.md",
value=create_file_data(skill_content)
)
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=StoreBackend(),
store=store,
skills=["/skills/"]
)
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "What is langgraph?",
}
]
},
config={"configurable": {"thread_id": "12345"}},
)
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver
from deepagents.backends.filesystem import FilesystemBackend
# Checkpointer is REQUIRED for human-in-the-loop
checkpointer = MemorySaver()
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=FilesystemBackend(root_dir="/Users/user/{project}"),
skills=["/Users/user/{project}/skills/"],
interrupt_on={
"write_file": True, # Default: approve, edit, reject
"read_file": False, # No interrupts needed
"edit_file": True # Default: approve, edit, reject
},
checkpointer=checkpointer, # Required!
)
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "What is langgraph?",
}
]
},
config={"configurable": {"thread_id": "12345"}},
)
Memory
Use AGENTS.md files to provide a deep agent with additional context.
When creating a deep agent, you can pass one or more file paths to the memory parameter:
- StateBackend
- StoreBackend
- FilesystemBackend
from urllib.request import urlopen
from deepagents import create_deep_agent
from deepagents.backends.utils import create_file_data
from langgraph.checkpoint.memory import MemorySaver
with urlopen("https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/examples/text-to-sql-agent/AGENTS.md") as response:
agents_md = response.read().decode("utf-8")
checkpointer = MemorySaver()
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
memory=[
"/AGENTS.md"
],
checkpointer=checkpointer,
)
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Please tell me what's in your memory files.",
}
],
        # Seed the default StateBackend's in-state filesystem (virtual paths must start with "/").
"files": {"/AGENTS.md": create_file_data(agents_md)},
},
config={"configurable": {"thread_id": "123456"}},
)
from urllib.request import urlopen
from deepagents import create_deep_agent
from deepagents.backends import StoreBackend
from deepagents.backends.utils import create_file_data
from langgraph.store.memory import InMemoryStore
with urlopen("https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/examples/text-to-sql-agent/AGENTS.md") as response:
agents_md = response.read().decode("utf-8")
# Create the store and add the file to it
store = InMemoryStore()
file_data = create_file_data(agents_md)
store.put(
namespace=("filesystem",),
key="/AGENTS.md",
value=file_data
)
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=StoreBackend(),
store=store,
memory=[
"/AGENTS.md"
]
)
result = agent.invoke(
{
"messages": [
{
"role": "user",
"content": "Please tell me what's in your memory files.",
}
],
"files": {"/AGENTS.md": create_file_data(agents_md)},
},
config={"configurable": {"thread_id": "12345"}},
)
from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend
from langgraph.checkpoint.memory import MemorySaver
# Checkpointer is REQUIRED for human-in-the-loop
checkpointer = MemorySaver()
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
backend=FilesystemBackend(root_dir="/Users/user/{project}"),
memory=[
"./AGENTS.md"
],
interrupt_on={
"write_file": True, # 默认:批准、编辑、拒绝
"read_file": False, # 不需要中断
"edit_file": True # 默认:批准、编辑、拒绝
},
checkpointer=checkpointer, # 必需!
)
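For completeness, a usage sketch mirroring the other tabs; here the memory file is read from the local directory through the FilesystemBackend:
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Please tell me what's in your memory files.",
            }
        ]
    },
    config={"configurable": {"thread_id": "12345"}},
)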
Structured output
Deep Agents support structured output. You can set the desired structured output schema by passing it as the response_format argument to the create_deep_agent() call.
When the model generates structured data, it is captured, validated, and returned under the structured_response key of the deep agent's state.
import os
from typing import Literal
from pydantic import BaseModel, Field
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
def internet_search(
query: str,
max_results: int = 5,
topic: Literal["general", "news", "finance"] = "general",
include_raw_content: bool = False,
):
"""运行网络搜索"""
return tavily_client.search(
query,
max_results=max_results,
include_raw_content=include_raw_content,
topic=topic,
)
class WeatherReport(BaseModel):
"""带有当前状况和预报的结构化天气报告。"""
location: str = Field(description="此天气报告的位置")
temperature: float = Field(description="当前温度(摄氏度)")
condition: str = Field(description="当前天气状况(例如,晴朗、多云、下雨)")
humidity: int = Field(description="湿度百分比")
wind_speed: float = Field(description="风速(公里/小时)")
forecast: str = Field(description="未来 24 小时的简要预报")
agent = create_deep_agent(
model="google_genai:gemini-3.1-pro-preview",
response_format=WeatherReport,
tools=[internet_search]
)
result = agent.invoke({
"messages": [{
"role": "user",
"content": "What's the weather like in San Francisco?"
}]
})
print(result["structured_response"])
# location='San Francisco, California' temperature=18.3 condition='Sunny' humidity=48 wind_speed=7.6 forecast='Pleasant sunny conditions expected to continue with temperatures around 64°F (18°C) during the day, dropping to around 52°F (11°C) at night. Clear skies with minimal precipitation expected.'