createDeepAgent accepts the following configuration options:
- Model
- Tools
- System prompt
- Middleware
- Subagents
- Backends (virtual filesystem)
- Human-in-the-loop
- Skills
- Memory
const agent = createDeepAgent({
name?: string,
model?: BaseLanguageModel | string,
tools?: TTools | StructuredTool[],
systemPrompt?: string | SystemMessage,
});
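A minimal sketch of how these options combine. The interface and helper below are hypothetical, shown only to illustrate how overrides layer on top of the documented default model (claude-sonnet-4-6); they are not part of the library:

```typescript
// Illustrative options bag mirroring the createDeepAgent signature above.
// withDefaults is a hypothetical helper, not a deepagents export.
interface DeepAgentOptions {
  name?: string;
  model?: string;
  systemPrompt?: string;
}

function withDefaults(overrides: Partial<DeepAgentOptions> = {}): DeepAgentOptions {
  // The documented default model is claude-sonnet-4-6.
  return { model: "claude-sonnet-4-6", ...overrides };
}

const opts = withDefaults({ name: "researcher", systemPrompt: "You are a researcher." });
console.log(opts.model); // "claude-sonnet-4-6"
console.log(opts.name); // "researcher"
```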
Connection resilience
LangChain chat models automatically retry failed API requests with exponential backoff. By default, a model retries network errors, rate limits (429), and server errors (5xx) up to 6 times. Client errors such as 401 (Unauthorized) or 404 are not retried. You can tune this behavior for your environment by adjusting the maxRetries parameter when creating the model:
import { ChatAnthropic } from "@langchain/anthropic";
import { createDeepAgent } from "deepagents";
const agent = createDeepAgent({
model: new ChatAnthropic({
model: "claude-sonnet-4-6",
maxRetries: 10, // increase for unreliable networks (default: 6)
timeout: 120_000, // longer timeout for slow connections
}),
});
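The exponential-backoff schedule described above can be sketched as follows. The base delay and cap here are illustrative assumptions; the actual client's timing may differ:

```typescript
// Compute an illustrative exponential-backoff schedule: the delay doubles on
// each attempt and is capped at capMs. Not the LangChain client's exact timing.
function backoffDelays(maxRetries: number, baseMs = 1000, capMs = 60_000): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(Math.min(baseMs * 2 ** attempt, capMs));
  }
  return delays;
}

console.log(backoffDelays(6)); // [ 1000, 2000, 4000, 8000, 16000, 32000 ]
```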
For long-running agent tasks on unreliable networks, consider increasing maxRetries to 10–15 and pairing it with a checkpointer so progress is preserved across failures.
Model
By default, deepagents uses claude-sonnet-4-6. You can customize the model by passing any supported model identifier string or a LangChain model object.
Use the provider:model format (e.g. openai:gpt-5) to switch between models quickly.
- OpenAI
- Anthropic
- Azure
- Google Gemini
- Bedrock Converse
👉 Read the OpenAI chat model integration docs
npm install @langchain/openai deepagents
import { createDeepAgent } from "deepagents";
process.env.OPENAI_API_KEY = "your-api-key";
const agent = createDeepAgent({ model: "gpt-5.2" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly
👉 Read the Anthropic chat model integration docs
npm install @langchain/anthropic deepagents
import { createDeepAgent } from "deepagents";
process.env.ANTHROPIC_API_KEY = "your-api-key";
const agent = createDeepAgent({ model: "claude-sonnet-4-6" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly
👉 Read the Azure chat model integration docs
npm install @langchain/openai deepagents
import { createDeepAgent } from "deepagents";
process.env.AZURE_OPENAI_API_KEY = "your-api-key";
process.env.AZURE_OPENAI_ENDPOINT = "your-endpoint";
process.env.OPENAI_API_VERSION = "your-api-version";
const agent = createDeepAgent({ model: "azure_openai:gpt-5.2" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly
👉 Read the Google GenAI chat model integration docs
npm install @langchain/google-genai deepagents
import { createDeepAgent } from "deepagents";
process.env.GOOGLE_API_KEY = "your-api-key";
const agent = createDeepAgent({ model: "google-genai:gemini-2.5-flash-lite" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly
👉 Read the AWS Bedrock chat model integration docs
npm install @langchain/aws deepagents
import { createDeepAgent } from "deepagents";
// Follow the steps here to configure your credentials:
// https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html
const agent = createDeepAgent({ model: "bedrock:anthropic.claude-3-5-sonnet-20240620-v1:0" });
// this calls initChatModel for the specified model with default parameters
// to use specific model parameters, use initChatModel directly
Tools
In addition to the built-in tools for planning, file management, and subagent spawning, you can provide custom tools:
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";
const internetSearch = tool(
async ({
query,
maxResults = 5,
topic = "general",
includeRawContent = false,
}: {
query: string;
maxResults?: number;
topic?: "general" | "news" | "finance";
includeRawContent?: boolean;
}) => {
const tavilySearch = new TavilySearch({
maxResults,
tavilyApiKey: process.env.TAVILY_API_KEY,
includeRawContent,
topic,
});
return await tavilySearch._call({ query });
},
{
name: "internet_search",
description: "Run a web search",
schema: z.object({
query: z.string().describe("The search query"),
maxResults: z.number().optional().default(5),
topic: z
.enum(["general", "news", "finance"])
.optional()
.default("general"),
includeRawContent: z.boolean().optional().default(false),
}),
},
);
const agent = createDeepAgent({
tools: [internetSearch],
});
System prompt
Deep agents ship with a built-in system prompt. The default system prompt includes detailed instructions for using the built-in planning tools, filesystem tools, and subagents. When middleware adds special tools (such as filesystem tools), it appends them to the system prompt. Each deep agent should also include a custom system prompt for its specific use case:
import { createDeepAgent } from "deepagents";
const researchInstructions = `You are an expert researcher. ` +
`Your job is to conduct thorough research, and then ` +
`write a polished report.`;
const agent = createDeepAgent({
systemPrompt: researchInstructions,
});
Middleware
By default, deep agents have access to the following middleware:
- TodoListMiddleware: tracks and manages a todo list for organizing agent tasks and work
- FilesystemMiddleware: handles filesystem operations such as reading, writing, and navigating directories
- SubAgentMiddleware: spawns and coordinates subagents to delegate tasks to specialized agents
- SummarizationMiddleware: compresses the message history when the conversation grows long, to stay within context limits
- AnthropicPromptCachingMiddleware: automatically reduces redundant token processing when using Anthropic models
- PatchToolCallsMiddleware: automatically repairs the message history when a tool call is interrupted or cancelled before its result arrives
- MemoryMiddleware: persists and retrieves conversation context across sessions when the memory parameter is provided
- SkillsMiddleware: enables custom skills when the skills parameter is provided
- HumanInTheLoopMiddleware: pauses at specified points for human approval or input when the interruptOn parameter is provided
import { tool, createMiddleware } from "langchain";
import { createDeepAgent } from "deepagents";
import * as z from "zod";
const getWeather = tool(
({ city }: { city: string }) => {
return `The weather in ${city} is sunny.`;
},
{
name: "get_weather",
description: "Get the weather in a city.",
schema: z.object({
city: z.string(),
}),
}
);
let callCount = 0;
const logToolCallsMiddleware = createMiddleware({
name: "LogToolCallsMiddleware",
wrapToolCall: async (request, handler) => {
// Intercept and log each tool call - demonstrates a cross-cutting concern
callCount += 1;
const toolName = request.toolCall.name;
console.log(`[Middleware] Tool call #${callCount}: ${toolName}`);
console.log(
`[Middleware] Arguments: ${JSON.stringify(request.toolCall.args)}`
);
// Execute the tool call
const result = await handler(request);
// Log the result
console.log(`[Middleware] Tool call #${callCount} completed`);
return result;
},
});
const agent = await createDeepAgent({
model: "claude-sonnet-4-20250514",
tools: [getWeather] as any,
middleware: [logToolCallsMiddleware] as any,
});
Do not mutate attributes after initialization. If you need to track a value across hook invocations (for example, a counter or accumulated data), use graph state.
Graph state is scoped to a thread by design, so it is safe to update under concurrency.
Do this:
class CustomMiddleware(AgentMiddleware):
    def __init__(self):
        pass

    def before_agent(self, state, runtime):
        return {"x": state.get("x", 0) + 1}  # Update graph state instead
Don't do this:
class CustomMiddleware(AgentMiddleware):
    def __init__(self):
        self.x = 1

    def before_agent(self, state, runtime):
        self.x += 1  # Mutation causes race conditions
In-place mutation, such as modifying self.x in before_agent or other hooks, can cause subtle bugs and race conditions, because many operations run concurrently (subagents, parallel tools, and parallel invocations on different threads). For full details on extending state with custom attributes, see Custom middleware - Custom state schema.
If you must use mutation in custom middleware, consider what happens when subagents, parallel tools, or concurrent agent invocations run at the same time.
Subagents
To isolate detailed work and avoid context bloat, use subagents:
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent

tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def internet_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Run a web search"""
    return tavily_client.search(
        query,
        max_results=max_results,
        include_raw_content=include_raw_content,
        topic=topic,
    )

research_subagent = {
    "name": "research-agent",
    "description": "Used to research more in depth questions",
    "system_prompt": "You are a great researcher",
    "tools": [internet_search],
    "model": "openai:gpt-5.2",  # Optional override, defaults to main agent model
}
subagents = [research_subagent]

agent = create_deep_agent(
    model="claude-sonnet-4-6",
    subagents=subagents
)
Backends
Deep agent tools can use a virtual filesystem to store, access, and edit files. By default, deep agents use StateBackend.
If you are using skills or memory, you must add the expected skill or memory files to the backend before creating the agent.
- StateBackend
- FilesystemBackend
- LocalShellBackend
- StoreBackend
- CompositeBackend
An ephemeral filesystem backend stored in langgraph state. This filesystem persists only for a single thread.
# By default we provide a StateBackend
agent = create_deep_agent()

# Under the hood, it looks like
from deepagents.backends import StateBackend

agent = create_deep_agent(
    backend=(lambda rt: StateBackend(rt))  # Note that the tools access state through runtime.state
)
The local machine's filesystem.
This backend grants the agent direct filesystem read/write access.
Use with caution, and only in appropriate environments.
For more information, see FilesystemBackend.
from deepagents.backends import FilesystemBackend

agent = create_deep_agent(
    backend=FilesystemBackend(root_dir=".", virtual_mode=True)
)
A filesystem with shell execution directly on the host. Provides the filesystem tools plus an execute tool for running commands. This backend grants the agent direct filesystem read/write access as well as unrestricted shell execution on the host.
Use with extreme caution, and only in appropriate environments.
For more information, see LocalShellBackend.
from deepagents.backends import LocalShellBackend

agent = create_deep_agent(
    backend=LocalShellBackend(root_dir=".", env={"PATH": "/usr/bin:/bin"})
)
A filesystem providing long-term storage that persists across threads.
from langgraph.store.memory import InMemoryStore
from deepagents.backends import StoreBackend

agent = create_deep_agent(
    backend=(lambda rt: StoreBackend(rt)),
    store=InMemoryStore()  # Good for local dev; omit for LangSmith Deployment
)
When deploying to LangSmith Deployment, omit the store parameter. The platform automatically provisions a store for your agent.
A flexible backend where you can route different paths in the filesystem to different backends.
from deepagents import create_deep_agent
from deepagents.backends import CompositeBackend, StateBackend, StoreBackend
from langgraph.store.memory import InMemoryStore

composite_backend = lambda rt: CompositeBackend(
    default=StateBackend(rt),
    routes={
        "/memories/": StoreBackend(rt),
    },
)

agent = create_deep_agent(
    backend=composite_backend,
    store=InMemoryStore()  # Store passed to create_deep_agent, not backend
)
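The routing idea behind CompositeBackend can be illustrated with a small sketch. This is a conceptual model, not the library's implementation; it assumes the longest matching path prefix wins and unmatched paths fall through to the default backend:

```typescript
// Conceptual sketch of prefix-based backend routing (assumption: longest
// matching prefix wins; unmatched paths use the default backend).
function resolveBackend(
  path: string,
  routes: Record<string, string>,
  fallback = "default",
): string {
  const match = Object.keys(routes)
    .filter((prefix) => path.startsWith(prefix))
    .sort((a, b) => b.length - a.length)[0]; // longest prefix first
  return match ? routes[match] : fallback;
}

console.log(resolveBackend("/memories/notes.md", { "/memories/": "store" })); // "store"
console.log(resolveBackend("/tmp/scratch.txt", { "/memories/": "store" })); // "default"
```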
Sandboxes
Sandboxes are specialized backends that run agent code in an isolated environment, with their own filesystem and an execute tool for shell commands.
Use a sandbox backend when you want a deep agent to write files, install dependencies, and run commands without changing anything on your local machine.
Configure a sandbox by passing a sandbox backend as backend:
import { createDeepAgent } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";
import { DenoSandbox } from "@langchain/deno";
// Create and initialize the sandbox
const sandbox = await DenoSandbox.create({
memoryMb: 1024,
lifetime: "10m",
});
try {
const agent = createDeepAgent({
model: new ChatAnthropic({ model: "claude-opus-4-6" }),
systemPrompt: "You are a JavaScript coding assistant with sandbox access.",
backend: sandbox,
});
const result = await agent.invoke({
messages: [
{
role: "user",
content:
"Create a simple HTTP server using Deno.serve and test it with curl",
},
],
});
} finally {
await sandbox.close();
}
Human-in-the-loop
Some tool operations can be sensitive and require human approval before execution. You can configure approval per tool:
from langchain.tools import tool
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver

@tool
def delete_file(path: str) -> str:
    """Delete a file from the filesystem."""
    return f"Deleted {path}"

@tool
def read_file(path: str) -> str:
    """Read a file from the filesystem."""
    return f"Contents of {path}"

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Sent email to {to}"

# Checkpointer is REQUIRED for human-in-the-loop
checkpointer = MemorySaver()

agent = create_deep_agent(
    model="claude-sonnet-4-6",
    tools=[delete_file, read_file, send_email],
    interrupt_on={
        "delete_file": True,  # Default: approve, edit, reject
        "read_file": False,  # No interrupts needed
        "send_email": {"allowed_decisions": ["approve", "reject"]},  # No editing
    },
    checkpointer=checkpointer  # Required!
)
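The per-tool interrupt policy above can be modeled as plain data. The following is a hypothetical sketch of the gating decision only (the real middleware enforces this inside the agent loop and also handles resuming):

```typescript
// A tool's policy is either a boolean or an options object restricting the
// allowed decisions, mirroring the interrupt_on configuration shown above.
type Policy = boolean | { allowed_decisions: string[] };

// Returns true when a tool call should pause for human review.
// Tools without a policy default to no interrupt (an assumption in this sketch).
function needsApproval(tool: string, policies: Record<string, Policy>): boolean {
  const p = policies[tool];
  return p === true || (typeof p === "object" && p !== null);
}

const policies: Record<string, Policy> = {
  delete_file: true,
  read_file: false,
  send_email: { allowed_decisions: ["approve", "reject"] },
};

console.log(needsApproval("delete_file", policies)); // true
console.log(needsApproval("read_file", policies)); // false
console.log(needsApproval("send_email", policies)); // true
```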
Skills
You can use skills to give your deep agent new capabilities and expertise. While tools tend to cover lower-level capabilities such as native filesystem operations or planning, skills can include detailed instructions on how to accomplish a task, reference information, and other assets (such as templates). These files are loaded only when the agent determines a skill is useful for the current prompt. This progressive disclosure reduces the number of tokens and the amount of context the agent has to consider at startup. For example skills, see the Deep Agent example skills. To add skills to your deep agent, pass them as an argument to createDeepAgent:
- StateBackend
- StoreBackend
- FilesystemBackend
import { createDeepAgent, type FileData } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
const checkpointer = new MemorySaver();
function createFileData(content: string): FileData {
const now = new Date().toISOString();
return {
content: content.split("\n"),
created_at: now,
modified_at: now,
};
}
const skillsFiles: Record<string, FileData> = {};
const skillUrl =
"https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/examples/skills/langgraph-docs/SKILL.md";
const response = await fetch(skillUrl);
const skillContent = await response.text();
skillsFiles["/skills/langgraph-docs/SKILL.md"] = createFileData(skillContent);
const agent = await createDeepAgent({
checkpointer,
// IMPORTANT: deepagents skill source paths are virtual (POSIX) paths relative to the backend root.
skills: ["/skills/"],
});
const config = {
configurable: {
thread_id: `thread-${Date.now()}`,
},
};
const result = await agent.invoke(
{
messages: [
{
role: "user",
content: "what is langgraph? Use the langgraph-docs skill if available.",
},
],
files: skillsFiles,
},
config,
);
import { createDeepAgent, StoreBackend, type FileData } from "deepagents";
import {
InMemoryStore,
MemorySaver,
type BaseStore,
} from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const store = new InMemoryStore();
function createFileData(content: string): FileData {
const now = new Date().toISOString();
return {
content: content.split("\n"),
created_at: now,
modified_at: now,
};
}
const skillUrl =
"https://raw.githubusercontent.com/langchain-ai/deepagentsjs/refs/heads/main/examples/skills/langgraph-docs/SKILL.md";
const response = await fetch(skillUrl);
const skillContent = await response.text();
const fileData = createFileData(skillContent);
await store.put(["filesystem"], "/skills/langgraph-docs/SKILL.md", fileData);
const backendFactory = (config: { state: unknown; store?: BaseStore }) => {
return new StoreBackend({
state: config.state,
store: config.store ?? store,
});
};
const agent = await createDeepAgent({
backend: backendFactory,
store: store,
checkpointer,
// IMPORTANT: deepagents skill source paths are virtual (POSIX) paths relative to the backend root.
skills: ["/skills/"],
});
const config = {
recursionLimit: 50,
configurable: {
thread_id: `thread-${Date.now()}`,
},
};
const result = await agent.invoke(
{
messages: [
{
role: "user",
content: "what is langgraph? Use the langgraph-docs skill if available.",
},
],
},
config,
);
import { createDeepAgent, FilesystemBackend } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const backend = new FilesystemBackend({ rootDir: process.cwd() });
const agent = await createDeepAgent({
backend,
skills: ["./examples/skills/"],
interruptOn: {
read_file: true,
write_file: true,
delete_file: true,
},
checkpointer, // Required!
});
const config = {
configurable: {
thread_id: `thread-${Date.now()}`,
},
};
const result = await agent.invoke(
{
messages: [
{
role: "user",
content: "what is langgraph? Use the langgraph-docs skill if available.",
},
],
},
config,
);
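The progressive disclosure described at the start of this section can be sketched conceptually: only lightweight skill stubs (name, description, path) are always in context, and a skill's full SKILL.md body is read only when judged relevant. This is a hypothetical illustration; the library's actual selection is model-driven, not a keyword match:

```typescript
// Hypothetical sketch: the agent always sees skill stubs, and loads a skill's
// full body only when the skill looks relevant to the current prompt.
interface SkillStub {
  name: string;
  description: string;
  path: string;
}

function selectRelevantSkills(stubs: SkillStub[], prompt: string): SkillStub[] {
  const lower = prompt.toLowerCase();
  // Naive relevance check by name match - a stand-in for the model's judgment.
  return stubs.filter((s) => lower.includes(s.name.toLowerCase()));
}

const stubs: SkillStub[] = [
  {
    name: "langgraph-docs",
    description: "Reference documentation for LangGraph",
    path: "/skills/langgraph-docs/SKILL.md",
  },
];

console.log(selectRelevantSkills(stubs, "Use the langgraph-docs skill").length); // 1
console.log(selectRelevantSkills(stubs, "What's the weather?").length); // 0
```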
Memory
Use AGENTS.md files to give your deep agent additional context.
You can pass one or more file paths to the memory parameter when creating the deep agent:
- StateBackend
- StoreBackend
- Filesystem
import { createDeepAgent, type FileData } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
const AGENTS_MD_URL =
"https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/examples/text-to-sql-agent/AGENTS.md";
async function fetchText(url: string): Promise<string> {
const res = await fetch(url);
if (!res.ok) {
throw new Error(`Failed to fetch ${url}: ${res.status} ${res.statusText}`);
}
return await res.text();
}
const agentsMd = await fetchText(AGENTS_MD_URL);
const checkpointer = new MemorySaver();
function createFileData(content: string): FileData {
const now = new Date().toISOString();
return {
content: content.split("\n"),
created_at: now,
modified_at: now,
};
}
const agent = await createDeepAgent({
memory: ["/AGENTS.md"],
checkpointer: checkpointer,
});
const result = await agent.invoke(
{
messages: [
{
role: "user",
content: "Please tell me what's in your memory files.",
},
],
// Seed the default StateBackend's in-state filesystem (virtual paths must start with "/").
files: { "/AGENTS.md": createFileData(agentsMd) },
},
{ configurable: { thread_id: "12345" } }
);
import { createDeepAgent, StoreBackend, type FileData } from "deepagents";
import {
InMemoryStore,
MemorySaver,
type BaseStore,
} from "@langchain/langgraph";
const AGENTS_MD_URL =
"https://raw.githubusercontent.com/langchain-ai/deepagents/refs/heads/main/examples/text-to-sql-agent/AGENTS.md";
async function fetchText(url: string): Promise<string> {
const res = await fetch(url);
if (!res.ok) {
throw new Error(`Failed to fetch ${url}: ${res.status} ${res.statusText}`);
}
return await res.text();
}
const agentsMd = await fetchText(AGENTS_MD_URL);
function createFileData(content: string): FileData {
const now = new Date().toISOString();
return {
content: content.split("\n"),
created_at: now,
modified_at: now,
};
}
const store = new InMemoryStore();
const fileData = createFileData(agentsMd);
await store.put(["filesystem"], "/AGENTS.md", fileData);
const checkpointer = new MemorySaver();
const backendFactory = (config: { state: unknown; store?: BaseStore }) => {
return new StoreBackend({
state: config.state,
store: config.store ?? store,
});
};
const agent = await createDeepAgent({
backend: backendFactory,
store: store,
checkpointer: checkpointer,
memory: ["/AGENTS.md"],
});
const result = await agent.invoke(
{
messages: [
{
role: "user",
content: "Please tell me what's in your memory files.",
},
],
},
{ configurable: { thread_id: "12345" } }
);
import { createDeepAgent, FilesystemBackend } from "deepagents";
import { MemorySaver } from "@langchain/langgraph";
// Checkpointer is REQUIRED for human-in-the-loop
const checkpointer = new MemorySaver();
const agent = await createDeepAgent({
backend: (config) =>
new FilesystemBackend({ rootDir: "/Users/user/{project}" }),
memory: ["./AGENTS.md", "./.deepagents/AGENTS.md"],
interruptOn: {
read_file: true,
write_file: true,
delete_file: true,
},
checkpointer, // Required!
});
Structured output
Deep agents support structured output. You can set it by passing your desired structured output schema as the responseFormat parameter to the createDeepAgent() call.
When the model generates structured data, it is captured, validated, and returned under the structuredResponse key in the agent state.
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";
const internetSearch = tool(
async ({
query,
maxResults = 5,
topic = "general",
includeRawContent = false,
}: {
query: string;
maxResults?: number;
topic?: "general" | "news" | "finance";
includeRawContent?: boolean;
}) => {
const tavilySearch = new TavilySearch({
maxResults,
tavilyApiKey: process.env.TAVILY_API_KEY,
includeRawContent,
topic,
});
return await tavilySearch._call({ query });
},
{
name: "internet_search",
description: "Run a web search",
schema: z.object({
query: z.string().describe("The search query"),
maxResults: z.number().optional().default(5),
topic: z
.enum(["general", "news", "finance"])
.optional()
.default("general"),
includeRawContent: z.boolean().optional().default(false),
}),
},
);
const weatherReportSchema = z.object({
location: z.string().describe("The location for this weather report"),
temperature: z.number().describe("Current temperature in Celsius"),
condition: z
.string()
.describe("Current weather condition (e.g., sunny, cloudy, rainy)"),
humidity: z.number().describe("Humidity percentage"),
windSpeed: z.number().describe("Wind speed in km/h"),
forecast: z.string().describe("Brief forecast for the next 24 hours"),
});
const agent = await createDeepAgent({
responseFormat: weatherReportSchema,
tools: [internetSearch],
});
const result = await agent.invoke({
messages: [
{
role: "user",
content: "What's the weather like in San Francisco?",
},
],
});
console.log(result.structuredResponse);
// {
// location: 'San Francisco, California',
// temperature: 18.3,
// condition: 'Sunny',
// humidity: 48,
// windSpeed: 7.6,
// forecast: 'Clear skies with temperatures remaining mild. High of 18°C (64°F) during the day, dropping to around 11°C (52°F) at night.'
// }
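Because structuredResponse is validated against the schema, downstream code can rely on its shape. A plain hand-rolled type guard, shown only to illustrate consuming such a response (in practice the Zod schema already guarantees the shape; the interface below covers a subset of the fields):

```typescript
// Subset of the weather report shape used in the example above.
interface WeatherReport {
  location: string;
  temperature: number;
  condition: string;
}

// Minimal runtime guard for the subset - illustrative only.
function isWeatherReport(v: unknown): v is WeatherReport {
  const o = v as Record<string, unknown>;
  return (
    typeof v === "object" &&
    v !== null &&
    typeof o.location === "string" &&
    typeof o.temperature === "number" &&
    typeof o.condition === "string"
  );
}

console.log(isWeatherReport({ location: "SF", temperature: 18.3, condition: "Sunny" })); // true
console.log(isWeatherReport({ location: "SF" })); // false
```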