Deep agents are built on LangGraph's streaming infrastructure and provide first-class support for subagent streams. When a deep agent delegates work to a subagent, you can stream each subagent's updates independently, tracking progress, LLM tokens, and tool calls in real time.

Enable subgraph streaming

Deep agents use LangGraph's subgraph streaming to surface events from subagent execution. To receive subagent events, enable subgraphs: true when streaming:
import { createDeepAgent } from "deepagents";

const agent = createDeepAgent({
  systemPrompt: "You are a helpful research assistant",
  subagents: [
    {
      name: "researcher",
      description: "Researches a topic in depth",
      systemPrompt: "You are a thorough researcher.",
    },
  ],
});

for await (const [namespace, chunk] of await agent.stream(
  { messages: [{ role: "user", content: "Research quantum computing advances" }] },
  {
    streamMode: "updates",
    subgraphs: true,
  }
)) {
  if (namespace.length > 0) {
    // Subagent event — namespace identifies the source
    console.log(`[subagent: ${namespace.join("|")}]`);
  } else {
    // Main agent event
    console.log("[main agent]");
  }
  console.log(chunk);
}

Namespaces

When subgraphs is enabled, every streamed event includes a namespace that identifies the agent that produced it. A namespace is a path of node names and task IDs representing the agent hierarchy.
Namespace                                     Source
[] (empty)                                    The main agent
["tools:abc123"]                              A subagent spawned by the main agent's task tool call abc123
["tools:abc123", "model_request:def456"]      The model request node inside that subagent
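Each namespace segment has the form node:taskId, so a small helper can split a namespace into structured parts. A minimal sketch (parseNamespace is a hypothetical name, not part of the deepagents API):

```typescript
// Parse namespace segments of the form "<node>:<taskId>" into structured parts.
// The main agent's namespace is empty, so this returns an empty array for it.
function parseNamespace(namespace: string[]): { node: string; taskId?: string }[] {
  return namespace.map((segment) => {
    const [node, taskId] = segment.split(":");
    return { node, taskId };
  });
}

// A model_request node inside a subagent spawned by tool call abc123:
const parsed = parseNamespace(["tools:abc123", "model_request:def456"]);
// parsed[0] → { node: "tools", taskId: "abc123" }
// parsed[1] → { node: "model_request", taskId: "def456" }
```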
Use the namespace to route events to the right UI component:
for await (const [namespace, chunk] of await agent.stream(
  { messages: [{ role: "user", content: "Plan my vacation" }] },
  { streamMode: "updates", subgraphs: true }
)) {
  // Check if this event came from a subagent
  const isSubagent = namespace.some(
    (segment: string) => segment.startsWith("tools:")
  );

  if (isSubagent) {
    // Extract the tool call ID from the namespace
    const toolCallId = namespace
      .find((s: string) => s.startsWith("tools:"))
      ?.split(":")[1];
    console.log(`Subagent ${toolCallId}:`, chunk);
  } else {
    console.log("Main agent:", chunk);
  }
}
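This routing check can be factored into a small dispatcher that maps tool call IDs to handlers. A sketch under the same namespace conventions (makeRouter and Handler are hypothetical names):

```typescript
type Handler = (chunk: unknown) => void;

// Dispatch stream chunks to per-subagent handlers keyed by tool call ID.
// Events from the main agent, or from an unregistered subagent, fall back
// to the main handler.
function makeRouter(mainHandler: Handler) {
  const handlers = new Map<string, Handler>();
  return {
    register(toolCallId: string, handler: Handler) {
      handlers.set(toolCallId, handler);
    },
    route(namespace: string[], chunk: unknown) {
      const segment = namespace.find((s) => s.startsWith("tools:"));
      const toolCallId = segment?.split(":")[1];
      const handler = toolCallId ? handlers.get(toolCallId) : undefined;
      (handler ?? mainHandler)(chunk);
    },
  };
}

// Usage: send main-agent and subagent events to different sinks.
const router = makeRouter((chunk) => console.log("main:", chunk));
router.register("abc123", (chunk) => console.log("subagent abc123:", chunk));
router.route([], { step: 1 });               // handled by mainHandler
router.route(["tools:abc123"], { step: 2 }); // handled by the abc123 handler
```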

Subagent progress

Use streamMode: "updates" to track subagent progress as each step completes. This is useful for showing which subagents are active and what work they have finished.
import { createDeepAgent } from "deepagents";

const agent = createDeepAgent({
  systemPrompt:
    "You are a project coordinator. Always delegate research tasks " +
    "to your researcher subagent using the task tool. Keep your final response to one sentence.",
  subagents: [
    {
      name: "researcher",
      description: "Researches topics thoroughly",
      systemPrompt:
        "You are a thorough researcher. Research the given topic " +
        "and provide a concise summary in 2-3 sentences.",
    },
  ],
});

for await (const [namespace, chunk] of await agent.stream(
  {
    messages: [
      { role: "user", content: "Write a short summary about AI safety" },
    ],
  },
  { streamMode: "updates", subgraphs: true },
)) {
  // Main agent updates (empty namespace)
  if (namespace.length === 0) {
    for (const [nodeName, data] of Object.entries(chunk)) {
      if (nodeName === "tools") {
        // Subagent results returned to main agent
        for (const msg of (data as any).messages ?? []) {
          if (msg.type === "tool") {
            console.log(`\nSubagent complete: ${msg.name}`);
            console.log(`  Result: ${String(msg.content).slice(0, 200)}...`);
          }
        }
      } else {
        console.log(`[main agent] step: ${nodeName}`);
      }
    }
  }
  // Subagent updates (non-empty namespace)
  else {
    for (const [nodeName] of Object.entries(chunk)) {
      console.log(`  [${namespace[0]}] step: ${nodeName}`);
    }
  }
}
Output
[main agent] step: model_request
  [tools:call_abc123] step: model_request
  [tools:call_abc123] step: tools
  [tools:call_abc123] step: model_request

Subagent complete: task
  Result: ## AI Safety Report...
[main agent] step: model_request
  [tools:call_def456] step: model_request
  [tools:call_def456] step: model_request

Subagent complete: task
  Result: # Comprehensive Report on AI Safety...
[main agent] step: model_request

LLM tokens

Use streamMode: "messages" to stream individual tokens from both the main agent and subagents. Each message event includes metadata identifying the source agent.
let currentSource = "";

for await (const [namespace, chunk] of await agent.stream(
  {
    messages: [
      {
        role: "user",
        content: "Research quantum computing advances",
      },
    ],
  },
  { streamMode: "messages", subgraphs: true },
)) {
  const [message] = chunk;

  // Check if this event came from a subagent (namespace contains "tools:")
  const isSubagent = namespace.some((s: string) => s.startsWith("tools:"));

  if (isSubagent) {
    // Token from a subagent
    const subagentNs = namespace.find((s: string) => s.startsWith("tools:"))!;
    if (subagentNs !== currentSource) {
      process.stdout.write(`\n\n--- [subagent: ${subagentNs}] ---\n`);
      currentSource = subagentNs;
    }
    if (message.text) {
      process.stdout.write(message.text);
    }
  } else {
    // Token from the main agent
    if ("main" !== currentSource) {
      process.stdout.write(`\n\n--- [main agent] ---\n`);
      currentSource = "main";
    }
    if (message.text) {
      process.stdout.write(message.text);
    }
  }
}

process.stdout.write("\n");
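If you need the full text per agent rather than live printing (for example, to update separate UI panes), the same source check can feed a per-source buffer. A minimal sketch, assuming sources are "main" or a "tools:<id>" namespace segment:

```typescript
// Accumulate streamed token text per source agent.
const buffers = new Map<string, string>();

function accumulate(source: string, text: string): void {
  buffers.set(source, (buffers.get(source) ?? "") + text);
}

// Inside the stream loop you would call, for example:
accumulate("main", "Planning the ");
accumulate("tools:call_abc123", "Quantum computing has ");
accumulate("main", "research...");
// buffers.get("main") → "Planning the research..."
```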

Tool calls

When subagents use tools, you can stream tool-call events to show what each subagent is doing. Tool call chunks appear in the messages stream mode.
import { AIMessageChunk, ToolMessage } from "langchain";

for await (const [namespace, chunk] of await agent.stream(
  {
    messages: [
      {
        role: "user",
        content: "Research recent quantum computing advances",
      },
    ],
  },
  { streamMode: "messages", subgraphs: true },
)) {
  const [message] = chunk;

  // Identify source: "main" or the subagent namespace segment
  const isSubagent = namespace.some((s: string) => s.startsWith("tools:"));
  const source = isSubagent
    ? namespace.find((s: string) => s.startsWith("tools:"))!
    : "main";

  // Tool call chunks (streaming tool invocations)
  if (AIMessageChunk.isInstance(message) && message.tool_call_chunks?.length) {
    for (const tc of message.tool_call_chunks) {
      if (tc.name) {
        console.log(`\n[${source}] Tool call: ${tc.name}`);
      }
      // Args stream in chunks — write them incrementally
      if (tc.args) {
        process.stdout.write(tc.args);
      }
    }
  }

  // Tool results
  if (ToolMessage.isInstance(message)) {
    console.log(
      `\n[${source}] Tool result [${message.name}]: ${message.text?.slice(0, 150)}`,
    );
  }

  // Regular AI content (skip tool call messages)
  if (
    AIMessageChunk.isInstance(message) &&
    message.text &&
    !message.tool_call_chunks?.length
  ) {
    process.stdout.write(message.text);
  }
}

process.stdout.write("\n");
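Because tool-call arguments arrive as JSON string fragments, you can buffer them per chunk index and parse the assembled string once the call is complete, instead of writing fragments straight to stdout. A sketch, assuming only the index and args fields of each tool_call_chunk:

```typescript
// Buffer streaming tool-call argument fragments by chunk index,
// then parse the assembled JSON once streaming is done.
const argBuffers = new Map<number, string>();

function addChunk(chunk: { index?: number; args?: string }): void {
  const index = chunk.index ?? 0;
  argBuffers.set(index, (argBuffers.get(index) ?? "") + (chunk.args ?? ""));
}

function parseArgs(index: number): unknown {
  return JSON.parse(argBuffers.get(index) ?? "{}");
}

// Fragments as they might arrive over several message chunks:
addChunk({ index: 0, args: '{"topic": "quan' });
addChunk({ index: 0, args: 'tum computing"}' });
// parseArgs(0) → { topic: "quantum computing" }
```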

Custom updates

Use config.writer inside subagent tools to emit custom progress events:
import { createDeepAgent } from "deepagents";
import { tool, type ToolRuntime } from "langchain";
import { z } from "zod";

/**
 * A tool that emits custom progress events via config.writer.
 * The writer sends data to the "custom" stream mode.
 */
const analyzeData = tool(
  async ({ topic }: { topic: string }, config: ToolRuntime) => {
    const writer = config.writer;

    writer?.({ status: "starting", topic, progress: 0 });
    await new Promise((r) => setTimeout(r, 500));

    writer?.({ status: "analyzing", progress: 50 });
    await new Promise((r) => setTimeout(r, 500));

    writer?.({ status: "complete", progress: 100 });
    return `Analysis of "${topic}": Customer sentiment is 85% positive, driven by product quality and support response times.`;
  },
  {
    name: "analyze_data",
    description:
      "Run a data analysis on a given topic. " +
      "This tool performs the actual analysis and emits progress updates. " +
      "You MUST call this tool for any analysis request.",
    schema: z.object({
      topic: z.string().describe("The topic or subject to analyze"),
    }),
  },
);

const agent = createDeepAgent({
  systemPrompt:
    "You are a coordinator. For any analysis request, you MUST delegate " +
    "to the analyst subagent using the task tool. Never try to answer directly. " +
    "After receiving the result, summarize it in one sentence.",
  subagents: [
    {
      name: "analyst",
      description: "Performs data analysis with real-time progress tracking",
      systemPrompt:
        "You are a data analyst. You MUST call the analyze_data tool " +
        "for every analysis request. Do not use any other tools. " +
        "After the analysis completes, report the result.",
      tools: [analyzeData],
    },
  ],
});

for await (const [namespace, chunk] of await agent.stream(
  {
    messages: [
      {
        role: "user",
        content: "Analyze customer satisfaction trends",
      },
    ],
  },
  { streamMode: "custom", subgraphs: true },
)) {
  const isSubagent = namespace.some((s: string) => s.startsWith("tools:"));
  if (isSubagent) {
    const subagentNs = namespace.find((s: string) => s.startsWith("tools:"))!;
    console.log(`[${subagentNs}]`, chunk);
  } else {
    console.log("[main]", chunk);
  }
}
Output
[tools:call_abc123] { status: 'starting', progress: 0 }
[tools:call_abc123] { status: 'analyzing', progress: 50 }
[tools:call_abc123] { status: 'complete', progress: 100 }
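Custom events like these can drive a simple text progress display. A minimal sketch, assuming the { status, progress } payload shape emitted by the analyze_data tool above (renderProgress is a hypothetical helper):

```typescript
// Render a one-line progress bar from a custom progress event.
function renderProgress(
  event: { status: string; progress: number },
  width = 20,
): string {
  const filled = Math.round((event.progress / 100) * width);
  return `[${"#".repeat(filled)}${"-".repeat(width - filled)}] ${event.progress}% ${event.status}`;
}

// renderProgress({ status: "analyzing", progress: 50 })
// → "[##########----------] 50% analyzing"
```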

Stream multiple modes

Combine multiple stream modes for a complete picture of agent execution:
// Skip internal middleware steps — only show meaningful node names
const INTERESTING_NODES = new Set(["model_request", "tools"]);

let lastSource = "";
let midLine = false; // true when we've written tokens without a trailing newline

for await (const [namespace, mode, data] of await agent.stream(
  {
    messages: [
      {
        role: "user",
        content: "Analyze the impact of remote work on team productivity",
      },
    ],
  },
  { streamMode: ["updates", "messages", "custom"], subgraphs: true },
)) {
  const isSubagent = namespace.some((s: string) => s.startsWith("tools:"));
  const source = isSubagent ? "subagent" : "main";

  if (mode === "updates") {
    for (const nodeName of Object.keys(data)) {
      if (!INTERESTING_NODES.has(nodeName)) continue;
      if (midLine) {
        process.stdout.write("\n");
        midLine = false;
      }
      console.log(`[${source}] step: ${nodeName}`);
    }
  } else if (mode === "messages") {
    const [message] = data;
    if (message.text) {
      // Print a header when the source changes
      if (source !== lastSource) {
        if (midLine) {
          process.stdout.write("\n");
          midLine = false;
        }
        process.stdout.write(`\n[${source}] `);
        lastSource = source;
      }
      process.stdout.write(message.text);
      midLine = true;
    }
  } else if (mode === "custom") {
    if (midLine) {
      process.stdout.write("\n");
      midLine = false;
    }
    console.log(`[${source}] custom event:`, data);
  }
}

process.stdout.write("\n");

Common patterns

Track subagent lifecycle

Monitor when subagents start, run, and complete:
// Track each subagent's lifecycle state, keyed by its task tool call ID.
const activeSubagents = new Map<
  string,
  { type?: string; description?: string; status: string }
>();

for await (const [namespace, chunk] of await agent.stream(
  {
    messages: [
      { role: "user", content: "Research the latest AI safety developments" },
    ],
  },
  { streamMode: "updates", subgraphs: true },
)) {
  for (const [nodeName, data] of Object.entries(chunk)) {
    // ─── Phase 1: Detect subagent starting ────────────────────────
    // When the main agent's model_request contains task tool calls,
    // a subagent has been spawned.
    if (namespace.length === 0 && nodeName === "model_request") {
      for (const msg of (data as any).messages ?? []) {
        for (const tc of msg.tool_calls ?? []) {
          if (tc.name === "task") {
            activeSubagents.set(tc.id, {
              type: tc.args?.subagent_type,
              description: tc.args?.description?.slice(0, 80),
              status: "pending",
            });
            console.log(
              `[lifecycle] PENDING  → subagent "${tc.args?.subagent_type}" (${tc.id})`,
            );
          }
        }
      }
    }

    // ─── Phase 2: Detect subagent running ─────────────────────────
    // When we receive events from a tools:UUID namespace, that
    // subagent is actively executing.
    if (namespace.length > 0 && namespace[0].startsWith("tools:")) {
      const pregelId = namespace[0].split(":")[1];
      // Check if any pending subagent needs to be marked running.
      // Note: the pregel task ID differs from the tool_call_id,
      // so we mark any pending subagent as running on first subagent event.
      for (const [id, sub] of activeSubagents) {
        if (sub.status === "pending") {
          sub.status = "running";
          console.log(
            `[lifecycle] RUNNING  → subagent "${sub.type}" (pregel: ${pregelId})`,
          );
          break;
        }
      }
    }

    // ─── Phase 3: Detect subagent completing ──────────────────────
    // When the main agent's tools node returns a tool message,
    // the subagent has completed and returned its result.
    if (namespace.length === 0 && nodeName === "tools") {
      for (const msg of (data as any).messages ?? []) {
        if (msg.type === "tool") {
          const subagent = activeSubagents.get(msg.tool_call_id);
          if (subagent) {
            subagent.status = "complete";
            console.log(
              `[lifecycle] COMPLETE → subagent "${subagent.type}" (${msg.tool_call_id})`,
            );
            console.log(
              `  Result preview: ${String(msg.content).slice(0, 120)}...`,
            );
          }
        }
      }
    }
  }
}

// Print final state
console.log("\n--- Final subagent states ---");
for (const [id, sub] of activeSubagents) {
  console.log(`  ${sub.type}: ${sub.status}`);
}
