The useStream React hook provides seamless integration with LangGraph streaming. It handles all the complexity of streaming, state management, and branching logic, so you can focus on building great generative UI experiences. Key features:
  • Message streaming — assembles streamed message chunks into complete messages
  • Automatic state management — for messages, interrupts, loading states, and errors
  • Conversation branching — create alternate conversation paths from any point in the chat history
  • UI-agnostic design — bring your own components and styling

Installation

Install the LangGraph SDK to use the useStream hook in your React application:
npm install @langchain/langgraph-sdk

Basic usage

The useStream hook connects to any LangGraph graph, whether it runs behind your own endpoint or is deployed with LangSmith deployments.
import { useStream } from "@langchain/langgraph-sdk/react";

function Chat() {
  const stream = useStream({
    assistantId: "agent",
    // Local development
    apiUrl: "http://localhost:2024",
    // Production deployment (LangSmith hosted)
    // apiUrl: "https://your-deployment.us.langgraph.app"
  });

  const handleSubmit = (message: string) => {
    stream.submit({
      messages: [
        { content: message, type: "human" }
      ],
    });
  };

  return (
    <div>
      {stream.messages.map((message, idx) => (
        <div key={message.id ?? idx}>
          {message.type}: {message.content}
        </div>
      ))}

      {stream.isLoading && <div>Loading...</div>}
      {stream.error && <div>Error: {stream.error.message}</div>}
    </div>
  );
}
Learn how to deploy your agent to LangSmith for production-grade hosting with built-in observability, authentication, and scaling.

Options

assistantId (string, required)
The ID of the agent to connect to. When using a LangSmith deployment, this must match the agent ID shown in your deployment dashboard. For custom API deployments or local development, it can be any string your server uses to identify the agent.

apiUrl (string)
URL of the Agent Server. Defaults to http://localhost:2024 for local development.

apiKey (string)
API key used for authentication. Required when connecting to an agent deployed on LangSmith.

threadId (string)
Connect to an existing thread instead of creating a new one. Use this to resume a conversation.

onThreadId ((id: string) => void)
Callback invoked when a new thread is created. Use it to persist the thread ID for later use.

reconnectOnMount (boolean | (() => Storage))
Automatically resume an in-progress run when the component mounts. Set to true to use session storage, or provide a custom storage factory.

onCreated ((run: Run) => void)
Callback invoked when a new run is created. Useful for persisting run metadata for resumption.

onError ((error: Error) => void)
Callback invoked when an error occurs during streaming.

onFinish ((state: StateType, run?: Run) => void)
Callback invoked when streaming completes successfully with the final state.

onCustomEvent ((data: unknown, context: { mutate }) => void)
Handles custom events emitted from your agent with a writer. See Custom streaming events.

onUpdateEvent ((data: unknown, context: { mutate }) => void)
Handles state update events after each graph step.

onMetadataEvent ((metadata: { run_id, thread_id }) => void)
Handles metadata events carrying run and thread information.

messagesKey (string, default: "messages")
The key in the graph state that holds the message array.

throttle (boolean, default: true)
Batches state updates for better rendering performance. Disable for immediate updates.

initialValues (StateType | null)
Initial state values to show while the first stream loads. Useful for displaying cached thread data immediately.

Returned values

messages (Message[])
All messages in the current thread, including both human and AI messages.

values (StateType)
The current graph state values. The type is inferred from the agent or graph type parameter.

isLoading (boolean)
Whether a stream is currently in progress. Use this to show a loading indicator.

error (Error | null)
Any error that occurred during streaming; null when there is none.

interrupt (Interrupt | undefined)
The current interrupt awaiting user input, such as a human-in-the-loop approval request.

toolCalls (ToolCallWithResult[])
All tool calls across all messages, with their results and status (pending, completed, or error).

submit ((input, options?) => Promise<void>)
Submit new input to the agent. Pass null as the input when resuming from an interrupt with a command. Options include checkpoint for branching, optimisticValues for optimistic updates, and threadId for optimistic thread creation.

stop (() => void)
Immediately stop the current stream.

joinStream ((runId: string) => void)
Resume an existing stream by run ID. Use together with onCreated for manual stream resumption.

setBranch ((branch: string) => void)
Switch to a different branch in the conversation history.

getToolCalls ((message) => ToolCall[])
Get all tool calls for a given AI message.

getMessagesMetadata ((message) => MessageMetadata)
Get metadata for a message, including streaming information (such as langgraph_node, identifying the source node) and firstSeenState for branching.

experimental_branchTree (BranchTree)
A tree representation of the thread, for advanced branch control in graphs that aren't message-based.

Thread management

Track conversations with built-in thread management. You can access the current thread ID and get notified when new threads are created:
import { useState } from "react";
import { useStream } from "@langchain/langgraph-sdk/react";

function Chat() {
  const [threadId, setThreadId] = useState<string | null>(null);

  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
    threadId: threadId,
    onThreadId: setThreadId,
  });

  // threadId is updated when a new thread is created
  // Store it in URL params or localStorage for persistence
}
We recommend storing the threadId so users can resume the conversation after a page refresh.
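One way to do that is to keep the thread ID in the URL query string, so a refresh (or a shared link) restores the same conversation. A minimal sketch; readThreadId and writeThreadId are hypothetical helper names, not part of the SDK:

```typescript
// Hypothetical helpers for persisting the thread ID in the URL query string.
function readThreadId(search: string): string | null {
  return new URLSearchParams(search).get("threadId");
}

function writeThreadId(search: string, threadId: string): string {
  const params = new URLSearchParams(search);
  params.set("threadId", threadId);
  return `?${params.toString()}`;
}
```

In the component, seed state with `useState(() => readThreadId(window.location.search))` and, inside onThreadId, call `history.replaceState(null, "", writeThreadId(window.location.search, id))`.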

Resuming after page refresh

By setting reconnectOnMount: true, the useStream hook automatically resumes in-progress runs when the component mounts. This is useful for continuing a stream after a page refresh, ensuring no messages or events produced while the page was gone are lost.
const stream = useStream({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
  reconnectOnMount: true,
});
By default, the ID of a created run is stored in window.sessionStorage; swap it out by passing a custom storage factory:
const stream = useStream({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
  reconnectOnMount: () => window.localStorage,
});
To control the resumption process manually, use the run callbacks to persist metadata and joinStream to resume:
import { useStream } from "@langchain/langgraph-sdk/react";
import { useEffect, useRef } from "react";

function Chat({ threadId }: { threadId: string | null }) {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
    threadId,
    onCreated: (run) => {
      // Persist run ID when stream starts
      window.sessionStorage.setItem(`resume:${run.thread_id}`, run.run_id);
    },
    onFinish: (_, run) => {
      // Clean up when stream completes
      window.sessionStorage.removeItem(`resume:${run?.thread_id}`);
    },
  });

  // Resume stream on mount if there's a stored run ID
  const joinedThreadId = useRef<string | null>(null);
  useEffect(() => {
    if (!threadId) return;
    const runId = window.sessionStorage.getItem(`resume:${threadId}`);
    if (runId && joinedThreadId.current !== threadId) {
      stream.joinStream(runId);
      joinedThreadId.current = threadId;
    }
  }, [threadId]);

  const handleSubmit = (text: string) => {
    // Use streamResumable to ensure events aren't lost
    stream.submit(
      { messages: [{ type: "human", content: text }] },
      { streamResumable: true }
    );
  };
}

Try the session persistence example

See the session-persistence example for a complete implementation of stream resumption with reconnectOnMount and thread persistence.

Optimistic updates

You can optimistically update client state before the network request completes, giving users instant feedback:
const stream = useStream({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
});

const handleSubmit = (text: string) => {
  const newMessage = { type: "human" as const, content: text };

  stream.submit(
    { messages: [newMessage] },
    {
      optimisticValues(prev) {
        const prevMessages = prev.messages ?? [];
        return { ...prev, messages: [...prevMessages, newMessage] };
      },
    }
  );
};

Optimistic thread creation

submit 中使用 threadId 选项以启用乐观 UI 模式,在这种模式下,您需要在创建线程之前知道线程 ID:
import { useState } from "react";
import { useStream } from "@langchain/langgraph-sdk/react";

function Chat() {
  const [threadId, setThreadId] = useState<string | null>(null);
  const [optimisticThreadId] = useState(() => crypto.randomUUID());

  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
    threadId,
    onThreadId: setThreadId,
  });

  const handleSubmit = (text: string) => {
    // Navigate immediately without waiting for thread creation
    window.history.pushState({}, "", `/threads/${optimisticThreadId}`);

    // Create thread with the predetermined ID
    stream.submit(
      { messages: [{ type: "human", content: text }] },
      { threadId: optimisticThreadId }
    );
  };
}

Displaying cached threads

Use the initialValues option to immediately show cached thread data while the history loads from the server:
function Chat({ threadId, cachedData }) {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
    threadId,
    initialValues: cachedData?.values,
  });

  // Shows cached messages instantly, then updates when server responds
}
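One way to produce that cachedData is to snapshot the stream's values whenever a run finishes. A sketch with hypothetical saveThreadCache / loadThreadCache helpers (the `thread-cache:` key prefix is an arbitrary choice, not an SDK convention):

```typescript
// Hypothetical cache helpers: persist the last-seen thread values as JSON so
// they can seed `initialValues` on the next visit.
interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveThreadCache(store: StringStore, threadId: string, values: unknown): void {
  store.setItem(`thread-cache:${threadId}`, JSON.stringify(values));
}

function loadThreadCache<T>(store: StringStore, threadId: string): T | null {
  const raw = store.getItem(`thread-cache:${threadId}`);
  return raw ? (JSON.parse(raw) as T) : null;
}
```

In the browser, pass window.localStorage as the store, call saveThreadCache from the hook's onFinish callback, and feed loadThreadCache(window.localStorage, threadId) into initialValues.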

Branching

Create alternate conversation paths by editing previous messages or regenerating AI responses. Use getMessagesMetadata() to access the checkpoint information needed for branching:
import { useStream } from "@langchain/langgraph-sdk/react";
import { BranchSwitcher } from "./BranchSwitcher";

function Chat() {
  const stream = useStream({
    apiUrl: "http://localhost:2024",
    assistantId: "agent",
  });

  return (
    <div>
      {stream.messages.map((message) => {
        const meta = stream.getMessagesMetadata(message);
        const parentCheckpoint = meta?.firstSeenState?.parent_checkpoint;

        return (
          <div key={message.id}>
            <div>{message.content as string}</div>

            {/* Edit human messages */}
            {message.type === "human" && (
              <button
                onClick={() => {
                  const newContent = prompt("Edit message:", message.content as string);
                  if (newContent) {
                    stream.submit(
                      { messages: [{ type: "human", content: newContent }] },
                      { checkpoint: parentCheckpoint }
                    );
                  }
                }}
              >
                Edit
              </button>
            )}

            {/* Regenerate AI messages */}
            {message.type === "ai" && (
              <button
                onClick={() => stream.submit(undefined, { checkpoint: parentCheckpoint })}
              >
                Regenerate
              </button>
            )}

            {/* Switch between branches */}
            <BranchSwitcher
              branch={meta?.branch}
              branchOptions={meta?.branchOptions}
              onSelect={(branch) => stream.setBranch(branch)}
            />
          </div>
        );
      })}
    </div>
  );
}
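The BranchSwitcher imported above is not part of the SDK; at its core it only needs to know where the current branch sits among its siblings. A sketch of that logic, with branchPosition as a hypothetical helper over the branch and branchOptions values exposed by getMessagesMetadata:

```typescript
// Hypothetical helper behind a BranchSwitcher control: compute a "2 / 3"
// style label plus the neighboring branches to switch to.
function branchPosition(
  branch: string | undefined,
  options: string[] | undefined
): { label: string; prev: string | null; next: string | null } | null {
  if (!branch || !options || options.length <= 1) return null;
  const index = options.indexOf(branch);
  if (index === -1) return null;
  return {
    label: `${index + 1} / ${options.length}`,
    prev: index > 0 ? options[index - 1] : null,
    next: index < options.length - 1 ? options[index + 1] : null,
  };
}
```

The component can then render prev/next buttons that call stream.setBranch(pos.prev) and stream.setBranch(pos.next), hiding itself when branchPosition returns null.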
For advanced use cases, use the experimental_branchTree property to get a tree representation of the thread for graphs that are not message-based.

Try the branching example

See the branching-chat example for a complete implementation of conversation branching with editing, regeneration, and branch switching.

Type-safe streaming

The useStream hook supports full type inference when used with an agent created via createAgent or a graph created via StateGraph. Pass typeof agent or typeof graph as a type parameter to infer tool call types automatically.

With createAgent

When using createAgent, tool call types are automatically inferred from the tools you register with the agent:
import { createAgent, tool } from "langchain";
import { z } from "zod";

const getWeather = tool(
  async ({ location }) => `Weather in ${location}: Sunny, 72°F`,
  {
    name: "get_weather",
    description: "Get weather for a location",
    schema: z.object({
      location: z.string().describe("The city to get weather for"),
    }),
  }
);

export const agent = createAgent({
  model: "openai:gpt-4.1-mini",
  tools: [getWeather],
});

With StateGraph

For custom StateGraph applications, state types are inferred from the graph's annotations:
import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4.1-mini" });

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge(START, "agent")
  .addEdge("agent", END);

export const graph = workflow.compile();

Using Annotation types

If you're using LangGraph.js, you can reuse your graph's annotation types. Make sure to import them as types only, to avoid pulling in the entire LangGraph.js runtime:
import {
  Annotation,
  MessagesAnnotation,
  type StateType,
  type UpdateType,
} from "@langchain/langgraph/web";

const AgentState = Annotation.Root({
  ...MessagesAnnotation.spec,
  context: Annotation<string>(),
});

const stream = useStream<
  StateType<typeof AgentState.spec>,
  { UpdateType: UpdateType<typeof AgentState.spec> }
>({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
});

Advanced type configuration

You can specify additional type parameters for interrupts, custom events, and configurable options:
import type { Message } from "@langchain/langgraph-sdk";

type State = { messages: Message[]; context?: string };

const stream = useStream<
  State,
  {
    UpdateType: { messages: Message[] | Message; context?: string };
    InterruptType: string;
    CustomEventType: { type: "progress" | "debug"; payload: unknown };
    ConfigurableType: { model: string };
  }
>({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
});

// stream.interrupt is typed as string | undefined
// onCustomEvent receives typed events

Rendering tool calls

Use getToolCalls to extract and render tool calls from AI messages. Tool calls include the invocation details, the result (if completed), and the status.
import { useStream } from "@langchain/langgraph-sdk/react";
import type { agent } from "./agent";
import { ToolCallCard } from "./ToolCallCard";
import { MessageBubble } from "./MessageBubble";

function Chat() {
  const stream = useStream<typeof agent>({
    assistantId: "agent",
    apiUrl: "http://localhost:2024",
  });

  return (
    <div className="flex flex-col gap-4">
      {stream.messages.map((message, idx) => {
        if (message.type === "ai") {
          const toolCalls = stream.getToolCalls(message);

          if (toolCalls.length > 0) {
            return (
              <div key={message.id ?? idx} className="flex flex-col gap-2">
                {toolCalls.map((toolCall) => (
                  <ToolCallCard key={toolCall.id} toolCall={toolCall} />
                ))}
              </div>
            );
          }
        }

        return <MessageBubble key={message.id ?? idx} message={message} />;
      })}
    </div>
  );
}

Try the tool calling example

See the tool-calling-agent example for a complete implementation of tool call rendering with weather, calculator, and notes tools.

Custom streaming events

Stream custom data from your agent using the writer in your tools or nodes. Handle these events in the UI with the onCustomEvent callback.
import { tool, type ToolRuntime } from "langchain";
import { z } from "zod";

// Define your custom event types
interface ProgressData {
  type: "progress";
  id: string;
  message: string;
  progress: number;
}

const analyzeDataTool = tool(
  async ({ dataSource }, config: ToolRuntime) => {
    const steps = ["Connecting...", "Fetching...", "Processing...", "Done!"];
    // Use one stable ID so the UI can correlate successive progress events
    const id = `analysis-${Date.now()}`;

    for (let i = 0; i < steps.length; i++) {
      // Emit progress events during execution
      config.writer?.({
        type: "progress",
        id,
        message: steps[i],
        progress: ((i + 1) / steps.length) * 100,
      } satisfies ProgressData);

      await new Promise((resolve) => setTimeout(resolve, 500));
    }

    return JSON.stringify({ result: "Analysis complete" });
  },
  {
    name: "analyze_data",
    description: "Analyze data with progress updates",
    schema: z.object({
      dataSource: z.string().describe("Data source to analyze"),
    }),
  }
);
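On the client, these events arrive through onCustomEvent, typically several times per tool call. One way to accumulate them (a sketch; upsertProgress is a hypothetical helper, not an SDK export) is to key events by id so a later update replaces the earlier one instead of appending a duplicate:

```typescript
// Fold incoming ProgressData events into a list keyed by id, so re-emitted
// progress updates replace the earlier entry for the same analysis run.
interface ProgressData {
  type: "progress";
  id: string;
  message: string;
  progress: number;
}

function upsertProgress(events: ProgressData[], incoming: ProgressData): ProgressData[] {
  const index = events.findIndex((e) => e.id === incoming.id);
  if (index === -1) return [...events, incoming];
  return events.map((e, i) => (i === index ? incoming : e));
}
```

In the component, something like `onCustomEvent: (data) => setEvents((prev) => upsertProgress(prev, data as ProgressData))` keeps one progress entry per run, ready to render as a progress bar.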

Try the custom streaming example

See the custom-streaming example for a complete implementation of custom events with progress bars, status badges, and file operation cards.

Event handling

The useStream hook provides callback options that give you access to the different types of streaming events. You don't need to configure stream modes explicitly; just pass callbacks for the event types you want to handle:
const stream = useStream({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",

  // Handle state updates after each graph step
  onUpdateEvent: (update, options) => {
    console.log("Graph update:", update);
  },

  // Handle custom events streamed from your graph
  onCustomEvent: (event, options) => {
    console.log("Custom event:", event);
  },

  // Handle metadata events with run/thread info
  onMetadataEvent: (metadata) => {
    console.log("Run ID:", metadata.run_id);
    console.log("Thread ID:", metadata.thread_id);
  },

  onError: (error) => {
    console.error("Stream error:", error);
  },

  onFinish: (state, options) => {
    console.log("Stream finished with final state:", state);
  },
});

Available callbacks

  • onUpdateEvent: called when a state update is received after each graph step (stream mode: updates)
  • onCustomEvent: called when a custom event is received from your graph (stream mode: custom)
  • onMetadataEvent: called with run and thread metadata (stream mode: metadata)
  • onError: called when an error occurs
  • onFinish: called when the stream finishes

Multi-agent streaming

When working with multi-agent systems or graphs with multiple nodes, use message metadata to identify which node produced each message. This is especially useful when multiple LLMs run in parallel and you want to display their output with distinct visual styles.
import { useStream } from "@langchain/langgraph-sdk/react";
import type { agent } from "./agent";
import { MessageBubble } from "./MessageBubble";

// Node configuration for visual display
const NODE_CONFIG: Record<string, { label: string; color: string }> = {
  researcher_analytical: { label: "Analytical Research", color: "cyan" },
  researcher_creative: { label: "Creative Research", color: "purple" },
  researcher_practical: { label: "Practical Research", color: "emerald" },
};

function MultiAgentChat() {
  const stream = useStream<typeof agent>({
    assistantId: "parallel-research",
    apiUrl: "http://localhost:2024",
  });

  return (
    <div className="flex flex-col gap-4">
      {stream.messages.map((message, idx) => {
        if (message.type !== "ai") {
          return <MessageBubble key={message.id ?? idx} message={message} />;
        }

        // Get streaming metadata to identify the source node
        const metadata = stream.getMessagesMetadata?.(message);
        const nodeName =
          (metadata?.streamMetadata?.langgraph_node as string) ||
          (message as { name?: string }).name;

        const config = nodeName ? NODE_CONFIG[nodeName] : null;

        if (!config) {
          return <MessageBubble key={message.id ?? idx} message={message} />;
        }

        return (
          <div
            key={message.id ?? idx}
            className={`bg-${config.color}-950/30 border border-${config.color}-500/30 rounded-xl p-4`}
          >
            <div className={`text-sm font-semibold text-${config.color}-400 mb-2`}>
              {config.label}
            </div>
            <div className="text-neutral-200 whitespace-pre-wrap">
              {typeof message.content === "string" ? message.content : ""}
            </div>
          </div>
        );
      })}
    </div>
  );
}

Try the parallel research example

See the parallel-research example for a complete implementation of multi-agent streaming with three parallel researchers and distinct visual styles.

Human-in-the-loop

Handle interrupts when the agent needs human approval for tool execution. Learn more in the How to handle interrupts guide.
import { useState } from "react";
import { useStream } from "@langchain/langgraph-sdk/react";
import type { HITLRequest, HITLResponse } from "langchain";
import type { agent } from "./agent";
import { MessageBubble } from "./MessageBubble";

function HumanInTheLoopChat() {
  const stream = useStream<typeof agent, { InterruptType: HITLRequest }>({
    assistantId: "human-in-the-loop",
    apiUrl: "http://localhost:2024",
  });

  const [isProcessing, setIsProcessing] = useState(false);

  // Type assertion for interrupt value
  const hitlRequest = stream.interrupt?.value as HITLRequest | undefined;

  const handleApprove = async (index: number) => {
    if (!hitlRequest) return;
    setIsProcessing(true);

    try {
      // A decision is required for every pending action; this example
      // approves them all (index identifies the clicked action)
      const decisions: HITLResponse["decisions"] =
        hitlRequest.actionRequests.map(() => ({ type: "approve" }));

      await stream.submit(null, {
        command: {
          resume: { decisions } as HITLResponse,
        },
      });
    } finally {
      setIsProcessing(false);
    }
  };

  const handleReject = async (index: number, reason: string) => {
    if (!hitlRequest) return;
    setIsProcessing(true);

    try {
      const decisions: HITLResponse["decisions"] =
        hitlRequest.actionRequests.map((_, i) =>
          i === index
            ? { type: "reject", message: reason }
            : { type: "reject", message: "Rejected along with other actions" }
        );

      await stream.submit(null, {
        command: {
          resume: { decisions } as HITLResponse,
        },
      });
    } finally {
      setIsProcessing(false);
    }
  };

  return (
    <div>
      {/* Render messages */}
      {stream.messages.map((message, idx) => (
        <MessageBubble key={message.id ?? idx} message={message} />
      ))}

      {/* Render approval UI when interrupted */}
      {hitlRequest && hitlRequest.actionRequests.length > 0 && (
        <div className="bg-amber-900/20 border border-amber-500/30 rounded-xl p-4 mt-4">
          <h3 className="text-amber-400 font-semibold mb-4">
            Action requires approval
          </h3>

          {hitlRequest.actionRequests.map((action, idx) => (
            <div
              key={idx}
              className="bg-neutral-900 rounded-lg p-4 mb-4 last:mb-0"
            >
              <div className="flex items-center gap-2 mb-2">
                <span className="text-sm font-mono text-white">
                  {action.name}
                </span>
              </div>

              <pre className="text-xs bg-black rounded p-2 mb-3 overflow-x-auto">
                {JSON.stringify(action.args, null, 2)}
              </pre>

              <div className="flex gap-2">
                <button
                  onClick={() => handleApprove(idx)}
                  disabled={isProcessing}
                  className="px-3 py-1.5 bg-green-600 hover:bg-green-700 text-white text-sm rounded disabled:opacity-50"
                >
                  Approve
                </button>
                <button
                  onClick={() => handleReject(idx, "User rejected")}
                  disabled={isProcessing}
                  className="px-3 py-1.5 bg-red-600 hover:bg-red-700 text-white text-sm rounded disabled:opacity-50"
                >
                  Reject
                </button>
              </div>
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

Try the human-in-the-loop example

See the human-in-the-loop example for a complete implementation of an approval workflow with approve, reject, and edit actions.

Reasoning models

Extended reasoning/thinking support is currently experimental. The streaming interface for reasoning tokens differs between providers (OpenAI vs. Anthropic) and may change as the abstraction evolves.
When using models with extended reasoning capabilities, such as OpenAI's reasoning models or Anthropic's extended thinking, the thinking process is embedded in the message content. You need to extract and display it separately.
import { useStream } from "@langchain/langgraph-sdk/react";
import type { agent } from "./agent";
import { getReasoningFromMessage, getTextContent } from "./utils";
import { MessageBubble } from "./MessageBubble";

function ReasoningChat() {
  const stream = useStream<typeof agent>({
    assistantId: "reasoning-agent",
    apiUrl: "http://localhost:2024",
  });

  return (
    <div className="flex flex-col gap-4">
      {stream.messages.map((message, idx) => {
        if (message.type === "ai") {
          const reasoning = getReasoningFromMessage(message);
          const textContent = getTextContent(message);

          return (
            <div key={message.id ?? idx}>
              {/* Render reasoning bubble if present */}
              {reasoning && (
                <div className="mb-4">
                  <div className="text-xs font-medium text-amber-400/80 mb-2">
                    Reasoning
                  </div>
                  <div className="bg-amber-950/50 border border-amber-500/20 rounded-2xl px-4 py-3">
                    <div className="text-sm text-amber-100/90 whitespace-pre-wrap">
                      {reasoning}
                    </div>
                  </div>
                </div>
              )}

              {/* Render text content */}
              {textContent && (
                <div className="text-neutral-100 whitespace-pre-wrap">
                  {textContent}
                </div>
              )}
            </div>
          );
        }

        return <MessageBubble key={message.id ?? idx} message={message} />;
      })}

      {stream.isLoading && (
        <div className="flex items-center gap-2 text-amber-400/70">
          <span className="text-sm">Thinking...</span>
        </div>
      )}
    </div>
  );
}
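The getReasoningFromMessage and getTextContent utilities imported from ./utils are left to you, and their logic depends on the provider's content shape. A hedged sketch, assuming content is either a plain string or an array of blocks with a type discriminator ("text" / "reasoning"):

```typescript
// Hedged sketch of the ./utils helpers; the content shapes are assumptions.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "reasoning"; reasoning: string }
  | { type: string; [key: string]: unknown };

interface AIMessageLike {
  type: "ai";
  content: string | ContentBlock[];
}

// Concatenate all plain-text blocks (or return string content as-is).
function getTextContent(message: AIMessageLike): string {
  if (typeof message.content === "string") return message.content;
  return message.content
    .filter((b): b is { type: "text"; text: string } => b.type === "text")
    .map((b) => b.text)
    .join("");
}

// Collect reasoning blocks, or null when the message has none.
function getReasoningFromMessage(message: AIMessageLike): string | null {
  if (typeof message.content === "string") return null;
  const parts = message.content
    .filter((b): b is { type: "reasoning"; reasoning: string } => b.type === "reasoning")
    .map((b) => b.reasoning);
  return parts.length > 0 ? parts.join("\n") : null;
}
```

Adapt the discriminators to your provider; Anthropic's extended thinking, for example, arrives in differently named blocks than OpenAI's reasoning output.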

Try the reasoning example

See the reasoning-agent example for a complete implementation of reasoning token display with OpenAI and Anthropic models.

Custom state types

For custom LangGraph applications, embed your tool call types in your state's messages property.
import { Message } from "@langchain/langgraph-sdk";
import { useStream } from "@langchain/langgraph-sdk/react";

// Define your tool call types as a discriminated union
type MyToolCalls =
  | { name: "search"; args: { query: string }; id?: string }
  | { name: "calculate"; args: { expression: string }; id?: string };

// Embed tool call types in your state's messages
interface MyGraphState {
  messages: Message<MyToolCalls>[];
  context?: string;
}

function CustomGraphChat() {
  const stream = useStream<MyGraphState>({
    assistantId: "my-graph",
    apiUrl: "http://localhost:2024",
  });

  // stream.values is typed as MyGraphState
  // stream.toolCalls[0].call.name is typed as "search" | "calculate"
}
You can also specify additional type configuration for interrupts and configurable options:
interface MyGraphState {
  messages: Message<MyToolCalls>[];
}

function CustomGraphChat() {
  const stream = useStream<
    MyGraphState,
    {
      InterruptType: { question: string };
      ConfigurableType: { userId: string };
    }
  >({
    assistantId: "my-graph",
    apiUrl: "http://localhost:2024",
  });

  // stream.interrupt is typed as { question: string } | undefined
}

Custom transport

For custom API endpoints or non-standard deployments, use the transport option with FetchStreamTransport to connect to any streaming API.
import { useMemo } from "react";
import { useStream, FetchStreamTransport } from "@langchain/langgraph-sdk/react";
import { MessageBubble } from "./MessageBubble";

function CustomAPIChat({ apiKey }: { apiKey: string }) {
  // Create transport with custom request handling
  const transport = useMemo(() => {
    return new FetchStreamTransport({
      apiUrl: "/api/my-agent",
      onRequest: async (url: string, init: RequestInit) => {
        // Inject API key or other custom data into requests
        const customBody = JSON.stringify({
          ...(init.body ? JSON.parse(init.body as string) : {}),
          apiKey,
        });

        return {
          ...init,
          body: customBody,
          headers: {
            ...init.headers,
            "X-Custom-Header": "value",
          },
        };
      },
    });
  }, [apiKey]);

  const stream = useStream({
    transport,
  });

  // Use stream as normal
  return (
    <div>
      {stream.messages.map((message, idx) => (
        <MessageBubble key={message.id ?? idx} message={message} />
      ))}
    </div>
  );
}

Example: streaming from a Next.js endpoint

Instead of running a separate Agent Server, you can host your agent in a Next.js API route. The useStream hook communicates over server-sent events (SSE), so any endpoint that returns the correct event format works, including your own Next.js routes.
When to use your own API route vs. LangSmith: for basic agent interactions (streaming, tool calls), rolling your own Next.js API route works well. For persisted conversations, loading thread history, and conversation branching, consider LangSmith deployments, which provide these features out of the box.

Server: streaming from an API route

In your Next.js API route, stream the agent's output by returning a Response with text/event-stream encoding:
// app/api/agent/route.ts
import { NextRequest } from "next/server";
import { createAgent, tool } from "langchain";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemorySaver } from "@langchain/langgraph";
import type { BaseMessage } from "@langchain/core/messages";

const checkpointer = new MemorySaver();

export async function POST(request: NextRequest) {
  const body = await request.json();

  const agent = createAgent({
    model: new ChatAnthropic({ model: "claude-sonnet-4-6" }),
    tools: [/* your tools */],
    checkpointer,
  });

  const stream = await agent.stream(
    { messages: body.messages },
    {
      encoding: "text/event-stream",
      streamMode: ["values", "updates", "messages"],
      configurable: body.config?.configurable,
      recursionLimit: 10,
    }
  );

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}

Client: connecting with FetchStreamTransport

使用 FetchStreamTransportuseStream 指向您的 Next.js API 路由。当使用自定义端点时,传递 transport 而不是 apiUrlassistantId
import { useMemo } from "react";
import { useStream, FetchStreamTransport } from "@langchain/langgraph-sdk/react";

function ChatInterface({ apiKey }: { apiKey: string }) {
  const transport = useMemo(() => {
    return new FetchStreamTransport({
      apiUrl: "/api/agent",
      onRequest: async (url: string, init: RequestInit) => {
        // Inject API key or other data into the request body
        const customBody = JSON.stringify({
          ...(init.body ? JSON.parse(init.body as string) : {}),
          apiKey,
        });
        return { ...init, body: customBody };
      },
    });
  }, [apiKey]);

  const stream = useStream({
    transport,
  });

  return (
    <div>
      {stream.messages.map((message, idx) => (
        <div key={message.id ?? idx}>{/* render message */}</div>
      ))}
      {stream.isLoading && <div>Loading...</div>}
    </div>
  );
}
For thread history, pass threadId to useStream and include it in configurable when streaming from your API route. The agent's checkpointer will then load and persist state per thread.
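Server-side, that amounts to merging the client-sent thread ID into the configurable passed to agent.stream. A sketch with a hypothetical buildConfigurable helper (the request body shape is an assumption; thread_id is the key LangGraph checkpointers use to scope persisted state):

```typescript
// Hypothetical server-side helper: merge the client-sent threadId into
// `configurable` so the checkpointer persists state per thread.
interface AgentRequestBody {
  threadId?: string;
  config?: { configurable?: Record<string, unknown> };
}

function buildConfigurable(body: AgentRequestBody): Record<string, unknown> {
  return {
    ...body.config?.configurable,
    ...(body.threadId ? { thread_id: body.threadId } : {}),
  };
}
```

In the route handler, pass `configurable: buildConfigurable(body)` to agent.stream; on the client, include threadId in the request body via the transport's onRequest hook.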

Complete Next.js example

langchain-nextjs 存储库中查看带有 LangChain agents、流式聊天和工具调用的完整 Next.js 应用程序。
