SerpApi allows you to integrate search engine results into your LLM applications. This guide provides a quick overview of getting started with the SerpApi tool. For detailed documentation of all SerpAPI features and configurations, head to the API reference.

Overview

Integration details

Setup

The integration lives in the @langchain/community package, which you can install as shown below:
npm install @langchain/community @langchain/core

Credentials

Set up an API key here, and set it as an environment variable named SERPAPI_API_KEY.
process.env.SERPAPI_API_KEY = "YOUR_API_KEY"
It's also helpful (but not required) to set up LangSmith for best-in-class observability:
process.env.LANGSMITH_TRACING="true"
process.env.LANGSMITH_API_KEY="your-api-key"

Instantiation

You can import and instantiate an instance of the SerpAPI tool like this:
import { SerpAPI } from "@langchain/community/tools/serpapi";

const tool = new SerpAPI();

Invocation

Invoke directly with args

You can invoke the tool directly like this:
await tool.invoke({
  input: "what is the current weather in SF?"
});
{"type":"weather_result","temperature":"63","unit":"Fahrenheit","precipitation":"3%","humidity":"91%","wind":"5 mph","location":"San Francisco, CA","date":"Sunday 9:00 AM","weather":"Mostly cloudy"}
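Note that the tool returns its result as a JSON-encoded string. A minimal sketch of parsing the example output above into a usable object (the field names come from this sample weather response and will differ for other query types):

```typescript
// Sample result string, as returned by the tool above.
const raw =
  '{"type":"weather_result","temperature":"63","unit":"Fahrenheit",' +
  '"precipitation":"3%","humidity":"91%","wind":"5 mph",' +
  '"location":"San Francisco, CA","date":"Sunday 9:00 AM","weather":"Mostly cloudy"}';

// Parse into a plain object; all values are strings in this response shape.
const result = JSON.parse(raw) as Record<string, string>;

console.log(`${result.location}: ${result.weather}, ${result.temperature}°F`);
// → San Francisco, CA: Mostly cloudy, 63°F
```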

Invoke with ToolCall

We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:
// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
  args: {
    input: "what is the current weather in SF?"
  },
  id: "1",
  name: tool.name,
  type: "tool_call",
}

await tool.invoke(modelGeneratedToolCall)
ToolMessage {
  "content": "{\"type\":\"weather_result\",\"temperature\":\"63\",\"unit\":\"Fahrenheit\",\"precipitation\":\"3%\",\"humidity\":\"91%\",\"wind\":\"5 mph\",\"location\":\"San Francisco, CA\",\"date\":\"Sunday 9:00 AM\",\"weather\":\"Mostly cloudy\"}",
  "name": "search",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_call_id": "1"
}
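For reference, the tool call object passed to invoke above follows this general shape (a hand-written type for illustration only; the actual ToolCall type is exported from @langchain/core):

```typescript
// Illustrative shape of a tool call — not the canonical type definition.
type ToolCallShape = {
  name: string;                  // must match the tool's name (here, "search")
  args: Record<string, unknown>; // arguments matching the tool's input schema
  id?: string;                   // ties the resulting ToolMessage's tool_call_id back to this call
  type?: "tool_call";
};

const demoCall: ToolCallShape = {
  name: "search",
  args: { input: "what is the current weather in SF?" },
  id: "1",
  type: "tool_call",
};

console.log(demoCall.name); // → search
```

This is why the ToolMessage above carries "tool_call_id": "1" — it echoes the id of the call that produced it.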

Chaining

We can use our tool in a chain by first binding it to a tool-calling model and then calling it:
import { ChatOpenAI } from "@langchain/openai"

const llm = new ChatOpenAI({
  model: "gpt-4.1-mini",
  temperature: 0,
})
import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages(
  [
    ["system", "You are a helpful assistant."],
    ["placeholder", "{messages}"],
  ]
)

const llmWithTools = llm.bindTools([tool]);

const chain = prompt.pipe(llmWithTools);

const toolChain = RunnableLambda.from(
  async (userInput: string, config) => {
    const humanMessage = new HumanMessage(userInput);
    const aiMsg = await chain.invoke({
      messages: [humanMessage],
    }, config);
    const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
    return chain.invoke({
      messages: [humanMessage, aiMsg, ...toolMsgs],
    }, config);
  }
);

const toolChainResult = await toolChain.invoke("what is the current weather in sf?");
const { tool_calls, content } = toolChainResult;

console.log("AIMessage", JSON.stringify({
  tool_calls,
  content,
}, null, 2));
AIMessage {
  "tool_calls": [],
  "content": "The current weather in San Francisco is mostly cloudy, with a temperature of 64°F. The humidity is at 90%, there is a 3% chance of precipitation, and the wind is blowing at 5 mph."
}

Agents

For guides on how to use LangChain tools in agents, see the LangGraph.js docs.

API reference

For detailed documentation of all SerpAPI features and configurations, head to the API reference.