Ollama allows you to run open-source large language models (LLMs), such as Llama 3.1, locally. Ollama bundles model weights, configuration, and data into a single package defined by a Modelfile. It optimizes setup and configuration details, including GPU usage. This guide will help you get started with the ChatOllama chat model. For detailed documentation of all ChatOllama features and configurations, head to the API reference.

Overview

Integration details

Ollama lets you use a wide range of models with different capabilities. Some of the fields in the details table below only apply to a subset of the models Ollama offers. For a complete list of supported models and model variants, see the Ollama model library and search by tag.
| Class | Package | Serializable | PY support | Downloads | Version |
| --- | --- | --- | --- | --- | --- |
| ChatOllama | @langchain/ollama | beta | ✅ | NPM - Downloads | NPM - Version |

Model features

See the links in the table headers below for guides on how to use specific features.

Setup

Follow these instructions to set up and run a local Ollama instance. Then, install the @langchain/ollama package.
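For example, assuming the Ollama CLI is already installed, pulling a model and starting the local server (if it is not already running) looks like this; llama3 is just the model used in the examples below, and any model from the library works:
# pull the model used in the examples below
ollama pull llama3
# start the Ollama server if it is not already running
ollama serve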

Credentials

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"

Installation

The LangChain ChatOllama integration lives in the @langchain/ollama package:
npm install @langchain/ollama @langchain/core

Instantiation

Now we can instantiate our model object and generate chat completions:
import { ChatOllama } from "@langchain/ollama"

const llm = new ChatOllama({
    model: "llama3",
    temperature: 0,
    maxRetries: 2,
    // other params...
})

Invocation

const aiMsg = await llm.invoke([
    [
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ],
    ["human", "I love programming."],
])
aiMsg
AIMessage {
  "content": "Je adore le programmation.\n\n(Note: \"programmation\" is the feminine form of the noun in French, but if you want to use the masculine form, it would be \"le programme\" instead.)",
  "additional_kwargs": {},
  "response_metadata": {
    "model": "llama3",
    "created_at": "2024-08-01T16:59:17.359302Z",
    "done_reason": "stop",
    "done": true,
    "total_duration": 6399311167,
    "load_duration": 5575776417,
    "prompt_eval_count": 35,
    "prompt_eval_duration": 110053000,
    "eval_count": 43,
    "eval_duration": 711744000
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 35,
    "output_tokens": 43,
    "total_tokens": 78
  }
}
console.log(aiMsg.content)
Je adore le programmation.

(Note: "programmation" is the feminine form of the noun in French, but if you want to use the masculine form, it would be "le programme" instead.)

Tools

Ollama now offers native tool-calling support for a subset of its available models. The example below demonstrates how to invoke a tool from an Ollama model.
import { tool } from "@langchain/core/tools";
import { ChatOllama } from "@langchain/ollama";
import * as z from "zod";

const weatherTool = tool((_) => "Da weather is weatherin", {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
});

// Define the model
const llmForTool = new ChatOllama({
  model: "llama3-groq-tool-use",
});

// Bind the tool to the model
const llmWithTools = llmForTool.bindTools([weatherTool]);

const resultFromTool = await llmWithTools.invoke(
  "What's the weather like today in San Francisco? Ensure you use the 'get_current_weather' tool."
);

console.log(resultFromTool);
AIMessage {
  "content": "",
  "additional_kwargs": {},
  "response_metadata": {
    "model": "llama3-groq-tool-use",
    "created_at": "2024-08-01T18:43:13.2181Z",
    "done_reason": "stop",
    "done": true,
    "total_duration": 2311023875,
    "load_duration": 1560670292,
    "prompt_eval_count": 177,
    "prompt_eval_duration": 263603000,
    "eval_count": 30,
    "eval_duration": 485582000
  },
  "tool_calls": [
    {
      "name": "get_current_weather",
      "args": {
        "location": "San Francisco, CA"
      },
      "id": "c7a9d590-99ad-42af-9996-41b90efcf827",
      "type": "tool_call"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 177,
    "output_tokens": 30,
    "total_tokens": 207
  }
}
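The response above contains a tool_calls entry rather than final text. A minimal sketch (not part of the original example) of closing the loop, reusing weatherTool, llmWithTools, and resultFromTool from above, might look like this:
import { HumanMessage, ToolMessage } from "@langchain/core/messages";

// Execute the tool the model asked for and feed the result back.
const toolCall = resultFromTool.tool_calls?.[0];
if (toolCall) {
  // Run the tool with the arguments the model produced
  const toolOutput = await weatherTool.invoke(toolCall.args);
  const finalResponse = await llmWithTools.invoke([
    new HumanMessage("What's the weather like today in San Francisco?"),
    resultFromTool, // the AIMessage containing the tool call
    new ToolMessage({
      content: toolOutput,
      tool_call_id: toolCall.id ?? "",
    }),
  ]);
  console.log(finalResponse.content);
}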

Structured output

Ollama also natively supports structured output for all models, letting you force the model to return a specific format by calling .withStructuredOutput().
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

// Define the schema
const Country = z.object({
  name: z.string(),
  capital: z.string(),
  languages: z.array(z.string()),
});

// Define the model
const llm = new ChatOllama({
  model: "llama3.1",
  temperature: 0,
});

// Pass the schema to enforce a specific output format
const structuredLlm = llm.withStructuredOutput(Country);

const result = await structuredLlm.invoke("Tell me about Canada.");
console.log(result);
{
  name: 'Canada',
  capital: 'Ottawa',
  languages: [ 'English', 'French' ]
}
If you prefer structured output via tool calling, pass the method: "functionCalling" option:
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

// Define the schema
const Sentence = z.object({
  nouns: z.array(z.string()),
});

// Define the model
const llm = new ChatOllama({
  model: "llama3.1",
  temperature: 0,
});

// Use structured output via tool calling
const structuredLlm = llm.withStructuredOutput(Sentence, { method: "functionCalling" });

const result = await structuredLlm.invoke("Extract all nouns: A cat named Luna who is 5 years old and loves playing with yarn. She has grey fur");
console.log(result);
{ nouns: [ 'cat', 'Luna', 'years', 'yarn', 'fur' ] }

Multimodal models

Ollama supports open-source multimodal models like LLaVA in versions 0.1.15 and up. You can pass images as part of a message's content field to multimodal-capable models like this:
import { ChatOllama } from "@langchain/ollama";
import { HumanMessage } from "@langchain/core/messages";
import * as fs from "node:fs/promises";

const imageData = await fs.readFile("../../../../../examples/hotdog.jpg");
const llmForMultiModal = new ChatOllama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
});
const multiModalRes = await llmForMultiModal.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What is in this image?",
      },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    ],
  }),
]);
console.log(multiModalRes);
AIMessage {
  "content": " The image shows a hot dog in a bun, which appears to be a footlong. It has been cooked or grilled to the point where it's browned and possibly has some blackened edges, indicating it might be slightly overcooked. Accompanying the hot dog is a bun that looks toasted as well. There are visible char marks on both the hot dog and the bun, suggesting they have been cooked directly over a source of heat, such as a grill or broiler. The background is white, which puts the focus entirely on the hot dog and its bun. ",
  "additional_kwargs": {},
  "response_metadata": {
    "model": "llava",
    "created_at": "2024-08-01T17:25:02.169957Z",
    "done_reason": "stop",
    "done": true,
    "total_duration": 5700249458,
    "load_duration": 2543040666,
    "prompt_eval_count": 1,
    "prompt_eval_duration": 1032591000,
    "eval_count": 127,
    "eval_duration": 2114201000
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 1,
    "output_tokens": 127,
    "total_tokens": 128
  }
}

API reference

For detailed documentation of all ChatOllama features and configurations, head to the API reference.