This guide helps you get started with Outlines chat models. For detailed documentation of all ChatOutlines features and configurations, head to the API reference. Outlines is a library for constrained language generation: it lets you run large language models (LLMs) on a variety of backends while enforcing constraints on the generated output.

Overview

Integration details

Class: ChatOutlines
Package: langchain-community (PyPI)

Model features

Tool calling · Structured output · Image input · Audio input · Video input · Token-level streaming · Native async · Token usage · Logprobs

Setup

To access Outlines models, you will need an internet connection to download the model weights from Hugging Face. Depending on the backend you choose, you will also need to install the corresponding dependencies (see the Outlines documentation).
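As a rough sketch of what those per-backend installs look like (package names are the ones each backend is commonly distributed under, not taken from this page; consult the Outlines documentation for the authoritative list):

```shell
# llamacpp backend
pip install llama-cpp-python

# vllm backend (Linux, typically with CUDA)
pip install vllm

# mlxlm backend (Apple Silicon Macs only)
pip install mlx-lm

# huggingface transformers backend
pip install transformers torch
```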

Credentials

There is no authentication mechanism built into Outlines.

Installation

The LangChain Outlines integration lives in the langchain-community package, and also requires the outlines library:
pip install -qU langchain-community outlines

Instantiation

Now we can instantiate our model object and generate chat completions:
from langchain_community.chat_models.outlines import ChatOutlines

# llamacpp backend
model = ChatOutlines(model="TheBloke/phi-2-GGUF/phi-2.Q4_K_M.gguf", backend="llamacpp")

# vllm backend (not available on Mac)
model = ChatOutlines(model="meta-llama/Llama-3.2-1B", backend="vllm")

# mlxlm backend (Mac only)
model = ChatOutlines(model="mistralai/Ministral-8B-Instruct-2410", backend="mlxlm")

# huggingface transformers backend
model = ChatOutlines(model="microsoft/phi-2")  # defaults to the transformers backend

Invocation

from langchain_core.messages import HumanMessage

messages = [HumanMessage(content="What will the capital of mars be called?")]
response = model.invoke(messages)

response.content

Streaming

ChatOutlines supports streaming tokens:
messages = [HumanMessage(content="Count to 10 in French:")]

for chunk in model.stream(messages):
    print(chunk.content, end="", flush=True)

Chaining

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | model
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)

Constrained generation

ChatOutlines lets you apply various constraints to the generated output:

Regular expression constraints

model.regex = r"((25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)"

response = model.invoke("What is the IP address of Google's DNS server?")

response.content
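The pattern above constrains generation to a valid IPv4 address (each octet 0–255). Independent of any model, you can sanity-check the pattern itself with Python's re module:

```python
import re

# The same IPv4 pattern assigned to model.regex above
IPV4 = r"((25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)"

# A constrained response such as "8.8.8.8" matches the whole string...
assert re.fullmatch(IPV4, "8.8.8.8") is not None
# ...while an out-of-range octet is rejected
assert re.fullmatch(IPV4, "999.1.1.1") is None
```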

Type constraints

model.type_constraints = int
response = model.invoke("What is the answer to life, the universe, and everything?")

response.content
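Because the int constraint restricts decoding to an integer literal, the response text can be cast without error handling. A minimal sketch, using a hypothetical string in place of response.content:

```python
# Hypothetical constrained output: under type_constraints = int,
# the model can only emit an integer literal, so int() never fails.
content = "42"  # stands in for response.content
answer = int(content)
assert isinstance(answer, int)
```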

Pydantic and JSON Schema

from pydantic import BaseModel


class Person(BaseModel):
    name: str


model.json_schema = Person
response = model.invoke("Who are the main contributors to LangChain?")
person = Person.model_validate_json(response.content)

person
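Since generation is constrained to the Person schema, response.content is guaranteed to be JSON that validates against it. A standalone sketch of the validation step, with a hypothetical JSON string standing in for response.content:

```python
from pydantic import BaseModel


class Person(BaseModel):
    name: str


# Hypothetical constrained output; any real response.content
# validates against the schema the same way.
content = '{"name": "Harrison Chase"}'
person = Person.model_validate_json(content)
assert person.name == "Harrison Chase"
```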

Context-free grammars

model.grammar = """
?start: expression
?expression: term (("+" | "-") term)*
?term: factor (("*" | "/") factor)*
?factor: NUMBER | "-" factor | "(" expression ")"
%import common.NUMBER
%import common.WS
%ignore WS
"""
response = model.invoke("Give me a complex arithmetic expression:")

response.content

LangChain's structured output

You can also use LangChain's structured output with ChatOutlines:
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    answer: str
    justification: str


_model = model.with_structured_output(AnswerWithJustification)
result = _model.invoke("What weighs more, a pound of bricks or a pound of feathers?")

result

API reference

For detailed documentation of all ChatOutlines features and configurations, head to the API reference: python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.outlines.ChatOutlines.html

Full Outlines documentation

dottxt-ai.github.io/outlines/latest/