turbopuffer is a fast, cost-efficient vector database built for search and retrieval workloads.
This guide shows how to use TurbopufferVectorStore in LangChain.

Setup

To use the turbopuffer vector store, install the langchain-turbopuffer integration package.
pip install -qU langchain-turbopuffer

Credentials

Create an account at turbopuffer.com and obtain an API key.
import getpass
import os

if not os.getenv("TURBOPUFFER_API_KEY"):
    os.environ["TURBOPUFFER_API_KEY"] = getpass.getpass("Enter your turbopuffer API key: ")
To enable automated tracing of your model calls, uncomment the code below and set your LangSmith API key:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Initialization

Create a turbopuffer client and namespace, then initialize the vector store:
from langchain_openai import OpenAIEmbeddings
from turbopuffer import Turbopuffer

tpuf = Turbopuffer(region="gcp-us-central1")
ns = tpuf.namespace("langchain-test")

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
from langchain_turbopuffer import TurbopufferVectorStore

vector_store = TurbopufferVectorStore(embedding=embeddings, namespace=ns)

Manage vector store

Once your vector store has been created, you can interact with it by adding and deleting items.

Add items to vector store

from uuid import uuid4

from langchain_core.documents import Document

document_1 = Document(
    page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
    metadata={"source": "tweet"},
)

document_2 = Document(
    page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
    metadata={"source": "news"},
)

document_3 = Document(
    page_content="Building an exciting new project with LangChain - come check it out!",
    metadata={"source": "tweet"},
)

document_4 = Document(
    page_content="Robbers broke into the city bank and stole $1 million in cash.",
    metadata={"source": "news"},
)

document_5 = Document(
    page_content="Wow! That was an amazing movie. I can't wait to see it again.",
    metadata={"source": "tweet"},
)

document_6 = Document(
    page_content="Is the new iPhone worth the price? Read this review to find out.",
    metadata={"source": "website"},
)

document_7 = Document(
    page_content="The top 10 soccer players in the world right now.",
    metadata={"source": "website"},
)

document_8 = Document(
    page_content="LangGraph is the best framework for building stateful, agentic applications!",
    metadata={"source": "tweet"},
)

document_9 = Document(
    page_content="The stock market is down 500 points today due to fears of a recession.",
    metadata={"source": "news"},
)

document_10 = Document(
    page_content="I have a bad feeling I am going to get deleted :(",
    metadata={"source": "tweet"},
)

documents = [
    document_1,
    document_2,
    document_3,
    document_4,
    document_5,
    document_6,
    document_7,
    document_8,
    document_9,
    document_10,
]
uuids = [str(uuid4()) for _ in range(len(documents))]
vector_store.add_documents(documents=documents, ids=uuids)

Delete items from vector store

vector_store.delete(ids=[uuids[-1]])

Query vector store

Once your vector store has been created and the relevant documents added, you will most likely wish to query it while running your chain or agent.

Query directly

A simple similarity search can be performed as follows:
results = vector_store.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,
    filters=("source", "Eq", "tweet"),
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")

Similarity search with score

You can also search with scores returned. A lower distance indicates greater similarity:
results = vector_store.similarity_search_with_score(
    "Will it be hot tomorrow?", k=1, filters=("source", "Eq", "news")
)
for res, score in results:
    print(f"* [SIM={score:.3f}] {res.page_content} [{res.metadata}]")
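To build intuition for what the returned distance means, here is a conceptual sketch in plain Python (not the turbopuffer API). It assumes a cosine-style metric; the metric actually used depends on how the namespace is configured, so treat this as illustration only:

```python
import math

def cosine_distance(a, b):
    """Cosine distance: 0 for parallel vectors, 1 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Nearly parallel vectors -> small distance (very similar)
close = cosine_distance([1.0, 0.0], [0.9, 0.1])
# Orthogonal vectors -> distance 1.0 (unrelated)
far = cosine_distance([1.0, 0.0], [0.0, 1.0])
```

This is why a lower `SIM` value in the output above indicates a better match.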

Query by turning into retriever

You can also transform the vector store into a retriever for easier usage in your chains.
retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 1, "score_threshold": 0.4},
)
retriever.invoke("Stealing from the bank is a crime")
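Conceptually, a `similarity_score_threshold` retriever discards hits whose relevance score falls below the threshold, then returns the top `k` of what remains. A minimal plain-Python sketch of that behavior (the hit list and scores are made up for illustration):

```python
# Hypothetical (id, relevance score) hits, as a search backend might return them
hits = [("doc-a", 0.91), ("doc-b", 0.35), ("doc-c", 0.62)]
threshold, k = 0.4, 1

# Keep only hits at or above the threshold...
kept = [h for h in hits if h[1] >= threshold]
# ...then return the k best of what remains
top_k = sorted(kept, key=lambda h: h[1], reverse=True)[:k]
# top_k == [("doc-a", 0.91)]
```

With `score_threshold=0.4` and `k=1` as in the retriever above, only the single best hit that clears the threshold is returned.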

Filtering

turbopuffer supports metadata filtering using tuple expressions. You can pass filters to any of the search methods:
results = vector_store.similarity_search(
    "interesting articles",
    k=2,
    filters=("source", "Eq", "website"),
)
For a full list of supported filter operators, see the turbopuffer filtering documentation.
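Filters can also be combined. The sketch below shows one plausible shape for an `And` combinator wrapping two conditions; the exact combinator names and nesting rules are defined by turbopuffer's filtering documentation, so verify the syntax there before relying on it:

```python
# Hypothetical combined filter: both conditions must hold.
# ("And", [...]) wraps a list of (attribute, operator, value) tuples.
combined = (
    "And",
    [
        ("source", "Eq", "tweet"),
        ("source", "NotEq", "news"),
    ],
)
# This value would then be passed as `filters=combined` to similarity_search.
```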

Related resources