Activeloop Deep Lake is a multimodal vector store that stores embeddings and their metadata, including text, JSON, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage, and supports hybrid search over embeddings and their attributes.
This notebook showcases basic functionality of Activeloop Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data: it is a serverless data lake with version control, a query engine, and streaming dataloaders for deep learning frameworks. For more information, see the Deep Lake documentation.

Setup

pip install -qU langchain-openai langchain-deeplake tiktoken

Example provided by Activeloop

LangChain integration

Deep Lake locally

from langchain_deeplake.vectorstores import DeeplakeVectorStore
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

if "ACTIVELOOP_TOKEN" not in os.environ:
    os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("activeloop token:")
from langchain_community.document_loaders import TextLoader

loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Create a local dataset

Create a dataset locally at ./my_deeplake/, then run similarity search. The Deep Lake + LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in Deep Lake storage, adjust the path accordingly.
db = DeeplakeVectorStore(
    dataset_path="./my_deeplake/", embedding_function=embeddings, overwrite=True
)
db.add_documents(docs)
# or shorter
# db = DeeplakeVectorStore.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)

Query dataset

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Later, you can reload the dataset without recomputing the embeddings:
db = DeeplakeVectorStore(
    dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True
)
docs = db.similarity_search(query)
Setting read_only=True prevents accidental modification of the vector store when updates are not needed. This ensures the data remains unchanged unless explicitly intended. It is generally a good practice to specify this argument to avoid unintended updates.
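If you later need to apply updates, reopen the store at the same path in the default writable mode. A minimal sketch, reusing the names from above:
# reopen without read_only (writable is the default) to apply updates
db = DeeplakeVectorStore(
    dataset_path="./my_deeplake/", embedding_function=embeddings
)
db.add_documents(docs)  # e.g. appending documents again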

Retrieval Question/Answering

from langchain_classic.chains import RetrievalQA
from langchain_openai import ChatOpenAI

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    chain_type="stuff",
    retriever=db.as_retriever(),
)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
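Note that qa.run returns only the answer string. On recent LangChain versions, where run is deprecated in favor of invoke, an equivalent call would be (a sketch, assuming the standard Chain interface):
qa.invoke({"query": query})  # returns a dict containing "query" and "result"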

Attribute based filtering in metadata

Let's create another vector store containing metadata with the year the documents were created.
import random

for d in docs:
    d.metadata["year"] = random.randint(2012, 2014)

db = DeeplakeVectorStore.from_documents(
    docs, embeddings, dataset_path="./my_deeplake/", overwrite=True
)
db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    filter={"metadata": {"year": 2013}},
)
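The same filter can also be routed through a retriever using search_kwargs, which is handy when the store feeds a chain. A sketch using the filter syntax shown above:
retriever = db.as_retriever(search_kwargs={"filter": {"metadata": {"year": 2013}}})
retriever.invoke("What did the president say about Ketanji Brown Jackson")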

Choosing distance function

Distance functions: L2 for Euclidean distance, cos for cosine similarity.
db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?", distance_metric="l2"
)
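For comparison, the same search using cosine similarity, assuming the "cos" value noted above:
db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?", distance_metric="cos"
)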

Maximal Marginal Relevance

Using maximal marginal relevance:
db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?"
)
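The standard LangChain parameters k and fetch_k control how many results are returned and how many candidates are fetched before the diversity re-ranking; a sketch:
db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?", k=4, fetch_k=20
)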

Delete dataset

db.delete_dataset()

Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory

By default, Deep Lake datasets are stored locally. To store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path and credentials when creating the vector store. Some paths require registration with Activeloop and creation of an API token, which can be retrieved here.
os.environ["ACTIVELOOP_TOKEN"] = activeloop_token
# Embed and store the texts
username = "<USERNAME_OR_ORG>"  # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_testing_python"  # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.

docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = DeeplakeVectorStore(
    dataset_path=dataset_path, embedding_function=embeddings, overwrite=True
)
ids = db.add_documents(docs)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
# Embed and store the texts
username = "<USERNAME_OR_ORG>"  # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_testing"

docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = DeeplakeVectorStore(
    dataset_path=dataset_path,
    embedding_function=embeddings,
    overwrite=True,
)
ids = db.add_documents(docs)

TQL Search

Additionally, the similarity_search method supports executing queries specified with Deep Lake's Tensor Query Language (TQL).
search_id = db.dataset["ids"][0]
docs = db.similarity_search(
    query=None,
    tql=f"SELECT * WHERE ids == '{search_id}'",
)
db.dataset.summary()

Creating vector stores on AWS S3

dataset_path = "s3://BUCKET/langchain_test"  # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.

embeddings = OpenAIEmbeddings()
db = DeeplakeVectorStore.from_documents(
    docs,
    dataset_path=dataset_path,
    embedding=embeddings,
    overwrite=True,
    creds={
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
        "aws_session_token": os.environ["AWS_SESSION_TOKEN"],  # Optional
    },
)

Deep Lake API

You can access the underlying Deep Lake dataset at db.dataset:
# get structure of the dataset
db.dataset.summary()
# get embeddings numpy array
embeds = db.dataset["embeddings"][:]
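As a quick sanity check, the array shape should be (number of documents, embedding dimension); OpenAI's default embedding model produces 1536-dimensional vectors:
print(embeds.shape)  # e.g. (42, 1536)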

Transfer local dataset to cloud

Copy an already created dataset to the cloud. You can also transfer from cloud to local.
import deeplake

username = "<USERNAME_OR_ORG>"  # your username on app.activeloop.ai
source = f"hub://{username}/langchain_testing"  # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy"  # could be local, s3, gcs, etc.


deeplake.copy(src=source, dst=destination)
db = DeeplakeVectorStore(dataset_path=destination, embedding_function=embeddings)
db.add_documents(docs)
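As a quick verification that the copied dataset serves queries, reusing the query from earlier:
docs = db.similarity_search("What did the president say about Ketanji Brown Jackson")
print(docs[0].page_content)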