This page covers using Fireworks models as text completion models. Many popular Fireworks models are chat completion models; you may be looking for that page instead.
Fireworks accelerates product development on generative AI by creating an innovative platform for AI experimentation and production.
This example shows how to use LangChain to interact with Fireworks models.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Downloads | Version |
| --- | --- | --- | --- | --- | --- | --- |
| Fireworks | langchain-fireworks | | | | PyPI - Downloads | PyPI - Version |

Setup

Credentials

Sign in to Fireworks AI to get an API key for accessing our models, and make sure it is set as the FIREWORKS_API_KEY environment variable. Then configure your model with a model ID. If no model is set, the default is fireworks-llama-v2-7b-chat. See fireworks.ai for the most up-to-date list of models.
import getpass
import os

if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")
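
The environment-variable check above can also be wrapped in a small helper that fails loudly when the key is missing. This is an illustrative, stdlib-only sketch; the helper name and the placeholder key are assumptions for demonstration, not part of langchain-fireworks:

```python
import os

def require_fireworks_key() -> str:
    """Return the Fireworks API key from the environment, failing loudly if unset."""
    key = os.environ.get("FIREWORKS_API_KEY")
    if not key:
        raise EnvironmentError(
            "FIREWORKS_API_KEY is not set; create a key at fireworks.ai"
        )
    return key

# For demonstration only: fall back to an obviously fake placeholder key.
os.environ.setdefault("FIREWORKS_API_KEY", "fw-placeholder-not-a-real-key")
print(require_fireworks_key())
```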

Installation

You need the langchain-fireworks Python package installed to run the rest of this notebook.
pip install -qU langchain-fireworks

Instantiation

from langchain_fireworks import Fireworks

# Initialize a Fireworks model
llm = Fireworks(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # Browse the model library at https://app.fireworks.ai/models
    base_url="https://api.fireworks.ai/inference/v1/completions",
)

Invocation

You can invoke the model directly with a string prompt to get a completion.
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
  That's an easy one. It's Aaron Rodgers. Rodgers has consistently been one

Invoking with multiple prompts

# Calling multiple prompts
output = llm.generate(
    [
        "Who's the best cricket player in 2016?",
        "Who's the best basketball player in the league?",
    ]
)
print(output.generations)
[[Generation(text=' You could choose one of the top performers in 2016, such as Vir')], [Generation(text=' -- Keith Jackson\nA: LeBron James, Chris Paul and Kobe Bryant are the')]]
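
As the output above shows, `generate()` returns one list of `Generation` objects per input prompt. The shape of that result can be illustrated without a network call; the `Generation` dataclass below is a stand-in for `langchain_core.outputs.Generation`, not the real class:

```python
from dataclasses import dataclass

# Illustrative stand-in for langchain_core.outputs.Generation.
@dataclass
class Generation:
    text: str

# generate() returns a nested list: one inner list per input prompt.
generations = [
    [Generation(text=" You could choose one of the top performers in 2016, such as Vir")],
    [Generation(text=" -- Keith Jackson\nA: LeBron James, Chris Paul and Kobe Bryant are the")],
]

# Flatten to one completion string per prompt.
texts = [gens[0].text for gens in generations]
print(texts)
```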

Invoking with additional parameters

# Setting additional parameters: temperature, max_tokens, top_p
llm = Fireworks(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    temperature=0.7,
    max_tokens=15,
    top_p=1.0,
)
print(llm.invoke("What's the weather like in Kansas City in December?"))
December is a cold month in Kansas City, with temperatures of

Chaining

You can use the LangChain Expression Language to create a simple chain with non-chat models.
from langchain_core.prompts import PromptTemplate
from langchain_fireworks import Fireworks

llm = Fireworks(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    temperature=0.7,
    max_tokens=15,
    top_p=1.0,
)
prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")
chain = prompt | llm

print(chain.invoke({"topic": "bears"}))
 What do you call a bear with no teeth? A gummy bear!
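
What `prompt | llm` does can be sketched without network access: the template is filled with the input variables, and the resulting string is passed to the model. The names below (`format_prompt`, `fake_llm`) are illustrative stand-ins, not part of LangChain:

```python
# A dependency-free sketch of the prompt -> llm pipeline above.
def format_prompt(template: str, **kwargs) -> str:
    """Fill the {placeholders} in the template, as PromptTemplate does."""
    return template.format(**kwargs)

def fake_llm(prompt: str) -> str:
    """Stand-in for the Fireworks completion call."""
    return f"(completion for: {prompt})"

formatted = format_prompt("Tell me a joke about {topic}?", topic="bears")
print(fake_llm(formatted))  # (completion for: Tell me a joke about bears?)
```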

Streaming

You can stream the output, if desired.
for token in chain.stream({"topic": "bears"}):
    print(token, end="", flush=True)
 Why do bears hate shoes so much? They like to run around in their
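
The same accumulation pattern (collect streamed tokens, then join them into the full completion) can be mimicked locally; the `fake_stream` generator below plays the role of `chain.stream()` and is not part of LangChain:

```python
# Stand-in generator that yields string tokens, like chain.stream() does.
def fake_stream(tokens):
    for token in tokens:
        yield token

chunks = []
for token in fake_stream([" Why do bears", " hate shoes?", " They like bare feet."]):
    chunks.append(token)

# Joining the streamed chunks reconstructs the full completion.
full_response = "".join(chunks)
print(full_response)
```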

API reference

For detailed documentation of all Fireworks LLM features and configuration options, head to the API reference.