Disclaimer: `LangChain decorators` is not created or maintained by the LangChain team.
LangChain decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom LangChain prompts and chains. For feedback, issues, or contributions, please raise an issue here: ju-bezdek/langchain-decorators
Main principles and benefits:
  • a more pythonic way of writing code
  • write multiline prompts without breaking your code flow with indentation
  • make use of your IDE's built-in support for hinting, type checking, and popup documentation to quickly peek at a function's prompt, parameters, etc.
  • leverage the full power of the 🦜🔗 LangChain ecosystem
  • support for optional parameters
  • easily share parameters between prompts by binding them to one class
Here is a simple example of code written with LangChain Decorators ✨:
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
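Conceptually, the decorator fills the docstring template with the call arguments (declared defaults included) before handing the prompt to an LLM. Here is a minimal pure-Python sketch of just the templating part; `fill_prompt_template` is a hypothetical helper for illustration, not the library's implementation:

```python
# Hypothetical sketch of the templating idea behind @llm_prompt (NOT the real
# implementation): merge declared defaults with call-time kwargs and fill the
# docstring template. The real decorator builds an LLMChain instead.
import inspect

def fill_prompt_template(func, **kwargs):
    # Collect default values from the signature, then override with kwargs.
    params = {
        name: p.default
        for name, p in inspect.signature(func).parameters.items()
        if p.default is not inspect.Parameter.empty
    }
    params.update(kwargs)
    # cleandoc strips the docstring's leading newline and common indentation.
    return inspect.cleandoc(func.__doc__).format(**params)

def write_me_short_post(topic: str, platform: str = "twitter", audience: str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """

print(fill_prompt_template(write_me_short_post, topic="starwars"))
```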

Quick start

Installation

pip install langchain_decorators

Examples

Good examples to start with:

Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it directly. A standard LLMChain requires many more init parameters than just inputs_variables and prompt... these implementation details are hidden by the decorator. Here is how it works:
  1. Using global settings
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain_openai import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0), # this is the default... can be changed here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True), # this is the default... can be changed here for all prompts
)
  2. Using predefined prompt types
# You can change the default prompt types
from langchain_openai import ChatOpenAI
from langchain_decorators import PromptTypes, PromptTypeSettings, llm_prompt

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4=PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...

  3. Defining the settings directly in the decorator
from langchain_openai import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
    )
def creative_writer(book_title:str)->str:
    ...

Passing memory and/or callbacks:

Just declare them in the function (or use kwargs to pass anything)

from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", audience:str="developers", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")

Simplified streaming

If we want to take advantage of streaming:
  • we need to define the prompt as an async function
  • turn on streaming on the decorator, or define a PromptType with streaming enabled
  • capture the stream using StreamingContext
This way we just mark which prompt should be streamed, without needing to worry about which LLM to use or how to create and distribute a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt / prompt type. Streaming happens only when we call it in a streaming context... there we can define a simple function to handle the stream.
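Conceptually, a streaming context can route tokens through a context-local callback: the context sets the callback, and the prompt's token handler forwards to it only while a context is active. The sketch below is an illustrative approximation; `streaming_context` and `emit_token` are hypothetical names, not the library's API:

```python
# Hedged sketch (assumed, not the library's code) of how a streaming context
# can distribute tokens: a ContextVar holds the active callback, and tokens
# emitted outside any context are silently dropped.
import contextvars
from contextlib import contextmanager

_stream_callback = contextvars.ContextVar("stream_callback", default=None)

@contextmanager
def streaming_context(callback):
    token = _stream_callback.set(callback)
    try:
        yield
    finally:
        _stream_callback.reset(token)

def emit_token(token_text: str):
    # Called by the (hypothetical) LLM streaming handler for each new token.
    cb = _stream_callback.get()
    if cb is not None:          # stream only inside a streaming context
        cb(token_text)

captured = []
with streaming_context(captured.append):
    for t in ["Star", " Wars", " rocks"]:
        emit_token(t)

emit_token("ignored")           # outside the context: silently dropped
print("".join(captured))        # -> "Star Wars rocks"
```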
# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want stream just some prompts in our app)
# note that only async functions can be streamed (will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass



# just an arbitrary function to demonstrate the streaming
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)

Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt.

Documenting your prompt

We can specify which part of our docstring is the prompt definition by marking a code block with the `<prompt>` language tag:
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as a prompt.
    """
    return

## Chat messages prompt

For chat models it's very useful to define the prompt as a set of message templates... here is how to do it:

``` python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
     - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    ```

    ```<prompt:user>
    Hello, who are you
    ```

    ```<prompt:placeholder>
    {history}
    ```

    ```<prompt:user>
    {human_input}
    ```
    """
    pass
```

The roles here are the model-native roles (assistant, user, system for chatGPT)



# Optional sections
- you can define whole sections of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

``` python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "")   ?}

    you can also place it in between the words
    this too will be rendered{? , but
        this  block will be rendered only if {this_value} and {this_value}
        is not empty?} !
    """
```
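The rendering rule above can be sketched in a few lines of pure Python; `render` is a hypothetical helper that only handles the optional `{? ... ?}` blocks, not the library's actual renderer:

```python
# Hedged sketch of the {? ... ?} rule: render an optional block only when every
# {placeholder} inside it has a non-empty value (truthiness is used here as an
# approximation of "not None and not empty string").
import re

def render(template: str, **values):
    def render_optional(match):
        block = match.group(1)
        names = re.findall(r"\{(\w+)\}", block)
        # any empty input drops the whole section
        return block.format(**values) if all(values.get(n) for n in names) else ""
    return re.sub(r"\{\?(.*?)\?\}", render_optional, template, flags=re.DOTALL)

print(render("Hello{?, dear {name}?}!", name="Anna"))  # -> "Hello, dear Anna!"
print(render("Hello{?, dear {name}?}!", name=""))      # -> "Hello!"
```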

Output parsers

  • the llm_prompt decorator natively tries to detect the best output parser based on the output type (if not set, the raw string is returned)
  • list, dict and pydantic outputs are also natively supported (automatically)
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
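To illustrate what such an auto-detected list parser has to do, here is a hedged pure-Python sketch; `parse_list_output` is a hypothetical name, not the library's parser:

```python
# Hedged sketch: turn the model's numbered/bulleted text into a Python list,
# roughly what an auto-detected list output parser must do.
import re

def parse_list_output(text: str):
    items = []
    for line in text.splitlines():
        # strip bullets like "1.", "2)", "-", "*" that models typically emit
        cleaned = re.sub(r"^\s*(?:\d+[.)]|[-*])\s*", "", line).strip()
        if cleaned:
            items.append(cleaned)
    return items

raw = """1. Cookie Crate
2. Doughlight
3. Crumb & Co"""
print(parse_list_output(raw))  # -> ['Cookie Crate', 'Doughlight', 'Crumb & Co']
```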

More complex structures

For dict / pydantic you need to specify the formatting instructions...
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)

Binding a prompt to an object

from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """


    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ``` <prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```

        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))


# More examples:

- these and a few more examples are also available in [this colab notebook](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including a [ReAct Agent](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) re-implemented using pure LangChain decorators
