Basic usage
The simplest way to use messages is to create message objects and pass them to the model when invoking it.
Text prompts
Text prompts are plain strings, suitable for simple generation tasks where you don't need to preserve conversation history. Use a text prompt when:
- You have a standalone request
- You don't need conversation history
- You want minimal code complexity
Message prompts
Alternatively, you can pass the model a list of message objects. Use message prompts when:
- Managing multi-turn conversations
- Handling multimodal content (images, audio, files)
- Including system instructions
Dictionary format
You can also specify messages directly in OpenAI's chat completions format.
Message types
- System message - tells the model how to behave and provides context for the interaction
- Human message - represents user input and interactions with the model
- AI message - a response generated by the model, including text content, tool calls, and metadata
- Tool message - represents the output of a tool call
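The dictionary format mentioned above expresses these roles as plain dictionaries, for example:

```python
# Messages in OpenAI's chat completions format: plain dictionaries with
# "role" and "content" keys. LangChain chat models accept this directly.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate 'hello' to French."},
    {"role": "assistant", "content": "Bonjour!"},
]
# response = model.invoke(messages)  # `model` is any LangChain chat model
```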
System message
A SystemMessage represents an initial set of instructions that primes the model's behavior. You can use a system message to set the tone, define the model's role, and establish guidelines for responses.
Basic instructions
Detailed persona
Human message
A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and any other multimodal content.
Text content
Message metadata
Add metadata
The name field's behavior varies by provider: some use it for user identification, others ignore it. To check, refer to your model provider's reference.
AI message
An AIMessage represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
AIMessage objects are returned when you call the model and contain all of the metadata associated with the response.
Providers weigh and contextualize message types differently, so it is sometimes helpful to manually create a new AIMessage and insert it into the message history as if it came from the model.
Attributes
- text - The text content of the message.
- content - The raw content of the message.
- content_blocks - The standardized content blocks of the message.
- tool_calls - The tool calls made by the model. Empty if no tools are called.
- id - A unique identifier for the message (either automatically generated by LangChain or returned in the provider response).
- usage_metadata - The usage metadata of the message, which can contain token counts when available.
- response_metadata - The response metadata of the message.
Tool calls
When models make tool calls, they're included in the AIMessage:
Token usage
An AIMessage can hold token counts and other usage metadata in its usage_metadata field:
See UsageMetadata for details.
Streaming and chunks
During streaming, you'll receive AIMessageChunk objects that can be combined into a full message object:
Tool message
For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the result of a single tool execution back to the model. Tools can generate ToolMessage objects directly. Below, we show a simple example. Read more in the tools guide.
Attributes
- content - The stringified output of the tool call.
- tool_call_id - The ID of the tool call that this message is responding to. Must match the ID of the tool call in the AIMessage.
- name - The name of the tool that was called.
- artifact - Additional data not sent to the model but accessible programmatically.
The artifact field stores supplementary data that won't be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model's context.
Example: Using artifact for retrieval metadata
For example, a retrieval tool could retrieve a passage from a document for the model to reference. Where the message content contains text that the model will reference, the artifact can contain document identifiers or other metadata that the application can use (e.g., to render a page). See the RAG tutorial for an end-to-end example of building retrieval agents with LangChain.
Message content
You can think of a message's content as the payload of data that gets sent to the model. Messages have a content attribute that is loosely typed, supporting strings and lists of untyped objects (e.g., dictionaries). This allows LangChain chat models to support provider-native structures directly, such as multimodal content and other data.
Separately, LangChain provides dedicated content types for text, reasoning, citations, multi-modal data, server-side tool calls, and other message content. See content blocks below.
LangChain chat models accept message content in the content attribute.
This may contain either:
- A string
- A list of content blocks in a provider-native format
- A list of LangChain’s standard content blocks
Standard content blocks
LangChain provides a standard representation for message content that works across providers. Message objects implement a content_blocks property that lazily parses the content attribute into a standard, type-safe representation. For example, messages generated from ChatAnthropic or ChatOpenAI will include thinking or reasoning blocks in the format of the respective provider, but these can be lazily parsed into a consistent ReasoningContentBlock representation:
- Anthropic
- OpenAI
Serializing standard content
If an application outside of LangChain needs access to the standard content block representation, you can opt in to storing content blocks in message content. To do this, set the LC_OUTPUT_VERSION environment variable to v1, or initialize any chat model with output_version="v1":
Multimodal
Multimodality refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. LangChain includes standard types for these data that can be used across providers. Chat models can accept multimodal data as input and generate it as output. Below we show short examples of input messages featuring multimodal data.
Extra keys can be included at the top level of the content block or nested in "extras": {"key": value}. OpenAI and AWS Bedrock Converse, for example, require a filename for PDFs. See the provider page for your chosen model for specifics.
Content block reference
Content blocks are represented (either when creating a message or accessing the content_blocks property) as a list of typed dictionaries. Each item in the list must adhere to one of the following block types:
Core
TextContentBlock
Multimodal
ImageContentBlock
Purpose: Image data
- type - Always "image".
- url - URL pointing to the image location.
- base64 - Base64-encoded image data.
- id - Unique identifier for this content block (either generated by the provider or by LangChain).
AudioContentBlock
Purpose: Audio data
- type - Always "audio".
- url - URL pointing to the audio location.
- base64 - Base64-encoded audio data.
- id - Unique identifier for this content block (either generated by the provider or by LangChain).
VideoContentBlock
Purpose: Video data
- type - Always "video".
- url - URL pointing to the video location.
- base64 - Base64-encoded video data.
- id - Unique identifier for this content block (either generated by the provider or by LangChain).
FileContentBlock
Purpose: Generic files (PDF, etc.)
- type - Always "file".
- url - URL pointing to the file location.
- base64 - Base64-encoded file data.
- id - Unique identifier for this content block (either generated by the provider or by LangChain).
Tool Calling
ToolCall
ToolCallChunk
Server-Side Tool Execution
ServerToolCall
ServerToolCallChunk
Purpose: Streaming server-side tool call fragments
- type - Always "server_tool_call_chunk".
- id - An identifier associated with the tool call.
- name - Name of the tool being called.
- args - Partial tool arguments (may be incomplete JSON).
- index - Position of this chunk in the stream.
ServerToolResult
Purpose: Search results
- type - Always "server_tool_result".
- tool_call_id - Identifier of the corresponding server tool call.
- id - Identifier associated with the server tool result.
- status - Execution status of the server-side tool: "success" or "error".
- output - Output of the executed tool.
Provider-Specific Blocks
NonStandardContentBlock
Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code. Content blocks are not a replacement for the content property, but rather a new property that exposes the content of a message in a standardized format.
Use with chat models
Chat models accept a sequence of message objects as input and return an AIMessage as output. Interactions are often stateless, so a simple conversational loop involves invoking the model with a growing list of messages.
Refer to the guides below to learn more:
- Built-in features for persisting and managing conversation histories
- Strategies for managing context windows, including trimming and summarizing messages

