Cache integrations
Integrate with caches using LangChain JavaScript.
Caching LLM calls can be useful for testing, cost savings, and speed. Below are some integrations that allow you to cache the results of individual LLM calls using different caches and different strategies.
Azure Cosmos DB NoSQL Semantic Cache
View guide
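The core idea shared by these integrations can be sketched in a few lines: key each LLM call by its prompt and return the stored response on a repeat call, so the (paid, slow) model request is skipped. The sketch below is illustrative only, not LangChain's actual cache API; `withCache` and `LlmCall` are hypothetical names, and a real integration such as the Azure Cosmos DB NoSQL semantic cache plugs the same pattern into an external store with semantic (embedding-based) lookup instead of an exact-match `Map`.

```typescript
// Hypothetical sketch of exact-match LLM-call caching, not LangChain's API.
type LlmCall = (prompt: string) => Promise<string>;

function withCache(call: LlmCall): { cached: LlmCall; hits: () => number } {
  const store = new Map<string, string>(); // prompt -> cached response
  let hits = 0;

  const cached: LlmCall = async (prompt) => {
    const found = store.get(prompt);
    if (found !== undefined) {
      hits += 1; // cache hit: the underlying model call is skipped
      return found;
    }
    const result = await call(prompt); // cache miss: call the model once
    store.set(prompt, result);
    return result;
  };

  return { cached, hits: () => hits };
}
```

In LangChain's real integrations the lookup key also incorporates the model and its settings, since the same prompt sent to a different model should not share a cache entry.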