Commit 3d55037

chore: Add poetry check to make lint to match CI

1 parent 8501e8b

File tree

8 files changed: +653 -247 lines changed


libs/redis/Makefile

Lines changed: 1 addition & 0 deletions
@@ -32,6 +32,7 @@ lint lint_diff lint_package lint_tests:
 	poetry run ruff format $(PYTHON_FILES) --diff
 	poetry run ruff check $(PYTHON_FILES) --select I $(PYTHON_FILES)
 	mkdir -p $(MYPY_CACHE); poetry run mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
+	poetry check

 format format_diff:
 	poetry run ruff format $(PYTHON_FILES)

libs/redis/README.md

Lines changed: 15 additions & 3 deletions
@@ -148,12 +148,12 @@ docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)
 
 ### 2. Cache
 
-The `RedisCache` and `RedisSemanticCache` classes provide caching mechanisms for LLM calls.
+The `RedisCache`, `RedisSemanticCache`, and `LangCacheSemanticCache` classes provide caching mechanisms for LLM calls.
 
 #### Usage
 
 ```python
-from langchain_redis import RedisCache, RedisSemanticCache
+from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
 from langchain_core.language_models import LLM
 from langchain_core.embeddings import Embeddings
 
@@ -168,8 +168,15 @@ semantic_cache = RedisSemanticCache(
     distance_threshold=0.1
 )
 
+# LangChain cache - manages embeddings for you
+langchain_cache = LangCacheSemanticCache(
+    cache_id="your-cache-id",
+    api_key="your-api-key",
+    distance_threshold=0.1
+)
+
 # Using cache with an LLM
-llm = LLM(cache=cache)  # or LLM(cache=semantic_cache)
+llm = LLM(cache=cache)  # or LLM(cache=semantic_cache) or LLM(cache=langchain_cache)
 
 # Async cache operations
 await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
@@ -182,6 +189,11 @@ cached_result = await cache.alookup("prompt", "llm_string")
 - Semantic caching for similarity-based retrieval
 - Asynchronous cache operations
 
+#### What is Redis LangCache?
+- LangCache is a fully managed, cloud-based service that provides a semantic cache for LLM applications.
+- It manages embeddings and vector search for you, allowing you to focus on your application logic.
+- See [our docs](https://redis.io/docs/latest/develop/ai/langcache/) to learn more, or [try LangCache on Redis Cloud today](https://redis.io/docs/latest/operate/rc/langcache/#get-started-with-langcache-on-redis-cloud).
+
 ### 3. Chat History
 
 The `RedisChatMessageHistory` class provides a Redis-based storage for chat message history with efficient search capabilities.
