Merged
42 changes: 42 additions & 0 deletions CLAUDE.md
@@ -10,6 +10,48 @@ The project is structured as a monorepo with the main library at `libs/redis/`.
cd libs/redis
```

## Virtual Environments

Poetry manages dependencies; some users also rely on it to manage Python virtual environments automatically.

In this repository you may encounter a project virtual environment already created locally. Common locations:
- The repository root
- `libs/redis/env/`

Common directory names:
- `.venv`
- `env`
- `venv`
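
The locations and names above can be probed with a quick check like this (an illustrative helper, not part of the repo; the directory names are the ones listed):

```bash
# Print the first common venv directory that exists, or "none".
find_venv() {
  for dir in .venv env venv libs/redis/env; do
    if [ -f "$dir/bin/activate" ]; then
      echo "$dir"
      return 0
    fi
  done
  echo "none"
}
find_venv
```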

Recommended workflow:
1) If a virtual environment exists in the repo, activate it first, then run Python or `make` commands:
```bash
source .venv/bin/activate # or: source libs/redis/env/bin/activate
make test # or any other Make target
```

2) If `poetry` is available on your PATH without activating a venv, you can use it directly:
```bash
# From libs/redis/
make test
# or explicitly
poetry run pytest tests/unit_tests/test_specific.py
```

3) If you run `poetry` or `make` and see `poetry: command not found`, Poetry is
not on your PATH. Try activating the project's virtual environment to see if it
already contains Poetry (e.g., `source libs/redis/env/bin/activate`). If it
doesn't, ask the user whether you should install it.
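
The decision in step 3 can be sketched as a small shell helper (illustrative only: the function name is hypothetical, and the venv path is the example one above):

```bash
# Report how `poetry` can be reached, mirroring step 3 above.
locate_poetry() {
  if command -v poetry >/dev/null 2>&1; then
    echo "on PATH"
  elif [ -f libs/redis/env/bin/activate ]; then
    echo "activate libs/redis/env first"
  else
    echo "not found; ask before installing"
  fi
}
locate_poetry
```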

Notes:
- Makefile targets call `poetry run ...`. When a venv is activated and contains
  Poetry, `make` uses that Poetry and runs inside that venv. When Poetry is on
  your PATH globally, it uses its own managed venv, so you do not need to
  activate one manually.
- Quick checks:
- `which poetry`
- `TEST_FILE=tests/unit_tests/test_specific.py make test`

### Testing
- `make test` - Run unit tests
- `make integration_tests` - Run integration tests (requires OPENAI_API_KEY)
1 change: 1 addition & 0 deletions libs/redis/Makefile
@@ -32,6 +32,7 @@ lint lint_diff lint_package lint_tests:
poetry run ruff format $(PYTHON_FILES) --diff
poetry run ruff check $(PYTHON_FILES) --select I $(PYTHON_FILES)
mkdir -p $(MYPY_CACHE); poetry run mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
poetry check

format format_diff:
poetry run ruff format $(PYTHON_FILES)
18 changes: 15 additions & 3 deletions libs/redis/README.md
@@ -148,12 +148,12 @@ docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)

### 2. Cache

The `RedisCache`, `RedisSemanticCache`, and `LangCacheSemanticCache` classes provide caching mechanisms for LLM calls.

#### Usage

```python
from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
from langchain_core.language_models import LLM
from langchain_core.embeddings import Embeddings

@@ -168,8 +168,15 @@ semantic_cache = RedisSemanticCache(
distance_threshold=0.1
)

# LangCache semantic cache - the managed service handles embeddings for you
langchain_cache = LangCacheSemanticCache(
cache_id="your-cache-id",
api_key="your-api-key",
distance_threshold=0.1
)

# Using cache with an LLM
llm = LLM(cache=cache)  # or LLM(cache=semantic_cache) or LLM(cache=langchain_cache)

# Async cache operations
await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
@@ -182,6 +189,11 @@ cached_result = await cache.alookup("prompt", "llm_string")
- Semantic caching for similarity-based retrieval
- Asynchronous cache operations
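
Conceptually, a semantic cache returns a hit when a stored prompt's embedding lies within `distance_threshold` of the query's embedding. A stdlib-only toy sketch of that lookup rule (illustrative, not the library's implementation; class and method names are hypothetical):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; 0.0 means identical direction, 1.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

class ToySemanticCache:
    def __init__(self, distance_threshold=0.1):
        self.distance_threshold = distance_threshold
        self.entries = []  # list of (embedding, cached response)

    def update(self, embedding, response):
        self.entries.append((embedding, response))

    def lookup(self, embedding):
        # Return the first stored response whose embedding is close enough.
        for stored, response in self.entries:
            if cosine_distance(stored, embedding) <= self.distance_threshold:
                return response
        return None

cache = ToySemanticCache(distance_threshold=0.1)
cache.update([1.0, 0.0], "cached answer")
print(cache.lookup([0.99, 0.05]))  # near-identical embedding -> "cached answer"
print(cache.lookup([0.0, 1.0]))    # orthogonal embedding -> None (miss)
```

The real classes add persistence, vector indexing, and async variants, but the threshold comparison is the core idea.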

#### What is Redis LangCache?
- LangCache is a fully managed, cloud-based service that provides a semantic cache for LLM applications.
- It manages embeddings and vector search for you, allowing you to focus on your application logic.
- See [our docs](https://redis.io/docs/latest/develop/ai/langcache/) to learn more, or [try LangCache on Redis Cloud today](https://redis.io/docs/latest/operate/rc/langcache/#get-started-with-langcache-on-redis-cloud).

### 3. Chat History

The `RedisChatMessageHistory` class provides a Redis-based storage for chat message history with efficient search capabilities.