Redis Vector Similarity Search Workshop Materials
Video Tutorial (FR) | Slide Deck
- Pre-requisites
- Text Vector Search
- Visual Vector Search
- Hybrid Search
- Semantic Search
- Retrieval-Augmented Generation using GCP VertexAI
- Retrieval-Augmented Generation using AWS Bedrock
You need to create a Redis Enterprise database with the RedisJSON and RediSearch modules, then use its public endpoint in the notebooks.
To create a Redis Enterprise database, you can use Redis Cloud, or provision a cluster in your own infrastructure using Terraform.
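Before running the notebooks, it can help to confirm that the endpoint is reachable and that both modules are loaded. A minimal sketch using redis-py; the host, port, and password below are placeholders, not real workshop values:

```python
def missing_modules(loaded, required=("search", "ReJSON")):
    """Return the required module names absent from a MODULE LIST reply.
    RediSearch registers itself as "search", RedisJSON as "ReJSON"."""
    loaded_lower = {name.lower() for name in loaded}
    return [m for m in required if m.lower() not in loaded_lower]

def check_endpoint(host="redis-12345.example.com", port=12345, password="changeme"):
    """Connect to the database's public endpoint and verify both modules.
    Host, port, and password here are illustrative placeholders."""
    import redis  # kept local so missing_modules works without redis-py installed
    client = redis.Redis(host=host, port=port, password=password,
                         decode_responses=True)
    # The exact reply shape of MODULE LIST can vary across redis-py versions.
    loaded = [m["name"] for m in client.module_list()]
    return missing_modules(loaded)
```

An empty return value from `check_endpoint()` means the database is ready for all of the demos below.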
In this first demo (Text Vector Search), you will learn how to:
- Create vector embeddings for text,
- Persist vector embeddings in Redis,
- Create a Secondary Search Index on these Vectors,
- Find similarity between a new vector (text) and already persisted vectors.
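The steps above can be sketched with redis-py's search API. The index name `txt_idx`, the `doc:` key prefix, and the 384-dimension default are illustrative assumptions, not the workshop's exact values:

```python
import struct

def to_float32_bytes(vec):
    """Pack a vector as little-endian float32 bytes, the binary layout
    RediSearch expects for a FLOAT32 vector field stored in a hash."""
    return struct.pack(f"<{len(vec)}f", *vec)

def index_and_search(client, docs, query_vec, dim=384):
    """Create a vector index over hashes, store (text, embedding) pairs,
    and run a KNN query. `docs` is a list of (text, vector) tuples."""
    # redis-py imports kept local so to_float32_bytes stays usable
    # without the redis package installed.
    from redis.commands.search.field import TextField, VectorField
    from redis.commands.search.indexDefinition import IndexDefinition, IndexType
    from redis.commands.search.query import Query

    client.ft("txt_idx").create_index(
        [
            TextField("content"),
            VectorField("embedding", "HNSW",
                        {"TYPE": "FLOAT32", "DIM": dim,
                         "DISTANCE_METRIC": "COSINE"}),
        ],
        definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
    )
    for i, (text, vec) in enumerate(docs):
        client.hset(f"doc:{i}", mapping={"content": text,
                                         "embedding": to_float32_bytes(vec)})
    # KNN query: the 3 nearest stored vectors to the query vector.
    q = (Query("*=>[KNN 3 @embedding $vec AS score]")
         .sort_by("score")
         .return_fields("content", "score")
         .dialect(2))
    return client.ft("txt_idx").search(q, {"vec": to_float32_bytes(query_vec)})
```

The embeddings themselves come from whatever text model the notebook uses; this sketch only covers the Redis side.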
In this second demo (Visual Vector Search), you will learn how to:
- Create vector embeddings for products (by image),
- Persist vector embeddings in Redis,
- Create a Secondary Search Index on these Vectors,
- Find similarity between a new vector (image) and the already persisted vectors.
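Whether the embeddings come from text or images, an index declared with `DISTANCE_METRIC COSINE` scores results by cosine distance, i.e. 1 minus cosine similarity (lower means more similar). A small reference implementation of that score:

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity): the score RediSearch
    reports for a vector field declared with DISTANCE_METRIC COSINE."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Identical directions score 0.0, orthogonal vectors score 1.0, which is why KNN results are sorted by ascending score.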
In this third demo (Hybrid Search), you will learn how to:
- Create vector embeddings for products (by image),
- Persist JSON documents containing the vector embeddings and other fields (e.g., tag, location, price...) in Redis,
- Create a Secondary Search Index on these documents,
- Find similarity between a new vector (image) and the already persisted vectors.
- Find similarity between a new vector (image) and the already persisted vectors, pre-filtered by a tag, a location, or a price range.
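The pre-filtered step in the list above uses a hybrid query: a filter expression placed in front of the KNN clause restricts which documents the vector search considers. A helper sketching how such a query string can be assembled; the field names `@tag`, `@price`, and `@embedding` are illustrative:

```python
def hybrid_query(tag=None, price_range=None, k=5):
    """Build a RediSearch hybrid query string: filters before `=>`
    narrow the candidate set, then KNN ranks what remains."""
    filters = []
    if tag:
        filters.append(f"@tag:{{{tag}}}")          # TAG field syntax
    if price_range:
        lo, hi = price_range
        filters.append(f"@price:[{lo} {hi}]")      # NUMERIC range syntax
    prefix = f"({' '.join(filters)})" if filters else "*"
    return f"{prefix}=>[KNN {k} @embedding $vec AS score]"
```

With no filters the query falls back to `*`, a plain KNN over the whole index; hybrid queries require query dialect 2.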
In this fourth demo (Semantic Search), you will learn how to:
- Create vector embeddings for a private knowledge base (e.g., White papers, blog posts, newsletters...),
- Persist vector embeddings in Redis,
- Create a Secondary Search Index on these Vectors,
- Search semantically (natural language) for the already persisted vectors (relevant resources),
- Use Redis as a semantic cache.
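Semantic caching in the last step means a lookup hits when a new question's embedding is close enough to one already answered, even if the wording differs. An in-memory sketch of the idea; the workshop backs this with a Redis vector index rather than a Python list, and the 0.9 threshold is an arbitrary assumption:

```python
import math

def _cos_sim(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

class SemanticCache:
    """Cache keyed by meaning: a get() hits when any stored query
    embedding is at least `threshold`-similar to the incoming one."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, embedding):
        for cached_vec, answer in self.entries:
            if _cos_sim(embedding, cached_vec) >= self.threshold:
                return answer
        return None  # cache miss: compute the answer, then put() it

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))
```

A rephrased question whose embedding lands near a cached one returns the stored answer without re-running the search or the model.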
In this demo (Retrieval-Augmented Generation using GCP VertexAI), you will learn how to:
- Create vector embeddings for a private knowledge base (e.g., PDF files, blog posts, databases),
- Persist vector embeddings in Redis,
- Create a Secondary Search Index on these Vectors,
- Search semantically (natural language) for the already persisted vectors (relevant resources),
- Use relevant resources as a prompt context for LLM conversation,
- Generate an augmented response (natural language) using GCP VertexAI models (PaLM),
- Use Redis as a standard cache,
- Use Redis as a semantic cache,
- Use Redis as Q/A history.
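The "prompt context" step above amounts to assembling the retrieved passages and the user question into one grounded prompt before calling the model. A minimal sketch; the instruction wording and the character budget are illustrative, not the workshop's exact template:

```python
def build_rag_prompt(question, contexts, max_chars=2000):
    """Assemble a grounded prompt: retrieved passages first, then the
    question, so the LLM answers from the knowledge base rather than
    from its training data alone."""
    joined = "\n---\n".join(contexts)[:max_chars]  # crude length cap
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{joined}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

In this demo the resulting string would be sent to a VertexAI PaLM model; the same assembly works unchanged for other providers.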
In this last demo (Retrieval-Augmented Generation using AWS Bedrock), you will learn how to:
- Create vector embeddings for a private knowledge base (e.g., PDF files, blog posts),
- Persist vector embeddings in Redis,
- Create a Secondary Search Index on these Vectors,
- Search semantically (natural language) for the already persisted vectors (relevant resources),
- Use relevant resources as a prompt context for LLM conversation,
- Generate an augmented response (natural language) using AWS Bedrock models (Anthropic Claude 2),
- Use Redis as a standard cache,
- Use Redis as a semantic cache,
- Use Redis as Q/A history.
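The Q/A history step keeps the most recent turns of each session so follow-up questions carry context into the next prompt. A sketch of the bookkeeping; the key format and the 10-turn cap are illustrative assumptions, and the workshop stores this in Redis rather than a Python list:

```python
def history_key(session_id):
    """Key under which one session's Q/A turns live (a Redis list in
    the workshop; the "chat:history:" prefix is illustrative)."""
    return f"chat:history:{session_id}"

def append_turn(history, question, answer, max_turns=10):
    """Append a Q/A turn and keep only the most recent `max_turns`,
    mirroring an LPUSH followed by LTRIM on a Redis list."""
    history.append({"question": question, "answer": answer})
    return history[-max_turns:]
```

Capping the history keeps the prompt within the model's context window while still letting Claude 2 resolve references like "and what about that second option?".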