🔍🤖💬 A minimal RAG (Retrieval-Augmented Generation) implementation that indexes FAQ data into Azure AI Search with vector embeddings and provides a chat terminal using Azure OpenAI for answering questions based on the retrieved content.
- Copy `.env.example` → `.env` and add your Azure endpoints & keys
- `poetry install`
- To use Vector Search in Azure AI Search, a vectorizer can connect the Azure OpenAI embedding model to the index so that your text is turned into a numerical embedding for you. If you prefer not to use a vectorizer, you can instead create the embedding yourself and provide it directly to Azure AI Search. Both query styles are sketched after this list.
- Options 1 and 3 demonstrate how to provide the embedding directly without using a vectorizer.
  - `chat_app.py` uses `VectorizedQuery`.
- Options 2 and 4 demonstrate the use of a vectorizer along with semantic search.
  - `chat_app_v2.py` uses `VectorizableTextQuery`.
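The snippet below is a minimal sketch of the two query styles. The index name `faq-index`, the `question`/`answer`/`contentVector` fields, and the `text-embedding-ada-002` deployment are assumptions — substitute the values from your own `.env`:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery, VectorizableTextQuery
from openai import AzureOpenAI

# Assumed endpoints, keys, deployment and field names -- replace with your .env values.
search_client = SearchClient("https://<search>.search.windows.net", "faq-index",
                             AzureKeyCredential("<search-key>"))
aoai = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                   api_key="<aoai-key>", api_version="2024-02-01")

question = "How do I reset my password?"

# Options 1 and 3 (chat_app.py): embed the question yourself and pass the vector.
embedding = aoai.embeddings.create(input=question,
                                   model="text-embedding-ada-002").data[0].embedding
query = VectorizedQuery(vector=embedding, k_nearest_neighbors=3, fields="contentVector")

# Options 2 and 4 (chat_app_v2.py): send raw text; the index's vectorizer embeds it.
# query = VectorizableTextQuery(text=question, k_nearest_neighbors=3, fields="contentVector")

for doc in search_client.search(search_text=question, vector_queries=[query], top=3):
    print(doc["question"], "->", doc["answer"])
```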
1. 📤 Push - upload local data (see the push sketch after this list): `python push_aisearch_index.py`
2. 📤 Push - upload local data w/ Vectorizer and Semantic Search: `python push_aisearch_index_v2.py`
3. 📥 Pull - using Azure AI Search Indexer (see the pull sketch after this list): `python pull_aisearch_index.py`
4. 📥 Pull - using Azure AI Search Indexer w/ Vectorizer and Semantic Search: `python pull_aisearch_index_v2.py`
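A minimal sketch of the push approach (option 1): compute embeddings locally and upload the documents with the SDK. The CSV name, column names, index name, and field names below are assumptions; `push_aisearch_index.py` defines the real ones.

```python
import csv

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Assumed names -- adjust to match the repo's .env and index schema.
search_client = SearchClient("https://<search>.search.windows.net", "faq-index",
                             AzureKeyCredential("<search-key>"))
aoai = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                   api_key="<aoai-key>", api_version="2024-02-01")

docs = []
with open("faq.csv", newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f)):
        # Embed question + answer together so either side of the FAQ can match.
        text = f"{row['question']} {row['answer']}"
        vector = aoai.embeddings.create(input=text,
                                        model="text-embedding-ada-002").data[0].embedding
        docs.append({"id": str(i), "question": row["question"],
                     "answer": row["answer"], "contentVector": vector})

results = search_client.upload_documents(documents=docs)
print(f"Uploaded {sum(r.succeeded for r in results)} of {len(docs)} documents")
```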
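And a rough sketch of the pull approach (option 3), where an indexer pulls the data from Azure Blob Storage instead of the client pushing it. The data source, container, and indexer names are hypothetical, and the target index must already exist; `pull_aisearch_index.py` is the authoritative version.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

indexer_client = SearchIndexerClient("https://<search>.search.windows.net",
                                     AzureKeyCredential("<search-key>"))

# Point Azure AI Search at the blob container holding the FAQ data.
data_source = SearchIndexerDataSourceConnection(
    name="faq-blob-datasource",
    type="azureblob",
    connection_string="<storage-connection-string>",
    container=SearchIndexerDataContainer(name="faq-data"),
)
indexer_client.create_or_update_data_source_connection(data_source)

# The indexer pulls documents from the data source into the target index.
indexer = SearchIndexer(name="faq-indexer",
                        data_source_name="faq-blob-datasource",
                        target_index_name="faq-index")
indexer_client.create_or_update_indexer(indexer)
indexer_client.run_indexer("faq-indexer")
```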
- To use an embedding directly with `VectorizedQuery`: `python chat_app.py`
- To use text with a vectorizer via `VectorizableTextQuery`: `python chat_app_v2.py`
- Type your question or `exit` to quit. A minimal chat-loop sketch follows.
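The sketch below shows the retrieve-then-generate loop, assuming the same hypothetical endpoints, deployments, and field names as above; the actual logic lives in `chat_app.py` / `chat_app_v2.py`.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

search_client = SearchClient("https://<search>.search.windows.net", "faq-index",
                             AzureKeyCredential("<search-key>"))
aoai = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                   api_key="<aoai-key>", api_version="2024-02-01")

while True:
    question = input("Q> ").strip()
    if question.lower() == "exit":
        break

    # Retrieve: embed the question and run a vector query against the FAQ index.
    embedding = aoai.embeddings.create(input=question,
                                       model="text-embedding-ada-002").data[0].embedding
    hits = search_client.search(
        search_text=question,
        vector_queries=[VectorizedQuery(vector=embedding, k_nearest_neighbors=3,
                                        fields="contentVector")],
        top=3,
    )
    context = "\n".join(f"Q: {h['question']}\nA: {h['answer']}" for h in hits)

    # Generate: answer using only the retrieved FAQ content.
    reply = aoai.chat.completions.create(
        model="gpt-4o",  # your chat deployment name
        messages=[
            {"role": "system",
             "content": f"Answer using only this FAQ context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content)
```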
To connect Azure AI Search with Azure AI Foundry, you need to add both a vectorizer and semantic search to the index (a sketch of the semantic configuration follows). See: Use an existing AI Search index with the Azure AI Search tool.
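As a rough sketch of the semantic-search half of that requirement (class names from recent `azure-search-documents` releases; the configuration name and field names are assumptions). The vectorizer itself is attached through the index's vector search configuration, e.g. an `AzureOpenAIVectorizer`, which the `*_v2.py` scripts set up:

```python
from azure.search.documents.indexes.models import (
    SemanticConfiguration,
    SemanticField,
    SemanticPrioritizedFields,
    SemanticSearch,
)

# Semantic ranking configuration to attach to the SearchIndex
# (together with a vector_search config that includes the vectorizer).
semantic_search = SemanticSearch(
    configurations=[
        SemanticConfiguration(
            name="faq-semantic-config",  # hypothetical name
            prioritized_fields=SemanticPrioritizedFields(
                title_field=SemanticField(field_name="question"),
                content_fields=[SemanticField(field_name="answer")],
            ),
        )
    ]
)
```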
📚 Learn more: csv indexer