The RAG (Retrieval-Augmented Generation) pattern enables businesses to use the reasoning capabilities of LLMs while grounding responses in their own data, without retraining the underlying models. Because the data can be refreshed periodically without fine-tuning, RAG streamlines the integration of LLMs into business applications.
The Enterprise RAG Solution Accelerator (GPT-RAG) offers a robust architecture tailored for enterprise-grade deployment of the RAG pattern. It delivers grounded responses and is built on Zero Trust security and Responsible AI principles, with availability, scalability, and auditability in mind. It is ideal for organizations moving from exploration and PoC stages to MVPs and full-scale production.
- **Data ingestion**: Optimizes data preparation for Azure OpenAI.
- **Orchestrator**: The system's dynamic backbone, ensuring scalability and a consistent user experience.
- **App Front-End**: Built with Azure App Services and the Backend for Front-End pattern, it offers a smooth and scalable user interface.
- **Teams-BOT**: Built with Azure BOT Services, this component lets users engage with the Orchestrator seamlessly through the Microsoft Teams interface.
To deploy Enterprise RAG and get your solution up and running, you will need azd, Python, Git, Node.js 16+, and PowerShell 7 (only if you are using Windows).
Prerequisites:
- Azure Developer CLI (azd)
- PowerShell 7 (Windows only)
- Git
- Node.js 16+
- Python 3.10
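Before deploying, you can do a quick sanity check from a terminal to confirm the tooling is installed and on the PATH (these commands are just a check, not part of the accelerator itself):

```sh
# Verify the deployment prerequisites
azd version        # Azure Developer CLI
git --version
node --version     # should report v16 or later
python --version   # should report 3.10.x (python3 on some Linux distributions)
pwsh --version     # PowerShell 7+, Windows only
```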
After installing the prerequisites, you just need to execute the next four steps using the Azure Developer CLI (azd) in a terminal:
1. Download the repository: `azd init -t azure/gpt-rag`
2. Log in to Azure: `azd auth login`
3. Build the infrastructure and deploy the components: `azd up`
4. Add source documents to object storage: upload your documents to the documents folder in the storage account whose name starts with `strag` (see the upload sketch below).
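If you prefer the command line over the portal for step 4, a batch upload with the Azure CLI could look like the sketch below; the storage account name and local folder path are placeholders you would replace with your own values, and `--auth-mode login` requires a data-plane role such as Storage Blob Data Contributor on the account:

```sh
# Upload a local folder of source documents to the "documents" location
# in the storage account created by the deployment (name starts with "strag").
az storage blob upload-batch \
  --account-name <storage-account-name> \
  --destination documents \
  --source ./my-source-documents \
  --auth-mode login
```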
To deploy the zero-trust implementation, follow the same steps. However, before executing the `azd up` command, make sure to run the following command:
`azd env set AZURE_NETWORK_ISOLATION true`
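Put together, the zero-trust provisioning sequence looks roughly like this (the same commands as above, with the network isolation variable set before `azd up`):

```sh
azd init -t azure/gpt-rag
azd auth login
azd env set AZURE_NETWORK_ISOLATION true   # enable the zero-trust (network-isolated) deployment
azd up
```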
Once the deployment is complete, you need to use the Virtual Machine with the Bastion connection (created as part of the zero-trust deployment) to continue deploying the data ingestion, orchestrator, and front-end app components.
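You can open the Bastion session to that VM from the Azure portal, or, assuming the Azure CLI Bastion extension is available and native client connections are enabled, with something like the sketch below (the resource names and IDs are placeholders, not values guaranteed by this deployment):

```sh
# One-time: add the Bastion extension to the Azure CLI
az extension add --name bastion

# Open a native RDP session to the jump-box VM through Bastion (placeholder names)
az network bastion rdp \
  --name <bastion-name> \
  --resource-group <resource-group> \
  --target-resource-id <vm-resource-id>
```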
Log in to the created VM, then run the following commands:
`azd init -t azure/gpt-rag`
`azd auth login`
`azd env refresh`
`azd deploy`
Note: when running `azd init` and `azd env refresh`, use the same environment name, subscription, and region used in the initial provisioning of the infrastructure.
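One way to keep the names consistent, assuming you know the environment name used during provisioning (shown here as a placeholder), is to pass it explicitly with the `-e` flag:

```sh
# <env-name> is a placeholder for the environment name used in the initial provisioning
azd init -t azure/gpt-rag -e <env-name>
azd env refresh -e <env-name>
```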
Refer to the Custom Deployment section to understand the customization options before executing the previously mentioned steps.
Refer to the Troubleshooting page if you encounter errors during the deployment process.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.