R2 is a service that pulls events from different sources and uses each event as context to start a GPT-4 chat, generating ideas, recommendations, and information that might be relevant. Responses are persisted and displayed from a web app.
Demo web app: https://app-leia-4vzghxfo4t7t2.azurewebsites.net/

Demo video: R2-Demo.mp4
Demo notes:
- A collection of events was used as input for R2 to generate each response displayed in the demo.
R2 uses the Azure Developer CLI (azd) to deploy the entire solution in a few steps:
1. Install azd.
2. If you have already cloned the repository, run `azd init` from the root. Otherwise, run `azd init -t vhvb1989/R2` and azd will pull the repo for you. To complete the init step, select a name for your environment.
3. Configure your Azure OpenAI connection by running:

   ```sh
   azd env set R2_OPENAI_ENDPOINT paste-open-ai-url-here
   azd env set R2_AZURE_API_KEY paste-azure-api-key
   ```
4. Run `azd up`. azd will create the following required Azure services:
   - Azure Key Vault: holds the connection secret for the DB.
   - Azure Cosmos DB: provides data persistence.
   - Azure Web Apps:
     - padme: micro-service that interacts with OpenAI.
     - yoda: micro-service that handles the DB and event sources.
     - leia: static front-end application.
5. The final step is to inject and persist the first events. This is because the `yoda` service is still learning how to automatically pull events from external sources. Hence, it provides the endpoint `/reset/naboo/events` where you can push a collection of events. All you need to do is make an HTTP POST call to the yoda URL like:

   ```
   http post yoda-url/reset/naboo/events
   payload = [{event},{event}, ...]
   ```

   You can find an example of a collection of events here.
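   The event schema is not documented in this README, so the field names below are purely illustrative; shape your payload after the example collection linked above rather than this sketch:

   ```json
   [
     { "source": "github", "title": "New release published", "body": "v2.0 of the sample repo is out." },
     { "source": "rss", "title": "Azure OpenAI update", "body": "New chat models are available." }
   ]
   ```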
6. Now call the `padme` service like:

   ```
   http get padme-url/generate
   ```

   This makes `padme` ask `yoda` for the list of events and start creating chats with OpenAI.
7. Open the `leia` front end to see the responses that are generated.
To run the demo again and generate new responses, repeat from step 5 onward.
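Steps 5 and 6 can also be scripted. The sketch below is only illustrative: `demo_requests` is a helper name made up for this example, the service URLs come from your own azd deployment, and the event fields are assumptions (the real schema may differ).

```python
import json
import urllib.request


def demo_requests(yoda_url: str, padme_url: str, events: list) -> list:
    """Build the two HTTP requests that drive one demo run:
    1. POST the events to yoda's /reset/naboo/events endpoint.
    2. GET padme's /generate endpoint to start the OpenAI chats.
    """
    post = urllib.request.Request(
        url=f"{yoda_url}/reset/naboo/events",
        data=json.dumps(events).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    get = urllib.request.Request(url=f"{padme_url}/generate", method="GET")
    return [post, get]


# To run the loop against a live deployment, send the requests in order:
#   for req in demo_requests(yoda_url, padme_url, events):
#       urllib.request.urlopen(req)
```

After both requests succeed, refresh the `leia` front end to see the newly generated responses.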