/agent-openai-python-prompty

A creative writing multi-agent solution to help users write articles.

Creative Writing Assistant: Working with Agents using Promptflow (Python Implementation)

This sample demonstrates how to create and work with AI agents driven by Azure OpenAI. It includes a Flask app that takes a topic and instruction from a user, then calls a research agent that uses the Bing Search API to research the topic, a product agent that uses Azure AI Search to run a semantic similarity search for related products from a vector store, a writer agent that combines the research and product information into a helpful article, and an editor agent that refines the article before it is finally presented to the user.

Features

This project template provides the following features:

Architecture Diagram

Azure account requirements

IMPORTANT: In order to deploy and run this example, you'll need:

Opening the project

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally.

GitHub Codespaces

  1. You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

    Open in GitHub Codespaces

  2. Open a terminal window.

  3. Sign in to your Azure account:

    azd auth login
  4. Provision the resources and deploy the code:

    azd up

    This project uses gpt-35-turbo-0613 and gpt-4-1106-Preview which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly. For this project we recommend East US 2.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

Local environment

Prerequisites

Initializing the project

  1. Create a new folder and switch to it in the terminal, then run this command to download the project code:

    azd init -t agent-openai-python-prompty

    Note that this command will initialize a git repository, so you do not need to clone this repository.

  2. Install required packages:

    cd src/api
    pip install -r requirements.txt
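
    If you prefer an isolated environment, you can create and activate a venv before running the pip command above. This is standard Python tooling, not specific to this sample:

```shell
# Optional: create an isolated virtual environment so the sample's
# dependencies don't mix with your global Python packages.
python3 -m venv .venv
. .venv/bin/activate
```

    Then run the pip install command above inside the activated environment.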

Deployment

Once you've opened the project in Codespaces, Dev Containers, or locally, you can deploy it to Azure.

  1. Sign in to your Azure account:

    azd auth login

    If you have any issues with that command, you may also want to try azd auth login --use-device-code.

  2. Create a new azd environment:

    azd env new

    This will create a folder under .azure/ in your project to store the configuration for this deployment. You may have multiple azd environments if desired.

  3. Provision the resources and deploy the code:

    azd up

    This project uses gpt-35-turbo-0613 and gpt-4-1106-Preview which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly. We recommend using East US 2 for this project.

  4. At the end of this process, a .env file will be created for you. Copy this file into the src/api folder.

Testing the sample

This sample repository contains an agents folder that includes subfolders for each agent. Each agent folder contains a .prompty file, where the agent's prompt is defined, and a Python file with the code used to run it. Exploring these files will help you understand what each agent is doing. The agents folder also contains an orchestrator.py file that can be used to run the entire flow and create an article.
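To make the hand-off between the agents concrete, here is a minimal, hypothetical sketch of the flow the orchestrator drives. The function names are illustrative stand-ins, not the sample's actual API; the real logic lives in orchestrator.py:

```python
# Illustrative stand-ins for the four agents described above.

def research(context: str) -> str:
    # Stand-in for the research agent (Bing Search API in the real sample).
    return f"research notes on: {context}"

def find_products(context: str) -> str:
    # Stand-in for the product agent (Azure AI Search in the real sample).
    return f"related products for: {context}"

def write(research_notes: str, products: str, instruction: str) -> str:
    # Stand-in for the writer agent: combine inputs into a draft article.
    return f"DRAFT[{instruction}]: {research_notes} | {products}"

def edit(draft: str) -> str:
    # Stand-in for the editor agent: refine the draft before returning it.
    return draft.replace("DRAFT", "ARTICLE")

def create_article(context: str, instruction: str) -> str:
    # The orchestrator chains the agents: research and product lookup feed
    # the writer, whose draft the editor refines.
    notes = research(context)
    products = find_products(context)
    draft = write(notes, products, instruction)
    return edit(draft)
```

Each real agent is an LLM call defined by its .prompty file; the sketch only shows the order in which their outputs feed one another.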

To test the sample:

  1. Populate the Azure AI Search vector store index with product data.
  • Change into the src/api/data folder:
cd src/api/data
  • Install the Jupyter extension.
  • Once the extension is installed, open the create-azure-search.ipynb notebook. We will use this notebook to upload a catalogue of products to the Azure AI Search vector store. Click Select Kernel in the top right-hand corner of the notebook, choose Python environment, and then select the recommended Python version.
  • Run all of the cells in the notebook. If this process succeeded, you should see "uploading 20 documents to index contoso-products". You're now ready to run the full prompt flow.
  2. Test with sample data

    2.1 To run the sample using just the orchestrator logic, use the following commands:

     ```
     cd ..
     python -m api.agents.orchestrator
     ```
    

    2.2 You also have the option of testing this code locally using a Flask app. (You should run this from the src/api folder.)

    To run the Flask web server:

     ```
     flask --debug --app api.app:app run --port 5000
     ```

    Then request an article by opening the following URL in your browser:

     ```
     http://127.0.0.1:5000/get_article?context=Write an article about camping in alaska&instruction=find specifics about what type of gear they would need and explain in detail
     ```
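
    Note that the example URL above contains literal spaces; a browser will encode them for you, but if you call the endpoint from code you should percent-encode the query parameters. A small stdlib sketch (the parameter names match the route above; the helper function itself is illustrative):

```python
from urllib.parse import urlencode

def article_url(context: str, instruction: str,
                base: str = "http://127.0.0.1:5000/get_article") -> str:
    # urlencode escapes spaces and other unsafe characters for us.
    query = urlencode({"context": context, "instruction": instruction})
    return f"{base}?{query}"

url = article_url(
    "Write an article about camping in alaska",
    "find specifics about what type of gear they would need",
)
```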
    

Evaluating prompt flow results

To understand how well our prompt flow performs on defined metrics like groundedness, coherence, etc., we can evaluate the results. To evaluate the prompt flow, we need to compare its output to what we consider "good results" in order to understand how well it aligns with our expectations.

We could evaluate the flow manually (e.g., using Azure AI Studio), but for now we'll evaluate it by running the prompt flow with gpt-4 and comparing our performance to the results obtained there. To do this, follow the instructions and steps in the evaluate-chat-prompt-flow.ipynb notebook under the eval folder.
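
The evaluation scores each generated article on per-metric values. As a rough illustration of what aggregating such scores looks like (the metric names mirror those mentioned above; the row format is hypothetical, not the notebook's actual output):

```python
from statistics import mean

# Hypothetical per-article evaluation rows; values are illustrative.
rows = [
    {"groundedness": 4, "coherence": 5},
    {"groundedness": 3, "coherence": 4},
]

def summarize(rows: list) -> dict:
    # Average each metric across all evaluated articles.
    metrics = rows[0].keys()
    return {m: mean(r[m] for r in rows) for m in metrics}

summary = summarize(rows)
```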

You can also view the evaluation metrics by running the following commands.

In a new terminal:

cd src/web

First install node packages:

npm install

Then run the web app with a local dev web server:

npm run dev

Then run the evaluation:

cd evaluate
python evaluate.py

Costs

Pricing may vary per region and usage. Exact costs cannot be estimated. You may try the Azure pricing calculator for the resources below:

  • Azure Container Apps: Pay-as-you-go tier. Costs based on vCPU and memory used. Pricing
  • Azure OpenAI: Standard tier, GPT and Ada models. Pricing per 1K tokens used, and at least 1K tokens are used per question. Pricing
  • Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing

Security Guidelines

This template uses built-in Managed Identity to eliminate the need for developers to manage credentials. Applications can use managed identities to obtain Microsoft Entra tokens without handling any credentials. We also use Key Vault, specifically for Bing Search, since Managed Identity is not currently implemented for it. Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report containing any detected issues. To follow best practices, we recommend that anyone creating solutions based on our templates enable the GitHub secret scanning setting in their repos.

Resources

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.