The developers at Mystique Unicorn are interested in building their application using an event-driven architectural pattern to process streaming data. For those who are unfamiliar, an event-driven architecture uses events to trigger and communicate between decoupled services and is common in modern applications built with microservices. An event is a change in state, or an update, like an item being placed in a shopping cart on an e-commerce website.
In this application, their physical stores will send a stream of sales and inventory related events to a central location, where multiple downstream systems will consume these events. For example, an event for a new order will be consumed by the warehouse system, and the sales events will be used by the marketing department to generate revenue and forecast reports. This pattern of separating the producer, router and consumer into independent components allows them to scale the applications without constraints.
They heard that Azure offers capabilities to build event-driven architectures. Can you show them how they can get started?
We can have our producers leverage Azure Storage Queue[1] to persist the store event messages. The consumers can process them at their own pace. The producers can also set a time-to-live on time-sensitive messages. But remember that Azure Storage Queue does not guarantee FIFO (First In, First Out) ordering of messages. We will use the Azure Python SDK[2,6,7,8] for producing and consuming our messages. We will control access to the queue using Azure RBAC[3].
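To make the idea concrete, here is a minimal sketch of publishing a time-boxed message with the azure-storage-queue SDK. The account URL and queue name are placeholders, and DefaultAzureCredential is assumed to resolve to an identity holding a Storage Queue data role.

```python
# Minimal sketch: publish a store event with a time-to-live (placeholder names).
import json

from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

# Placeholder account URL and queue name - use the values from your deployment
queue_client = QueueClient(
    account_url="https://warehousexxxx.queue.core.windows.net",
    queue_name="store-events-q-xxx",
    credential=DefaultAzureCredential(),
)

event = {"cust_id": 103, "sku": 95894, "qty": 2}
# time_to_live is in seconds; unconsumed messages expire after this window
queue_client.send_message(json.dumps(event), time_to_live=3600)
```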
- This demo, instructions, scripts and Bicep template are designed to be run in westeurope. With few or no modifications you can try it out in other regions as well (not covered here).
- Azure CLI Installed & Configured - Get help here
- Bicep Installed & Configured - Get help here
- VS Code & Bicep Extensions - Get help here
Get the application code

git clone https://github.com/miztiik/azure-vm-to-storage-queue
cd azure-vm-to-storage-queue
Let us check that the Azure CLI is working:

# You should have azure cli preinstalled
az account show
You should see an output like this,
{ "environmentName": "AzureCloud", "homeTenantId": "16b30820b6d3", "id": "1ac6fdbff37cd9e3", "isDefault": true, "managedByTenants": [], "name": "YOUR-SUBS-NAME", "state": "Enabled", "tenantId": "16b30820b6d3", "user": { "name": "miztiik@", "type": "user" } }
Let us walk through each of the stacks,
Stack: Main Bicep

The params required for the modules are in params.json. Do modify them to suit your needs (especially the adminPassword.secureString for the VM; you are strongly encouraged to use Just-In-Time access[5] or an SSH key instead of password based authentication). The helper deployment script deploy.sh will deploy the main.bicep file. This will create the following resources:

- Resource Group (RG)
- VNet, Subnet & Virtual Machine
  - Virtual Machine (Ubuntu)
    - Bootstrapped with custom libs using the userData script
- Storage Account - warehouseXXXX
  - Blob Container - store-events-blob-xxx
  - Storage Queue - store-events-q-xxx
- App Config
  - The application configs to be used by producers & consumers
    - Storage account, Blob Name, Queue Name
- User Managed Identity
  - Scoped with contributor privileges with conditional access restricting it to the Blob, Queue & App Config
  - Identity attached to the VM (see the credential note below)
- Log Analytics Workspace
  - Data Collection Endpoint
  - Data Collection Rule
    - Attached to VM
sh deploy.sh
After successfully deploying the stack, check the Resource Groups/Deployments section for the resources.
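A note on credentials: the scripts in this repo rely on the VM's identity rather than connection strings. When the VM carries the user-assigned managed identity created above, the Python credential may need to be told which identity to use. A minimal, hedged sketch with a placeholder client ID:

```python
# Hedged sketch: select the user-assigned managed identity explicitly.
# The client ID below is a placeholder - read it from the deployed identity.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential(
    managed_identity_client_id="<user-assigned-identity-client-id>"
)
```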
Connect to the VM
The Ubuntu VM should be bootstrapped using userData to install python3, git and the Azure SDKs for Identity, Blob & Queue.

- Connect to the VM using Just In Time Access[5].
- The bootstrap code should have cloned this repo to /var/azure-vm-to-storage-queue; if not, clone the repo.
- The az_producer_for_queues.py script expects the App Config store name in the APP_CONFIG_NAME environment variable.
Producer
# ssh miztiik@publicIP
# cd /var/
# git clone https://github.com/miztiik/azure-vm-to-storage-queue
# cd azure-vm-to-storage-queue

# If pre-reqs have not been installed, run the bootstrap script manually
# bash /var/azure-vm-to-storage-queue/modules/vm/bootstrap_scripts/deploy_app.sh

export APP_CONFIG_NAME="store-events-config-011"
python3 /var/azure-vm-to-storage-queue/app/az_producer_for_queues.py &
If everything goes all right, you should see messages like below. You can also check the logs at /var/log/miztiik-store-events-2023-04-17.json
INFO:root:{ "request_id": "80745806-7378-439e-9707-12d485236d54", "store_id": 6, "store_fqdn": "m-web-srv-011.internal.cloudapp.net", "store_ip": "10.0.0.4", "cust_id": 103, "category": "Furniture", "sku": 95894, "price": 43.43, "qty": 32, "discount": 4.1, "gift_wrap": false, "variant": "black", "priority_shipping": true, "ts": "2023-04-17T12:56:35.868661", "contact_me": "github.com/miztiik", "is_return": true } INFO:root:Message added to store-events-q-011 successfully
The script should create and publish the events to the storage queue.
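For orientation, the sketch below outlines what a producer along these lines might do: look up the storage account and queue names from App Configuration, then push a JSON event to the queue. The key names SA_NAME and Q_NAME are assumptions for illustration; the actual logic lives in az_producer_for_queues.py in the repo.

```python
# Hedged sketch of the producer flow: read config from App Configuration,
# then push a JSON event to the Storage Queue. Key names are assumptions.
import json
import os
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.appconfiguration import AzureAppConfigurationClient
from azure.storage.queue import QueueClient

credential = DefaultAzureCredential()

# APP_CONFIG_NAME is injected as an environment variable (see above)
app_config_name = os.environ["APP_CONFIG_NAME"]
app_config = AzureAppConfigurationClient(
    base_url=f"https://{app_config_name}.azconfig.io", credential=credential
)

# Assumed key names holding the storage account and queue names
sa_name = app_config.get_configuration_setting(key="SA_NAME").value
q_name = app_config.get_configuration_setting(key="Q_NAME").value

queue_client = QueueClient(
    account_url=f"https://{sa_name}.queue.core.windows.net",
    queue_name=q_name,
    credential=credential,
)

event = {
    "store_id": 6,
    "sku": 95894,
    "qty": 32,
    "ts": datetime.now(timezone.utc).isoformat(),
}
queue_client.send_message(json.dumps(event))
```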
Consumer

The consumer will read a maximum of 5 messages from the queue, write the events to blob storage and then delete the messages from the queue. The az_consumer_for_queues.py script expects the App Config store name in the APP_CONFIG_NAME environment variable.

export APP_CONFIG_NAME="store-events-config-011"
python3 /var/azure-vm-to-storage-queue/app/az_consumer_for_queues.py &
If everything goes all right, you should see messages like below. You can also check the logs at /var/log/miztiik-store-events-2023-04-17.json
INFO:root:Message received from store-events-q-011 INFO:root:{ "request_id": "80745806-7378-439e-9707-12d485236d54", "store_id": 6, "store_fqdn": "m-web-srv-011.internal.cloudapp.net", "store_ip": "
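Similarly, a hedged sketch of the consumer's core loop: drain up to 5 messages, persist each one to the blob container and delete it from the queue. The account, container and queue names below are placeholders; refer to az_consumer_for_queues.py for the real implementation.

```python
# Hedged sketch of the consumer flow: read up to 5 messages from the queue,
# persist each event to blob storage, then delete it from the queue.
# Account, container and queue names below are placeholders.
import uuid

from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()

queue_client = QueueClient(
    account_url="https://warehousexxxx.queue.core.windows.net",
    queue_name="store-events-q-xxx",
    credential=credential,
)
blob_service = BlobServiceClient(
    account_url="https://warehousexxxx.blob.core.windows.net", credential=credential
)
container_client = blob_service.get_container_client("store-events-blob-xxx")

# Pull at most 5 messages; each message body is the JSON event produced earlier
for msg in queue_client.receive_messages(max_messages=5):
    blob_name = f"store_events/{uuid.uuid4()}.json"
    container_client.upload_blob(name=blob_name, data=msg.content)
    # Only delete after the event has been persisted to blob storage
    queue_client.delete_message(msg)
```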
Troubleshooting Azure Monitor Agent
Check if the VM can write to blob storage using the CLI. List blobs:
RG_NAME="MIZTIIK_ENTERPRISES_AZURE_VM_TO_BLOB_STORAGE_011" SA_NAME="warehousei5chd4011" CONTAINER_NAME="store-events-011" az storage blob list \ --container-name ${CONTAINER_NAME1} \ --account-name ${SA_NAME} \ --auth-mode login az storage blob directory list \ --container-name ${CONTAINER_NAME} \ -d default \ --account-name ${SA_NAME} \ --auth-mode login
Upload file to blob,
echo "hello world on $(date +'%Y-%m-%d')" > miztiik.log az storage blob upload \ --account-name ${SA_NAME} \ --container-name ${CONTAINER_NAME} \ --name miztiik.log \ --file miztiik.log \ --auth-mode login
Here we have demonstrated how to use Azure Storage Queue to publish and subscribe to events. You can extend the solution to set up triggers on the blob containers to further process these events or notify other consumers.
If you want to destroy all the resources created by the stack, execute the command below to delete the stack, or you can delete the stack from the console as well.
- Resources created during Deploying The Application
- Any other custom resources you have created for this demo
# Delete from resource group
az group delete --name Miztiik_Enterprises_xxx --yes
# Follow any on-screen prompt
This is not an exhaustive list; please carry out other necessary steps as may be applicable to your needs.
This repository aims to show how to use Bicep to new developers, Solution Architects & Ops Engineers in Azure.
Thank you for your interest in contributing to our project. Whether it is a bug report, new feature, correction, or additional documentation or solutions, we greatly value feedback and contributions from our community. Start here
Buy me a coffee.
1. Azure Docs: Azure Storage Queue
2. Azure Docs: Send & Receive from Queue with Python
3. Azure Docs: Azure RBAC Role for Queue
4. Azure Docs: Azure RBAC-ABAC Limitations
5. Azure Docs: Just In Time Access
6. Azure Docs: Python Queue Sample Code
7. Azure Docs: Python Queue Client Class
8. Azure Docs: Python Queue Message Class
9. Azure Docs: Configure Python logging in the Azure libraries
Level: 300