Note that this branch uses shared access keys (SharedAccessKey) for Event Hubs authentication; see the oauth-msi branch for managed identities with OAuth.
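With shared access keys, the Kafka client authenticates to Event Hubs over SASL PLAIN, using the literal username `$ConnectionString` and the namespace connection string as the password. The properties below are standard Kafka client settings; the `<namespace>` placeholder and where these properties live in this app's configuration are assumptions, so treat this as a sketch:

```properties
# Sketch: Kafka client settings for Event Hubs with a shared access key.
# <namespace> and the connection string values are placeholders.
bootstrap.servers=<namespace>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";
```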
This is a sample project that shows how to use the Kafka APIs of Azure Event Hubs from a Spring Boot app running on Azure App Service (the Linux flavour, with containers).
This repository consists of three modules:
- Declarative infra setup for the required Azure resources
- Code for a simple consumer & producer setup
- Build pipelines
  - One using Azure DevOps
  - One using GitHub Actions
Prerequisites
- A Bash shell (with a different shell you may need to adapt some of the commands below)
- Azure CLI installed and configured
- Maven (to build the Java code)
Let's start with declaring a few variables:
RG=... # the target resource group, assuming that this has been created already
BASE_NAME=... # e.g. kafka; choose something with fewer than 6 alphanumeric characters
As some of the Azure resources need globally unique names, the included ARM templates generate reasonably unique names by appending a hash of the resource group name to the provided base name. If you prefer more control or need to use specific names, just update the variables in the templates.
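The idea can be sketched in plain shell. This is illustrative only: the actual templates use ARM's `uniqueString()` function, which produces a different, ARM-specific hash, and the resource group and base names here are placeholders.

```shell
# Illustrative sketch: derive a short, deterministic suffix from the
# resource group name and append it to the base name.
# (The real templates use ARM's uniqueString(); this is NOT the same hash.)
RG=my-resource-group
BASE_NAME=kafka
SUFFIX=$(printf '%s' "$RG" | sha256sum | cut -c1-8)
echo "${BASE_NAME}${SUFFIX}"
```

Because the suffix is derived from the resource group name, re-running a deployment into the same resource group yields the same resource names.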
If you've cloned this repo, change directory to the app folder and run the following command to build the Spring Boot jar file.
mvn clean package -DskipTests
Once we have the app.jar, we can build the container image. There are multiple ways to do that: if you have Docker installed locally, you can build the image and push it to a container registry, but in this example we'll use the build tasks feature of Azure Container Registry. To use that feature, we first need to create the registry.
ACR=`az group deployment create -g $RG \
--template-file ../infra/container-registry-template.json \
--parameters baseName="$BASE_NAME" \
--query properties.outputs.registryName.value \
-o tsv`
Since the registry name is generated by adding a resource-specific prefix and appending a unique id, we capture the name of the freshly created registry (without the azurecr.io suffix) from the command output.
Now we can run the build task. Make sure that you're in the app folder when you run this command.
IMAGE=kafka-demo:v1
az acr build -r $ACR --image $IMAGE .
This will build the image and store it in the registry so that we can refer to it from the App Service.
The next deployment creates a few resources: a Linux web app for containers, a Key Vault to store some secrets, an Event Hub, and an Application Insights instance (although this example won't be using that feature in anger). The web app gets a managed identity so that it can retrieve secrets from the Key Vault, and the newly built image is deployed to the web app.
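One common way this managed-identity-plus-Key-Vault pattern is wired up is through App Service Key Vault references: an app setting whose value points at a Key Vault secret, which App Service resolves at runtime using the web app's identity. Whether this sample's template uses references or the app reads secrets via an SDK is not shown here, and the setting, vault, and secret names below are placeholders:

```
# App setting value resolved by App Service via the managed identity
# (setting, vault, and secret names are placeholders):
EVENTHUB_CONNECTION_STRING=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)
```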
HOST_NAME=`az group deployment create -g $RG \
--template-file ../infra/container-webapp-template.json \
--parameters baseName="$BASE_NAME" imageName="$IMAGE" \
--query properties.outputs.webAppHostName.value \
-o tsv`
To see some output, you can use either Application Insights or the Log Stream feature of the App Service. The sample app uses a standard slf4j setup for logging, so the output ends up on the Log Stream console.
Start by pinging the service.
curl https://$HOST_NAME/api/ping
This will print the current time in the logs (not on the console where you're issuing the command).
And you can send a simple message with the following POST request:
curl -X POST -H "Content-Type: text/plain" https://$HOST_NAME/api/send -d "Hello World!"
You should now be able to see that a Kafka producer successfully sends the message to the Event Hub and that a Kafka consumer receives it. For the sake of this example there is only a single consumer, but you can achieve parallelism either by switching to ConcurrentKafkaListenerContainerFactory in Spring and/or by running multiple instances of the App Service (scaling out).
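If the app relies on Spring Boot's Kafka auto-configuration (an assumption about this sample's setup), in-process parallelism can be enabled with a single property. Note that the effective parallelism is still capped by the number of partitions on the Event Hub, since each partition is consumed by at most one consumer in a group:

```properties
# Sketch: run 3 listener threads in one app instance.
# Effective parallelism is bounded by the Event Hub's partition count.
spring.kafka.listener.concurrency=3
```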
The interesting part is that the code has no idea that it's dealing with an Event Hub; all communication goes through the Kafka APIs.