Kafka GUI for Apache Kafka to manage topics, topic data, consumer groups, schema registry, Kafka Connect and more...
- Features
- Quick Preview
- Installation
- Configuration
- API
- Monitoring Endpoint
- Development Environment
- Schema references
- Who's using AKHQ
- General
- Works with modern Kafka clusters (1.0+)
- Connection to standard, SSL or SASL clusters
- Multi cluster
- Topics
- List
- Configurations view
- Partitions view
- ACLs view
- Consumer group assignments view
- Node leader & assignments view
- Create a topic
- Configure a topic
- Delete a topic
- Browse topic data
- View data, offset, key, timestamp & headers
- Automatic deserialization of Avro messages encoded with the schema registry
- Configurations view
- Logs view
- Delete a record
- Empty a topic (delete all records from one topic)
- Sort view
- Filter by partition
- Filter by start time
- Filter data with a search string
- Consumer groups (only with Kafka internal storage, not with the old Zookeeper storage)
- List with lag and topic assignments
- Partitions view & lag
- ACLs view
- Node leader & assignments view
- Display active and pending consumer groups
- Delete a consumer group
- Update consumer group offsets to start / end / timestamp
- Schema Registry
- List schemas
- Create / Update / Delete a schema
- View and delete individual schema versions
- Connect
- List connect definitions
- Create / Update / Delete a definition
- Pause / Resume / Restart a definition or a task
- Nodes
- List
- Configurations view
- Logs view
- Configure a node
- ACLs
- List principals
- List topic & group ACLs by principal
- Authentication and roles
- Read only mode
- Basic HTTP authentication with roles per user
- User groups configuration
- Filter topics with a regexp for the current groups
- LDAP configuration to map AKHQ groups/roles
Since this is a major rework, the new UI may have some issues, so please report any issue you find. Thanks!
- Download the docker-compose.yml file
- Run `docker-compose pull` to be sure you have the latest version of AKHQ
- Run `docker-compose up`
- Go to http://localhost:8080
It will start a Kafka node, a Zookeeper node, a Schema Registry, a Kafka Connect instance, fill them with some sample data, start a consumer group and a Kafka Streams application, and start AKHQ.
First you need a configuration file in order to configure the AKHQ connections to your Kafka brokers.
docker run -d \
-p 8080:8080 \
-v /tmp/application.yml:/app/application.yml \
tchiotludo/akhq
- With `-v /tmp/application.yml`: the path must be an absolute path to your configuration file
- Go to http://localhost:8080
- Install Java 11
- Download the latest jar from the release page
- Create a configuration file
- Launch the application with
java -Dmicronaut.config.files=/path/to/application.yml -jar akhq.jar
- Go to http://localhost:8080
- Add the AKHQ helm charts repository:
helm repo add akhq https://akhq.io/
- Install or upgrade
helm upgrade --install akhq akhq/akhq
- Chart version >=0.1.1 requires Kubernetes version >=1.14
- Chart version 0.1.0 works on previous Kubernetes versions
helm install akhq akhq/akhq --version 0.1.0
- Clone the repository:
git clone https://github.com/tchiotludo/akhq && cd akhq/deploy/helm/akhq
- Update helm values located in deploy/helm/values.yaml
  - `configuration` values will contain all the related configuration that you can find in application.example.yml and will be stored in a ConfigMap
  - `secrets` values will contain all sensitive configuration (credentials) that you can find in application.example.yml and will be stored in a Secret
- Both values will be merged at startup
- Apply the chart:
helm install --name=akhq-release-name .
By default, the configuration file can be provided as Java properties, YAML, JSON or Groovy. A YAML configuration example can be found here: application.example.yml
By default, the docker container allows custom JVM options through the JAVA_OPTS environment variable.
For example, if you want to change the default timezone, just add `-e "JAVA_OPTS=-Duser.timezone=Europe/Paris"`.
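For instance, combining this with the configuration mount from the installation section (a sketch; adjust paths to your setup):

```bash
docker run -d \
  -p 8080:8080 \
  -e "JAVA_OPTS=-Duser.timezone=Europe/Paris" \
  -v /tmp/application.yml:/app/application.yml \
  tchiotludo/akhq
```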
By default, the docker container runs with a jvm.options file; you can override it with your own by setting the JVM_OPTS_FILE environment variable to the path of your file.
Override the JVM_OPTS_FILE with docker run:
docker run -d \
--env JVM_OPTS_FILE={{path-of-your-jvm.options-file}} \
-p 8080:8080 \
-v /tmp/application.yml:/app/application.yml \
tchiotludo/akhq
Override the JVM_OPTS_FILE with docker-compose:
version: '3.7'
services:
akhq:
image: tchiotludo/akhq-jvm:dev
environment:
JVM_OPTS_FILE: /app/jvm.options
ports:
- "8080:8080"
volumes:
- /tmp/application.yml:/app/application.yml
If you do not override JVM_OPTS_FILE, the docker container will use the default one instead.
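A minimal jvm.options sketch, assuming the usual one-flag-per-line format (heap sizes are illustrative, not recommendations):

```
-Xms128m
-Xmx512m
```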
`akhq.connections` is a key/value configuration with:
- `key`: a URL-friendly string (letters, numbers, `_`, `-`, ...; dots are not allowed) that identifies your cluster (e.g. `my-cluster-1` and `my-cluster-2` in the sketch below)
- `properties`: all the configuration found in the Kafka consumer documentation. The most important one is `bootstrap.servers`, a list of host:port pairs for your Kafka brokers
- `schema-registry`: (optional)
  - `url`: the schema registry URL
  - `basic-auth-username`: schema registry basic auth username
  - `basic-auth-password`: schema registry basic auth password
  - `properties`: all the configuration for the registry client, especially SSL configuration
- `connect`: (optional list; define each connector as an element of the list)
  - `name`: connect name
  - `url`: connect URL
  - `basic-auth-username`: connect basic auth username
  - `basic-auth-password`: connect basic auth password
  - `ssl-trust-store`: /app/truststore.jks
  - `ssl-trust-store-password`: trust-store-password
  - `ssl-key-store`: /app/truststore.jks
  - `ssl-key-store-password`: key-store-password
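A minimal sketch of this structure with two clusters (all host names and ports are placeholders):

```yaml
akhq:
  connections:
    my-cluster-1:
      properties:
        bootstrap.servers: "kafka-1:9092"
    my-cluster-2:
      properties:
        bootstrap.servers: "kafka-2:9092"
      schema-registry:
        url: "http://schema-registry:8085"
      connect:
        - name: connect-1
          url: "http://connect:8083"
```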
Configuration example for a Kafka cluster secured by SSL from a SaaS provider like Aiven (full HTTPS & basic auth):
You need to generate a JKS and a P12 file from the PEM and cert files given by the SaaS provider:
openssl pkcs12 -export -inkey service.key -in service.cert -out client.keystore.p12 -name service_key
keytool -import -file ca.pem -alias CA -keystore client.truststore.jks
The configuration will look like this example:
akhq:
connections:
ssl-dev:
properties:
bootstrap.servers: "{{host}}.aivencloud.com:12835"
security.protocol: SSL
ssl.truststore.location: {{path}}/avnadmin.truststore.jks
ssl.truststore.password: {{password}}
ssl.keystore.type: "PKCS12"
ssl.keystore.location: {{path}}/avnadmin.keystore.p12
ssl.keystore.password: {{password}}
ssl.key.password: {{password}}
schema-registry:
url: "https://{{host}}.aivencloud.com:12838"
basic-auth-username: avnadmin
basic-auth-password: {{password}}
properties: {}
connect:
- name: connect-1
url: "https://{{host}}.aivencloud.com:{{port}}"
basic-auth-username: avnadmin
basic-auth-password: {{password}}
- `akhq.pagination.page-size`: number of topics per page (default: 25)
- `akhq.topic.default-view`: default list view (ALL, HIDE_INTERNAL, HIDE_INTERNAL_STREAM, HIDE_STREAM)
- `akhq.topic.internal-regexps`: list of regexps for topics to be considered internal (internal topics can't be deleted or updated)
- `akhq.topic.stream-regexps`: list of regexps for topics to be considered internal stream topics
These parameters are the default values used in the topic creation page:
- `akhq.topic.retention`: default retention in ms
- `akhq.topic.replication`: default number of replicas to use
- `akhq.topic.partition`: default number of partitions
- `akhq.topic-data.sort`: default sort order (OLDEST, NEWEST) (default: OLDEST)
- `akhq.topic-data.size`: max records per page (default: 50)
- `akhq.topic-data.poll-timeout`: the time, in milliseconds, spent waiting in poll if data is not available in the buffer (default: 1000)
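A sketch putting these options together in application.yml (the pagination and topic-data values are the documented defaults; the topic section values are illustrative):

```yaml
akhq:
  pagination:
    page-size: 25                # documented default
  topic:
    default-view: HIDE_INTERNAL  # illustrative
    retention: 86400000          # illustrative: 1 day in ms
    replication: 3               # illustrative
    partition: 3                 # illustrative
  topic-data:
    sort: OLDEST                 # documented default
    size: 50                     # documented default
    poll-timeout: 1000           # documented default
```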
To deserialize topics containing data in Protobuf format, you can set a topics mapping: for each `topic-regex` you can specify a `descriptor-file-base64` (a descriptor file encoded in Base64) and the corresponding message types for keys and values. If, for example, keys are not in Protobuf format, `key-message-type` can be omitted; the same goes for `value-message-type`. This configuration can be specified for each Kafka cluster.
An example configuration looks as follows:
akhq:
connections:
kafka:
properties:
# standard kafka properties
deserialization:
protobuf:
topics-mapping:
- topic-regex: "album.*"
descriptor-file-base64: "Cs4BCgthbGJ1bS5wcm90bxIXY29tLm5ldGNyYWNrZXIucHJvdG9idWYidwoFQWxidW0SFAoFdGl0bGUYASABKAlSBXRpdGxlEhYKBmFydGlzdBgCIAMoCVIGYXJ0aXN0EiEKDHJlbGVhc2VfeWVhchgDIAEoBVILcmVsZWFzZVllYXISHQoKc29uZ190aXRsZRgEIAMoCVIJc29uZ1RpdGxlQiUKF2NvbS5uZXRjcmFja2VyLnByb3RvYnVmQgpBbGJ1bVByb3RvYgZwcm90bzM="
value-message-type: "Album"
- topic-regex: "film.*"
descriptor-file-base64: "CuEBCgpmaWxtLnByb3RvEhRjb20uY29tcGFueS5wcm90b2J1ZiKRAQoERmlsbRISCgRuYW1lGAEgASgJUgRuYW1lEhoKCHByb2R1Y2VyGAIgASgJUghwcm9kdWNlchIhCgxyZWxlYXNlX3llYXIYAyABKAVSC3JlbGVhc2VZZWFyEhoKCGR1cmF0aW9uGAQgASgFUghkdXJhdGlvbhIaCghzdGFycmluZxgFIAMoCVIIc3RhcnJpbmdCIQoUY29tLmNvbXBhbnkucHJvdG9idWZCCUZpbG1Qcm90b2IGcHJvdG8z"
value-message-type: "Film"
- topic-regex: "test.*"
descriptor-file-base64: "Cs4LChhzdHJlYW1pbmctcHJvdG9jb2wucHJvdG8SLmNvbS5uZXRjcmFja2VyLnByb3RvYnVmLnNlcmlhbGl6YXRpb24ucHJvdG9jb2wiIwoFUG9pbnQSDAoBeBgBIAEoAVIBeBIMCgF5GAIgASgBUgF5IscFCgVEYXR1bRIfCgtjb2x1bW5fbmFtZRgBIAEoCVIKY29sdW1uTmFtZRJbCgtjb2x1bW5fdHlwZRgCIAEoDjI6LmNvbS5uZXRjcmFja2VyLnByb3RvYnVmLnNlcmlhbGl6YXRpb24ucHJvdG9jb2wuQ29sdW1uVHlwZVIKY29sdW1uVHlwZRIfCgtzY2hlbWFfbmFtZRgLIAEoCVIKc2NoZW1hTmFtZRJ4ChFzY2hlbWFfcGFyYW1ldGVycxgMIAMoCzJLLmNvbS5uZXRjcmFja2VyLnByb3RvYnVmLnNlcmlhbGl6YXRpb24ucHJvdG9jb2wuRGF0dW0uU2NoZW1hUGFyYW1ldGVyc0VudHJ5UhBzY2hlbWFQYXJhbWV0ZXJzEiUKDWRhdHVtX2ludGVnZXIYAyABKAVIAFIMZGF0dW1JbnRlZ2VyEh8KCmRhdHVtX2xvbmcYBCABKANIAFIJZGF0dW1Mb25nEiEKC2RhdHVtX2Zsb2F0GAUgASgCSABSCmRhdHVtRmxvYXQSIwoMZGF0dW1fZG91YmxlGAYgASgBSABSC2RhdHVtRG91YmxlEiUKDWRhdHVtX2Jvb2xlYW4YByABKAhIAFIMZGF0dW1Cb29sZWFuEiMKDGRhdHVtX3N0cmluZxgIIAEoCUgAUgtkYXR1bVN0cmluZxIhCgtkYXR1bV9ieXRlcxgJIAEoDEgAUgpkYXR1bUJ5dGVzElgKC2RhdHVtX3BvaW50GAogASgLMjUuY29tLm5ldGNyYWNrZXIucHJvdG9idWYuc2VyaWFsaXphdGlvbi5wcm90b2NvbC5Qb2ludEgAUgpkYXR1bVBvaW50GkMKFVNjaGVtYVBhcmFtZXRlcnNFbnRyeRIQCgNrZXkYASABKAlSA2tleRIUCgV2YWx1ZRgCIAEoCVIFdmFsdWU6AjgBQgcKBWRhdHVtIlIKA1JvdxJLCgVkYXR1bRgBIAMoCzI1LmNvbS5uZXRjcmFja2VyLnByb3RvYnVmLnNlcmlhbGl6YXRpb24ucHJvdG9jb2wuRGF0dW1SBWRhdHVtImkKBlNvdXJjZRIcCgljb25uZWN0b3IYASABKAlSCWNvbm5lY3RvchIcCgl0aW1lc3RhbXAYAiABKARSCXRpbWVzdGFtcBIjCg1sYXN0X3NuYXBzaG90GAMgASgIUgxsYXN0U25hcHNob3QijwIKCEVudmVsb3BlEhwKCXRpbWVzdGFtcBgBIAEoBFIJdGltZXN0YW1wEhwKCW9wZXJhdGlvbhgCIAEoCVIJb3BlcmF0aW9uElAKCGRhdGFfcm93GAMgASgLMjMuY29tLm5ldGNyYWNrZXIucHJvdG9idWYuc2VyaWFsaXphdGlvbi5wcm90b2NvbC5Sb3dIAFIHZGF0YVJvdxIdCglkYXRhX2pzb24YBCABKAlIAFIIZGF0YUpzb24STgoGc291cmNlGAUgASgLMjYuY29tLm5ldGNyYWNrZXIucHJvdG9idWYuc2VyaWFsaXphdGlvbi5wcm90b2NvbC5Tb3VyY2VSBnNvdXJjZUIGCgRkYXRhKnMKCkNvbHVtblR5cGUSCwoHSU5URUdFUhAAEggKBExPTkcQARIJCgVGTE9BVBACEgoKBkRPVUJMRRADEgsKB0JPT0xFQU4QBBIKCgZTVFJJTkcQBRIICgRKU09OEAYSCQoFQllURVMQBxIJCgVQT0lOVBAIQkUKLmNvbS5uZXRjcmFja2VyLnByb3RvYnVmLnNlcmlhbGl6YXRpb24ucHJvdG9jb2xCEVN0cmVhbWluZ1Byb3RvY29sSAFiBnByb3RvMw=="
key-message-type: "Row"
value-message-type: "Envelope"
More examples of Protobuf deserialization can be found in the tests. Info about descriptor file generation can be found in the test resources.
`akhq.security.default-group`: default group for every user, even unlogged ones. Out of the box, the default group is `admin`, which allows full read/write access to the whole app.
Security & roles are enabled by default, but anonymous users have full access. You can completely disable security with `micronaut.security.enabled: false`.
If you need a read-only application, simply add this to your configuration file:
akhq:
security:
default-group: reader
AKHQ uses JWT tokens to perform authentication. Please generate a secret that is at least 256 bits and change the config like this:
micronaut:
security:
enabled: true
token:
jwt:
signatures:
secret:
generator:
secret: <Your secret here>
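One way to generate such a secret (a sketch; any random value of at least 256 bits will do):

```bash
openssl rand -base64 32  # 32 random bytes = 256 bits, base64-encoded
```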
Groups allow you to limit users. Define groups with specific roles for your users:
- `akhq.security.default-group`: default group for all users, even unlogged ones
- `akhq.security.groups`: groups list definition
  - `- name: group-name`: group identifier
  - `roles`: list of roles for the group
  - `attributes.topics-filter-regexp`: regexp to filter the topics available to the current group
  - `attributes.connects-filter-regexp`: regexp to filter the Connect tasks available to the current group
3 default groups are available:
- `admin` with all rights
- `reader` with read-only access to everything in AKHQ
- `no-roles` with no roles, which forces the user to log in
- `akhq.security.basic-auth`: list of users & passwords with their assigned groups
  - `- username: actual-username`: login of the current user as a yaml key (can be anything: email, login, ...)
  - `password`: password hashed with SHA-256 (default) or BCrypt. The password can be generated:
    - for the default SHA-256, with the command `echo -n "password" | sha256sum` or the Ansible filter `{{ 'password' | hash('sha256') }}`
    - for BCrypt, with the Ansible filter `{{ 'password' | password_hash('blowfish') }}`
  - `passwordHash`: password hashing algorithm, either `SHA256` or `BCRYPT`
  - `groups`: groups for the current user
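For example, hashing the (deliberately weak, illustrative) password `password` with the default SHA-256:

```bash
echo -n "password" | sha256sum
# 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```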
Take care that basic auth uses a session store in server memory. If your instance is behind a reverse proxy or a load balancer, you will need to forward the session cookie named `SESSION` and/or use session stickiness.
Configure the basic-auth connection in AKHQ:
akhq.security:
basic-auth:
- username: admin
password: "$2a$<hashed password>"
passwordHash: BCRYPT
groups:
- admin
- username: reader
password: "<SHA-256 hashed password>"
groups:
- reader
Configure how the LDAP groups will be mapped to AKHQ groups:
- `akhq.security.ldap.groups`: LDAP groups list
  - `- name: ldap-group-name`: LDAP group name (same name as in LDAP)
  - `groups`: list of AKHQ groups to use for the current LDAP group
Example using an online LDAP test server.
Configure the LDAP connection in Micronaut:
micronaut:
security:
ldap:
default:
enabled: true
context:
server: 'ldap://ldap.forumsys.com:389'
managerDn: 'cn=read-only-admin,dc=example,dc=com'
managerPassword: 'password'
search:
base: "dc=example,dc=com"
groups:
enabled: true
base: "dc=example,dc=com"
If you want to enable anonymous auth to your LDAP server, you can pass:
managerDn: ''
managerPassword: ''
Debugging the LDAP connection can be done with:
curl -i -X POST -H "Content-Type: application/json" \
-d '{ "configuredLevel": "TRACE" }' \
http://localhost:8080/loggers/io.micronaut.configuration.security
Configure AKHQ groups, LDAP groups and users:
akhq:
security:
groups:
- name: topic-reader # Group name
roles: # roles for the group
- topic/read
attributes:
# Regexp to filter topic available for group
topics-filter-regexp: "test\\.reader.*"
connects-filter-regexp: "^test.*$"
- name: topic-writer # Group name
roles:
- topic/read
- topic/insert
- topic/delete
- topic/config/update
attributes:
topics-filter-regexp: "test.*"
connects-filter-regexp: "^test.*$"
ldap:
groups:
- name: mathematicians
groups:
- topic-reader
- name: scientists
groups:
- topic-reader
- topic-writer
users:
- username: franz
groups:
- topic-reader
- topic-writer
To enable OIDC in the application, you'll first have to enable OIDC in Micronaut:
micronaut:
security:
oauth2:
enabled: true
clients:
google:
client-id: "<client-id>"
client-secret: "<client-secret>"
openid:
issuer: "<issuer-url>"
To further tell AKHQ to display OIDC options on the login page and customize claim mapping, configure OIDC in the AKHQ config:
akhq:
security:
oidc:
enabled: true
providers:
google:
label: "Login with Google"
username-field: preferred_username
groups-field: roles
default-group: topic-reader
groups:
- name: mathematicians
groups:
- topic-reader
- name: scientists
groups:
- topic-reader
- topic-writer
users:
- username: franz
groups:
- topic-reader
- topic-writer
The username field can be any string field; the roles field has to be a JSON array.
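For example, the claims of a token matching the configuration above could look like this (an illustration, not a full token):

```json
{
  "preferred_username": "franz",
  "roles": ["mathematicians", "scientists"]
}
```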
- `micronaut.server.context-path`: if AKHQ is behind a reverse proxy, the path to AKHQ with a trailing slash (optional). Example: if AKHQ is behind a reverse proxy at http://my-server/akhq, set `micronaut.server.context-path: "/akhq/"`. Not needed if you're behind a reverse proxy with a subdomain like http://akhq.my-server/
- `akhq.clients-defaults.{{admin|producer|consumer}}.properties`: default configuration for the admin client, producer or consumer. All properties from the Kafka documentation are available (see the sketch below)
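A minimal sketch overriding one consumer default (`isolation.level` is a standard Kafka consumer property, chosen here purely for illustration):

```yaml
akhq:
  clients-defaults:
    consumer:
      properties:
        isolation.level: read_committed  # only return committed records
```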
Since AKHQ is based on Micronaut, you can customize the configuration (server port, SSL, ...) with the Micronaut configuration. More information can be found in the Micronaut documentation.
The AKHQ docker image supports 3 environment variables to handle configuration:
- `AKHQ_CONFIGURATION`: a string containing the full configuration in YAML that will be written to /app/configuration.yml in the container
- `MICRONAUT_APPLICATION_JSON`: a string containing the full configuration in JSON format
- `MICRONAUT_CONFIG_FILES`: a path to a configuration file in the container. The default path is `/app/application.yml`
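For example, a sketch passing the whole configuration through AKHQ_CONFIGURATION (the broker address is a placeholder):

```bash
docker run -d -p 8080:8080 \
  -e AKHQ_CONFIGURATION='
akhq:
  connections:
    my-cluster:
      properties:
        bootstrap.servers: "kafka:9092"' \
  tchiotludo/akhq
```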
Take care when you mount configuration files not to remove the AKHQ files located in /app.
You need to explicitly mount `/app/application.yml` and not the whole `/app` directory.
Mounting the directory would remove the AKHQ binaries and give you this error: `/usr/local/bin/docker-entrypoint.sh: 9: exec: ./akhq: not found`
volumeMounts:
- mountPath: /app/application.yml
subPath: application.yml
name: config
readOnly: true
An experimental API is available that allows you to fetch everything exposed by AKHQ through an API.
Take care: this API is experimental and will change in a future release. Some endpoints expose too much data and are slow to fetch, and we will remove some properties in the future in order to make them faster.
Example: the topic list endpoint exposes log dirs, consumer groups and offsets. Fetching all of these is slow for now, and we will remove them in the future.
You can discover the API endpoints here:
- `/api`: a RapiDoc webpage that documents all the endpoints
- `/swagger/akhq.yml`: a full OpenAPI specification file
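For example, downloading the OpenAPI specification from a local instance:

```bash
curl http://localhost:8080/swagger/akhq.yml
```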
Several monitoring endpoints are enabled by default. You can disable them, or restrict access to authenticated users only, through the Micronaut configuration (see the sketch after this list):
- `/info`: info endpoint with git status information
- `/health`: health endpoint
- `/loggers`: loggers endpoint
- `/metrics`: metrics endpoint
- `/prometheus`: Prometheus endpoint
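A sketch restricting all management endpoints to authenticated users, assuming Micronaut's standard `endpoints` configuration keys:

```yaml
endpoints:
  all:
    sensitive: true  # require authentication for every management endpoint
```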
You can debug all query durations from AKHQ with this command:
curl -i -X POST -H "Content-Type: application/json" \
-d '{ "configuredLevel": "TRACE" }' \
http://localhost:8080/loggers/org.akhq
You can get access to the latest features / bug fixes with the docker dev image, automatically built from the dev tag:
docker pull tchiotludo/akhq:dev
The dev jar is not published on GitHub; you have 2 ways to get the dev jar:
Get it from the docker image:
docker pull tchiotludo/akhq:dev
docker run --rm --name=akhq -it tchiotludo/akhq:dev
docker cp akhq:/app/akhq.jar .
Or build it with `./gradlew shadowJar`; the jar will be located at `build/libs/akhq-*.jar`
A docker-compose file is provided to start a development environment.
Just install docker & docker-compose, clone the repository and issue a simple `docker-compose -f docker-compose-dev.yml up` to start a dev server.
The dev server is a Java server & a webpack-dev-server with live reload.
The configuration for the dev server is in `application.dev.yml`.
Since Confluent 5.5.0, Avro schemas can be reused by other schemas through schema references. This feature allows you to define a schema once and use it as a record type inside one or more other schemas.
When registering new Avro schemas with the AKHQ UI, it is now possible to pass a slightly more complex object with a `schema` and a `references` field.
To register a new schema without references, no need to change anything:
{
"name": "Schema1",
"namespace": "org.akhq",
"type": "record",
"fields": [
{
"name": "description",
"type": "string"
}
]
}
To register a new schema with a reference to an already registered schema:
{
"schema": {
"name": "Schema2",
"namespace": "org.akhq",
"type": "record",
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "schema1",
"type": "Schema1"
}
]
},
"references": [
{
"name": "Schema1",
"subject": "SCHEMA_1",
"version": 1
}
]
}
Documentation on Confluent 5.5 and schema references can be found here.
- Adeo
- Auchan Retail
- Bell
- BMW Group
- Boulanger
- GetYourGuide
- Klarna
- La Redoute
- Leroy Merlin
- NEXT Technologies
- Nuxeo
- Pipedrive
- BARMER
- TVG
Many thanks to:
- JetBrains for their free OpenSource license.
- Apache, Apache Kafka, Kafka and associated open source project names are trademarks of the Apache Software Foundation. AKHQ is not affiliated with, endorsed by, or otherwise associated with the Apache Software Foundation.
Apache 2.0 © tchiotludo