The Citadel project uses ksqlDB to build an abstraction on top of Kafka Streams to extract, transform, and load data from Kafka into persistence sinks. All modules run as Docker containers (except SAP HANA).
- Docker
- jq
- ksqtools project (in Titan)
In order to successfully start/run the Citadel project, the ksqlDB plugins (UDFs) must be placed in the correct directory.
To start: `bash start.sh` (or `sh start.sh`)
To stop: `bash stop.sh` (or `sh stop.sh`)
The script automatically produces some static test data into the Kafka topic. In order to produce custom messages, log in to the broker:
docker exec -it broker /bin/bash
Then start the `kafka-console-producer` and push data like this:
kafka-console-producer --bootstrap-server localhost:9092 --topic events --property "parse.key=true" --property "key.separator=:"
>"event_1":{"eventId": "event_1", "event_type": "behavior", "count": 5, "meta": "test event"}
>^C
Note: To quit the producer console, press Ctrl + C (`^C`).
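Since jq is already a prerequisite, a keyed message line like the one above can also be built programmatically rather than typed by hand. A minimal sketch (the field values are just placeholders, not real event data):

```shell
# Build a "key:json" line in the format the console producer expects
# (key.separator is ":" as configured above). jq guarantees valid JSON.
EVENT_ID="event_2"
MSG=$(jq -nc --arg id "$EVENT_ID" \
  '{eventId: $id, event_type: "behavior", count: 7, meta: "scripted event"}')
echo "\"$EVENT_ID\":$MSG"
```

The resulting line can be pasted into (or piped to) `kafka-console-producer`.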
The data is transformed by ksqlDB and then forwarded to the sinks. We can verify that the data ended up in our configured sinks.
We may query the REST API exposed by Elasticsearch, e.g.:
curl -X GET "http://localhost:9200/transformed_behavior_events/_search?pretty=true&q=*:*"
Note: the URL must be quoted, otherwise the shell interprets the `&` and backgrounds the command.
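The raw search response buries the documents under `hits.hits[]._source`; jq (already a prerequisite) can pull them out. Demonstrated below against a sample response body (the shape is the standard Elasticsearch search response; the actual fields depend on the transformed events):

```shell
# Extract only the documents from an Elasticsearch search response.
# A sample response is piped in via a here-doc for illustration; in
# practice, pipe the curl output from above into the same jq filter.
jq -c '.hits.hits[]._source' <<'EOF'
{"hits":{"total":{"value":1},"hits":[{"_index":"transformed_behavior_events","_source":{"eventId":"event_1","event_type":"behavior","count":5}}]}}
EOF
# → {"eventId":"event_1","event_type":"behavior","count":5}
```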
Log in to the Postgres container:
docker exec -it postgres /bin/bash
Using psql, enter the interactive shell:
psql -d transformedevents
Query using SQL commands, e.g.:
SELECT * FROM events;
- The `delete.enabled` value is set to `false` for sinks, meaning delete operations are not allowed. This has been done because supporting deletes requires `pk.mode` to be set to `record_key`. But in order to achieve that, we would need to explicitly identify keys in each stream, which is currently not supported by `CREATE STREAM ... AS SELECT`.
- Support for array fields has not been decided yet (should they fan out as multiple rows?)
- One event may lead to multiple entries across multiple tables. This needs to be statically defined and can't be inferred on the fly (dynamically using logic) in ksqlDB. It also leads to the creation of multiple streams and topics.
- Primarily for Mac: if some pods keep crashing with status codes 127 or 137, that is most probably due to OOM. We may increase the memory limit in the Docker dashboard to allocate more memory (the default is 1 GB). More details here
- Primarily for Linux/WSL: Docker containers may not be able to connect to the external world (internet). To fix this, update the `/etc/docker/daemon.json` file (create it if it doesn't exist) with a DNS entry, e.g.
{
"dns": ["10.1.2.3", "8.8.8.8"]
}
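Docker will refuse to start if `daemon.json` is not valid JSON (e.g. a missing colon), so it is worth linting the file before restarting the daemon. jq (already a prerequisite) works as a JSON validator; the restart command below assumes a systemd-based Linux host:

```shell
# jq -e exits non-zero on invalid JSON. Shown against an inline sample;
# on the real host, run:
#   jq -e . /etc/docker/daemon.json && sudo systemctl restart docker
printf '%s\n' '{"dns": ["10.1.2.3", "8.8.8.8"]}' | jq -e .
```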