agile-lab-dev/darwin

Kafka connector


Do you think it makes sense to have a Kafka connector that stores schemas directly in Kafka?

We could have a topic with infinite retention, where the key is the schema id and the value is the schema (plus other info if needed).
The connector would write to this topic and maintain an id -> schema map in memory by consuming its own records.
Of course, the assumption is that we don't have millions of schemas...
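
A minimal sketch of the write path, assuming a hypothetical `darwin-schemas` topic created with infinite retention (`retention.ms=-1`); the class and method names here are illustrative, not Darwin's actual connector API:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.{LongSerializer, StringSerializer}

class KafkaSchemaWriter(bootstrapServers: String, topic: String) {
  private val props = new Properties()
  props.put("bootstrap.servers", bootstrapServers)
  props.put("acks", "all") // don't ack a schema until all in-sync replicas have it

  private val producer =
    new KafkaProducer[java.lang.Long, String](props, new LongSerializer, new StringSerializer)

  /** Publish id -> schema (serialized, e.g. as JSON) to the schema topic. */
  def insert(id: Long, schemaJson: String): Unit =
    producer.send(new ProducerRecord(topic, Long.box(id), schemaJson)).get() // block until acked

  def close(): Unit = producer.close()
}
```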

Each instance of the connector reads and maintains all the schemas in memory, so each instance is independent of the others.
At every startup it reads the topic from the beginning.
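
A minimal sketch of that startup full load, under the same assumptions as above: assign all partitions (no consumer group, so each instance stays independent), seek to the beginning, and poll until the end offsets are reached, filling the in-memory id -> schema map:

```scala
import java.time.Duration
import java.util.Properties
import scala.collection.JavaConverters._
import scala.collection.concurrent.TrieMap
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.{LongDeserializer, StringDeserializer}

class KafkaSchemaCache(bootstrapServers: String, topic: String) {
  // In-memory id -> schema map, rebuilt from the topic at startup
  private val schemas = TrieMap.empty[Long, String]

  def fullLoad(): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", bootstrapServers)

    val consumer =
      new KafkaConsumer[java.lang.Long, String](props, new LongDeserializer, new StringDeserializer)
    try {
      val partitions = consumer
        .partitionsFor(topic).asScala
        .map(p => new TopicPartition(topic, p.partition()))
        .asJava
      consumer.assign(partitions)
      consumer.seekToBeginning(partitions)
      val end = consumer.endOffsets(partitions).asScala
        .map { case (tp, offset) => tp -> offset.longValue() }
        .toMap

      // Poll until every partition has been read up to its end offset
      while (partitions.asScala.exists(tp => consumer.position(tp) < end(tp))) {
        consumer.poll(Duration.ofMillis(500)).asScala.foreach { record =>
          schemas.put(record.key(), record.value()) // later records for an id win
        }
      }
    } finally consumer.close()
  }

  def findSchema(id: Long): Option[String] = schemas.get(id)
}
```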

The main benefits are that we have no external dependencies for use cases where we already use Kafka, and that we receive new schemas immediately.

It makes sense; the Confluent Schema Registry works as you described.