Improve metadata
avelanarius commented
- Refactor `refresh_metadata()` from `kafka_producer` into a separate class (maybe `metadata_updater` or `metadata_manager`?). Warning: you may need to change the field `connection_manager` in `kafka_producer` from `connection_manager` type to `lw_shared_ptr<connection_manager>` type in order to pass it to `metadata_updater`.
- Currently each `produce` request in `kafka_producer` refreshes the metadata information. Modify it to cache the information (inside `metadata_updater`): `refresh_metadata()` in `metadata_updater` should refresh and store the information, and subsequent `get_metadata()` calls in `metadata_updater` should read the stored metadata. `produce` in `kafka_producer` should use `get_metadata()`, and `init` in `kafka_producer` should run `refresh_metadata()`.
I recommend making a PR after the ↑ previous 2 points ↑, so that we can use this code in other tasks (the next 2 points won't change the interface of the class).
- Implement functionality to update the cache every X seconds (run `refresh_metadata()` automatically every X seconds inside `metadata_updater`).
- Add error handling/retries to `metadata_updater`. Currently we ask only 1 broker about the metadata, but that broker could become nonresponsive (`metadata._error_code != NONE`). In such a case, we should ask another broker for metadata (and if that fails, another broker, and another...). Keep in mind that at the start we have only 1 user-provided bootstrap broker (also modify this to be a list of many bootstrap brokers). Once we get our first metadata response, we should rely on its broker list (for the next `refresh_metadata()` and its retries).