source.max.poll.records not updating
I'm trying to set the source.max.poll.records config. I've tried the following:
Setting it in the connector config during creation by adding the following (a sketch of the full request is below):
"source.max.poll.records": "5000"
"connector.consumer.max.poll.records": "5000"
Setting it in worker.properties:
source.max.poll.records=5000
max.poll.records=5000
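For context, this is roughly how I'm creating the connector through the Connect REST API (a sketch only; the connector name, class, and endpoint below are placeholders, not my real config, and the only line in question is the max.poll.records override):

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "my-source-connector",
        "config": {
          "connector.class": "<source connector class>",
          "source.max.poll.records": "5000"
        }
      }'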
None of the above changes the config. During Kafka Connect startup I see the following log:
2019-06-24T15:55:38.341063595Z [2019-06-24 15:55:38,270] INFO ConsumerConfig values:
2019-06-24T15:55:38.341350987Z max.poll.records = 500
What could be the reason for not accepting the config?
Thanks for the report, @Tahvok
I thought I was able to reproduce this bug, but it turns out I was looking at the wrong log line; after some more tests I wasn't able to reproduce it.
Can you re-check your logs? You may be looking at the consumer config for Kafka Connect's internal consumers rather than the one for the connector's worker task. You may want to turn the logging level up to DEBUG to see the worker logs.
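If you want to try DEBUG, one way (assuming the stock log4j setup that ships with Kafka Connect) is to raise the level in connect-log4j.properties before starting the worker, for example:

# connect-log4j.properties - switch the default level from INFO to DEBUG
log4j.rootLogger=DEBUG, stdout
# or target only the connector's package (package name is a placeholder)
log4j.logger.<your.connector.package>=DEBUG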
Can you tell me how you set max.poll.records?
I can even see that it's still 500 in the UI:
https://github.com/Landoop/kafka-connect-ui
Simply click on any task after creating the connector, and you'll see that it's set to 500.
Also, how do I know for sure that it's not working? From the metrics: it never goes beyond 500, and whenever there is lag I see a flat line at 500 records.
It worked in the end. For some reason it took a few hours until the change showed up in the tasks. Not sure why it takes so long.
Glad it worked out!
Note that ConsumerConfig is used for Kafka Connect internals as well as for source connectors, so there will be logs from multiple instances. It would be worth checking your logs closely to see whether the printed max.poll.records belongs to the Connect-internal consumer or to this connector's consumer. ConsumerConfig logging for this source connector should appear just after a line like:
KafkaSourceTask@ffffffff: task is starting.
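If it helps, something along these lines can pull out just the consumer config printed right after that task-start line (the log file name and line count are arbitrary):

grep -A 100 "task is starting" connect.log | grep "max.poll.records"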
I'll close this issue, but feel free to re-open it if you run into this again.