zendesk/racecar

long running processes

HDLP9 opened this issue · 4 comments

HDLP9 commented

Hi,

Currently we have a specific topic/consumer that can run for more than 1 or 2 days (we have a lot of data coming in from our IoT devices), and until now we've been handling this with Sidekiq. Since we already have Kafka in place, it makes sense to move our Sidekiq background jobs to Apache Kafka as well... but is it a good approach? We think so, but let me know; it would reduce our technology stack.

From what I've found, there are a lot of "issues" with running long processes on Kafka, as described in this article:

https://medium.com/codex/dealing-with-long-running-jobs-using-apache-kafka-192f053e1691

I need some pointers on how to do this with racecar.

The second possible solution in the article is to increase the timeout, which racecar exposes as:

max_poll_interval

Is this type of config only available globally for all consumers, or can it be set for a specific one? I can't find a way to do it per consumer!
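
For context, the only place I can see to set it is globally, e.g. in config/racecar.rb (a sketch assuming racecar 2.x, where max_poll_interval appears to be a process-wide setting with no per-consumer override; the two-hour value is just an example):

require "racecar"

Racecar.configure do |config|
  # Maximum time allowed between two polls before the broker considers
  # this consumer dead and rebalances the group. Applies to every
  # consumer running in this process.
  config.max_poll_interval = 2 * 60 * 60 # seconds
end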

The third solution is to pause and resume the consumer, or to call poll in a different thread to keep the consumer alive. I've tried both approaches, but each of them seems to fail somehow:

# Attempt 1: pause the partition while the long job runs, then resume it.
def process(message)
  @consumer.pause(message.topic, message.partition, message.offset)
  # execute something...
  @consumer.resume(message.topic, message.partition)
end

# Attempt 2: keep polling from a second thread so the broker does not
# consider the consumer dead while the job runs.
def process(message)
  running = true
  calling_home = Thread.new do
    while running
      @consumer.poll
      sleep 0.1
    end
  end
  # execute something...
  running = false
  calling_home.join
end

mensfeld commented

@HDLP9 do you have jobs that run that long based on a single Kafka message?

HDLP9 commented

@mensfeld unfortunately yes... We receive a lot of data (millions of messages every day) from our IoT devices, and sometimes there's a new one connecting to our network that already has a lot of historical data.

Every time that happens we need to reprocess a lot of statistics and background algorithms... Currently we use Sidekiq for that, and it's working OK.

It's just for one specific topic; everything else runs in seconds.

mensfeld commented

I have a few questions in order to be able to help (hope @dasch doesn't mind):

  1. Can't you split the incoming data into chunks and commit the offset/state periodically? (See the sketch after this list.)
  2. How would you like to handle restarts and deployments? How do you handle that now with Sidekiq?
  3. Is "all of this data" in a single message? If so, what is it's size?
  4. What do you expect in case of an error? Is the full process restarted now or are the worker jobs idempotent?
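
For question 1, the idea would be something like the sketch below: split the oversized payload into many small messages, so each offset commit covers seconds of work rather than days. This is only a sketch, assuming racecar 2.x consumers can produce messages; the topic names, JSON payload format, and chunk size are invented for illustration.

require "json"
require "racecar"

class HistoricalDataSplitter < Racecar::Consumer
  subscribes_to "device-backfill"

  CHUNK_SIZE = 1_000 # records per chunk, purely illustrative

  def process(message)
    records = JSON.parse(message.value)

    # Re-publish the big payload as many small messages; each one can
    # then be processed (and its offset committed) in seconds.
    records.each_slice(CHUNK_SIZE) do |chunk|
      produce(JSON.generate(chunk), topic: "device-backfill-chunks", key: message.key)
    end

    deliver!
    # Racecar commits this message's offset after #process returns, so a
    # crash here only replays the cheap splitting step, not the real work.
  end
end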
dasch commented

I second those questions, but would also just emphasize that the Kafka consumer processing model is simply not great for very long running jobs – you'll want a background job for that, e.g. Sidekiq. You may schedule those jobs from a Kafka consumer, which is a common pattern, but of course you'll lose ordering of messages. I would say that with such long processing times, your app needs to be resilient to reordering regardless.
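
To make that scheduling pattern concrete, here is a minimal sketch of a consumer that only enqueues: the consumer returns in milliseconds, so max_poll_interval never comes into play, and Sidekiq keeps handling the long work as it does today (class and topic names are invented):

require "racecar"
require "sidekiq"

# The consumer does nothing but hand the work to Sidekiq, so it never
# blocks the poll loop.
class DeviceBackfillConsumer < Racecar::Consumer
  subscribes_to "device-backfill"

  def process(message)
    ReprocessStatisticsJob.perform_async(message.value)
  end
end

# The job carries the hours-long work; Sidekiq handles retries,
# deployments, and restarts.
class ReprocessStatisticsJob
  include Sidekiq::Job

  def perform(payload)
    # ...reprocess statistics and run the backfill algorithms here
  end
end

As noted above, once the work leaves the consumer, ordering across jobs is no longer guaranteed, so the jobs need to be idempotent and order-insensitive.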