bwlewis/doRedis

Clean up the job queue after user interrupt

Closed · 1 comment

Currently, if the user interrupts a foreach run, the job queue is not cleaned up, so the workers continue to process all the remaining tasks (for nothing).

How to reproduce:

require(doRedis)
removeQueue('jobs')
redisFlushAll() # be careful - this removes everything on the Redis server!
registerDoRedis('jobs', '127.0.0.1')

foreach(i = 1:4) %dopar% {
    for (j in 1:10) {
        cat("*")
        flush.console()
        Sys.sleep(2)
    }
    cat("\n")
}

Now interrupt the running code in R and issue redisHGetAll("jobs:1"). The tasks are still there, and the workers will continue to process them until all are finished - to no use at all.
(Tested with R 3.1.0, doRedis 1.1.1, rredis 1.6.9, and Redis server 2.6.12, all on a single Windows XP host.)

Expected behaviour: upon user interrupt, the job queue should be cleaned up so that unprocessed tasks no longer burden the workers. This could be handled with tryCatch(..., interrupt = function(e) { ... }).
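A minimal sketch of that idea, wrapping the %dopar% call in tryCatch so an interrupt triggers queue cleanup. This reuses the 'jobs' queue from the reproduction above; the handler body is only illustrative and is not doRedis's actual implementation (a real fix would live inside the package):

```r
# Sketch only: run the parallel loop, removing the queue on user interrupt.
# Assumes doRedis is loaded and 'jobs' was registered as in the example above.
result <- tryCatch(
    foreach(i = 1:4) %dopar% {
        Sys.sleep(2)
        i
    },
    interrupt = function(cond) {
        # Drop the queued tasks so workers stop pulling stale work.
        removeQueue('jobs')
        message("Interrupted; job queue 'jobs' removed.")
        NULL
    }
)
```

Note that removeQueue() here discards the whole queue, which is only safe when no other job shares it - the subtlety addressed by the eventual fix.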

Note: solving this issue would probably also mean that a manual job-queue cleanup via the removeQueue/registerDoRedis sequence is no longer necessary in user code (see question http://stackoverflow.com/q/25947991/684229), which would be a good workaround for the serious issue #19 (though that issue should be solved in its own right).

Next step would be to also stop the workers which are already processing the tasks, see issue #21.

Should finally be fixed now. The fix is a bit tricky because one work queue may be used to queue tasks from multiple jobs. The solution removes all tasks from the queue on user interrupt and then restores "alien" tasks (those belonging to other jobs) back to the queue.
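The drain-and-restore idea can be sketched roughly as follows. This is not the actual commit: the helper name, the assumption that tasks sit on a Redis list named after the queue, and the task$jobID field are all illustrative (rredis does provide redisLPop and redisRPush):

```r
library(rredis)

# Sketch: on interrupt, drain the shared work queue and push back only the
# tasks that belong to OTHER jobs ("alien" tasks), discarding our own.
restoreAlienTasks <- function(queue, myJobID) {
    kept <- list()
    repeat {
        task <- redisLPop(queue)            # pop the next queued task
        if (is.null(task)) break            # queue drained
        if (!identical(task$jobID, myJobID))
            kept[[length(kept) + 1]] <- task  # alien task: keep it
        # tasks from the interrupted job are simply dropped
    }
    for (task in kept)
        redisRPush(queue, task)             # restore alien tasks in order
}
```

Draining the whole list and re-pushing is simple but not atomic; a concurrent worker could pop an alien task mid-cleanup, which is harmless since it would process it anyway.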

This should also fix #19.