Intermittent problems when used with Celluloid
lachie opened this issue · 5 comments
Hi
I'm having trouble using the pusher gem from a Celluloid pool.
I've boiled the problematic setup down into this gem:
https://gist.github.com/lachie/65150813829df97f5d70
At times it seems to time out (longer than 5 seconds) for the first few requests. At other times it works fine for all requests.
My box has 8 cores, so the Celluloid pool has 8 actors.
Firstly, am I doing something wrong?
If not:
- is this a bug in pusher-http-ruby or the underlying httpclient gem?
- if this is due to network problems, what's the recommended approach for coping with it?
Obviously the first request for each client takes longer, since it has to set up the connection: up to 2.5 seconds when it works and when using HTTPS.
The question is whether the 5-second connection timeouts are happening because of:
- network conditions
- pusher backend struggling
- or some genuine bug or race condition somewhere
It also makes me wonder what the behaviour will be if any of the HTTP connections are dropped over time. Should I be building in retry behaviour?
Hi,
I'm not really familiar with Celluloid or with how well it works with httpclient (the underlying HTTP lib we're using). If it's using multiple threads there might be a concurrency issue with regard to keep-alive. I recommend using one Pusher instance per thread to see if that fixes the issue. Concurrent access to $outstanding_requests might also be an issue, but that shouldn't affect the pusher gem.
The timeout is the default one in the library, see 7dca547. Any link between your server and ours might fail, so adding a retry is definitely a good idea.
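A bounded-retry wrapper along the lines suggested above might look like the following sketch. The `publish` work is a stub that fails twice before succeeding (standing in for `Pusher::Client#trigger`, which is not called here), so the example is self-contained; the retry count and backoff values are illustrative assumptions, not gem defaults.

```ruby
# Hedged sketch: retry a publish a bounded number of times with backoff.
def with_retries(max_attempts = 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts   # give up after the last attempt
    sleep(0.1 * attempts)               # simple linear backoff
    retry
  end
end

# Stub publish that fails twice, then succeeds (stands in for trigger).
failures = 2
result = with_retries do
  if failures > 0
    failures -= 1
    raise "connection timed out"
  end
  :published
end

puts result # => published
```

In real use you would likely rescue the narrower error classes raised by the HTTP layer rather than `StandardError`.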
It might be easier to create 8 threads and then publish to a Queue object like this: https://gist.github.com/zimbatm/334b4337e61a43e12908
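The queue-of-workers pattern suggested above can be sketched as below. `FakeClient` is a stand-in for `Pusher::Client` so the example runs without the gem; each worker thread owns its own client, so no client state is shared across threads.

```ruby
require "thread"

# Stand-in for Pusher::Client; trigger just echoes its arguments.
class FakeClient
  def trigger(channel, event, data)
    [channel, event, data]
  end
end

queue   = Queue.new   # jobs to publish
results = Queue.new   # collected outcomes (thread-safe)

workers = 8.times.map do
  Thread.new do
    client = FakeClient.new          # one client per thread
    while (job = queue.pop)          # nil acts as a poison pill
      results << client.trigger(*job)
    end
  end
end

16.times { |i| queue << ["my-channel", "my-event", { n: i }] }
8.times { queue << nil }             # stop each worker
workers.each(&:join)

puts results.size # => 16
```

Because `Queue` is thread-safe, producers can enqueue from anywhere without extra locking.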
@zimbatm I'm experiencing thread-safety problems as well. Try not to reuse the same Pusher::Client instance; create a new one before each use. https://gist.github.com/zimbatm/334b4337e61a43e12908#file-test-rb-L19
It will probably be slower, but better safe than sorry.
Unless Celluloid is changing Ruby radically, it should be safe enough to have one client per thread. Creating a new instance for each request means not being able to use HTTP keep-alive.
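One way to get one client per thread without creating a new instance per request is thread-local storage, sketched below. Again `FakeClient` stands in for `Pusher::Client`; the point is that each thread reuses its own instance (preserving keep-alive) while never sharing one across threads.

```ruby
# Stand-in for Pusher::Client that counts its trigger calls.
class FakeClient
  attr_reader :calls
  def initialize
    @calls = 0
  end

  def trigger(_channel, _event, _data)
    @calls += 1
  end
end

# Lazily build one client per thread via thread-local storage.
def client_for_thread
  Thread.current[:pusher_client] ||= FakeClient.new
end

threads = 4.times.map do
  Thread.new do
    3.times { client_for_thread.trigger("ch", "ev", {}) }
    client_for_thread  # the same object is reused within this thread
  end
end
clients = threads.map(&:value)

puts clients.uniq.size        # => 4 (one distinct client per thread)
puts clients.map(&:calls).sum # => 12
```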
Closing for lack of reproducibility. Celluloid is also now deprecated; Sidekiq, for example, has switched to the concurrent-ruby gem.