Sucker Punch is a single-process Ruby asynchronous processing library. It's girl_friday and DSL sugar on top of Celluloid. With Celluloid's actor pattern, we can do asynchronous processing within a single process. This reduces the cost of hosting on a service like Heroku, along with the memory footprint of maintaining additional worker processes on a dedicated server. All queues can run within a single Rails/Sinatra process.
Sucker Punch is perfect for asynchronous processes like emailing, data crunching, or social platform manipulation. No reason to hold up a user when you can do these things in the background within the same process as your web application...
Sucker Punch is built on top of Celluloid Pools. Each job is set up as a pool, which equates to its own queue with individual workers working against the jobs. Unlike most other background processing libraries, Sucker Punch's jobs are stored in memory. The benefit is that there is no additional infrastructure requirement (i.e. database, Redis, etc.). The downside is that if the web process is restarted while there are jobs that haven't yet been processed, they will be lost. For this reason, Sucker Punch is generally recommended for jobs that are fast and non-mission-critical (i.e. logs, emails, etc.).
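Conceptually (this is an illustration, not Sucker Punch's actual internals), each job pool behaves like an in-memory queue drained by worker threads, which is why unprocessed jobs vanish when the process exits. A minimal plain-Ruby sketch of that model:

```ruby
# Toy model of an in-memory job queue: one queue, one worker thread.
# Illustration only -- not how Celluloid pools are implemented.
jobs    = Queue.new
results = Queue.new

worker = Thread.new do
  while (event = jobs.pop)   # a nil sentinel stops the worker
    results << "tracked: #{event}"
  end
end

jobs << "login"
jobs << "logout"
jobs << nil                  # shut the worker down
worker.join
# If the process died before the worker drained the queue,
# the remaining jobs would simply be lost.
```

Because the queue lives only in this process's memory, durability is traded for zero infrastructure.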
Add this line to your application's Gemfile:

```ruby
gem 'sucker_punch', '~> 1.0'
```

And then execute:

```
$ bundle
```

Or install it yourself as:

```
$ gem install sucker_punch
```
Each job acts as its own queue and should be a separate Ruby class that:

- includes `SuckerPunch::Job`
- defines the instance method `perform` that includes the code the job will run when enqueued
```ruby
# app/jobs/log_job.rb
class LogJob
  include SuckerPunch::Job

  def perform(event)
    Log.new(event).track
  end
end
```
Synchronous:

```ruby
LogJob.new.perform("login")
```

Asynchronous:

```ruby
LogJob.new.async.perform("login") # => nil
```
Jobs interacting with ActiveRecord should take special precaution not to exhaust connections in the pool. This can be done with `ActiveRecord::Base.connection_pool.with_connection`, which ensures the connection is returned to the pool when the block completes.
```ruby
# app/jobs/awesome_job.rb
class AwesomeJob
  include SuckerPunch::Job

  def perform(user_id)
    ActiveRecord::Base.connection_pool.with_connection do
      user = User.find(user_id)
      user.update_attributes(is_awesome: true)
    end
  end
end
```
We can create a job from within another job:
```ruby
class AwesomeJob
  include SuckerPunch::Job

  def perform(user_id)
    ActiveRecord::Base.connection_pool.with_connection do
      user = User.find(user_id)
      user.update_attributes(is_awesome: true)
      LogJob.new.async.perform("User #{user.id} became awesome!")
    end
  end
end
```
The number of workers can be set from the job using the `workers` method:
```ruby
class LogJob
  include SuckerPunch::Job
  workers 4

  def perform(event)
    Log.new(event).track
  end
end
```
If the `workers` method is not set, the default is `2`.
Many background processing libraries have methods to perform operations after a certain amount of time. Fortunately, timers are built into Celluloid, so you can take advantage of them with the `later` method:
```ruby
class Job
  include SuckerPunch::Job

  def perform(data)
    puts data
  end

  def later(sec, data)
    after(sec) { perform(data) }
  end
end

Job.new.async.perform("asdf")
Job.new.async.later(60, "asdf") # `perform` will be executed 60 sec. later
```
```ruby
SuckerPunch.logger = Logger.new('sucker_punch.log')
SuckerPunch.logger # => #<Logger:0x007fa1f28b83f0>
```
Note: If Sucker Punch is being used within a Rails application, Sucker Punch's logger is set to `Rails.logger` by default.
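Inside a job's `perform` method you can then write to the configured logger. A self-contained sketch using a plain `Logger` backed by an in-memory buffer stands in for `SuckerPunch.logger` here so the example runs anywhere:

```ruby
require 'logger'
require 'stringio'

# Stand-in for SuckerPunch.logger so the example is self-contained;
# the StringIO buffer lets us inspect what was written.
buffer = StringIO.new
logger = Logger.new(buffer)

# A job's perform method might log its progress like this:
logger.info("LogJob started for event=login")
logger.warn("LogJob retried once")

buffer.string # contains both log lines
```

In an application you would call `SuckerPunch.logger.info(...)` (or rely on `Rails.logger`) instead of constructing a logger by hand.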
You can customize how to handle uncaught exceptions that are raised by your jobs.
For example, using Rails and the ExceptionNotification gem, add a new initializer `config/initializers/sucker_punch.rb`:

```ruby
SuckerPunch.exception_handler { |ex| ExceptionNotifier.notify_exception(ex) }
```
Or, using Airbrake:

```ruby
SuckerPunch.exception_handler { |ex| Airbrake.notify(ex) }
```
Full job data can be reported like this:

```ruby
def perform(all, my, arguments)
  # ... your code ...
rescue StandardError
  Airbrake.error($!, [self.class.name, all, my, arguments].inspect)
  raise
end
```
Using `Timeout` causes persistent connections to randomly become corrupted. Do not use timeouts as control flow; use built-in connection timeouts instead. If you decide to use `Timeout`, treat it only as a last resort to know that something went very wrong, and ideally restart the worker process after every timeout.
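As an example of a built-in connection timeout, here is `Net::HTTP` from the standard library (the host is illustrative); the client is configured directly rather than wrapping the call in `Timeout.timeout`:

```ruby
require 'net/http'

# Configure the client's own timeouts instead of wrapping calls in
# Timeout.timeout, which can leave persistent connections corrupted.
http = Net::HTTP.new('example.com', 443) # illustrative host
http.use_ssl      = true
http.open_timeout = 2 # seconds to wait for the TCP connection to open
http.read_timeout = 5 # seconds to wait for each read of the response

# http.get('/') would now raise Net::OpenTimeout / Net::ReadTimeout
# on a slow server, with no external Timeout wrapper needed.
```

Most database and HTTP client libraries expose equivalent settings (connect/read/write timeouts), which fail cleanly at the protocol level instead of interrupting the thread mid-operation.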
Requiring this library causes your jobs to run everything inline. So a call to the following will actually be synchronous:

```ruby
# spec/spec_helper.rb
require 'sucker_punch/testing/inline'
```

```ruby
LogJob.new.async.perform("login") # => Will be synchronous and block until the job is finished
```
If you're using Sucker Punch with Rails, there's a built-in generator task:

```
$ rails g sucker_punch:job logger
```

would create the file `app/jobs/logger_job.rb` with an unimplemented `#perform` method.
Sucker Punch has been added as an Active Job adapter in Rails 4.2. See the guide for configuration and implementation.
Add Sucker Punch to your `Gemfile`:

```ruby
gem 'sucker_punch'
```

And then configure the backend to use Sucker Punch:

```ruby
# config/initializers/sucker_punch.rb
Rails.application.configure do
  config.active_job.queue_adapter = :sucker_punch
end
```
Previously, Sucker Punch required an initializer, and that posed problems for Unicorn, Passenger, and other servers that fork. Version 1 was rewritten to not require any special code to be executed after forking occurs. Please remove the initializer if you're using version >= 1.0.0.
Job classes are ultimately Celluloid Actor classes. As a result, class names are susceptible to being clobbered by Celluloid's internal classes. To ensure the intended application class is loaded, preface class names with `::`, or use names like `NotificationsMailer` or `UserMailer`. Example:

```ruby
class EmailJob
  include SuckerPunch::Job

  def perform(contact)
    @contact = contact
    # Without the `::` prefix, the Notifications class from Celluloid
    # would be loaded instead of the application's class
    ::Notifications.contact_form(@contact).deliver
  end
end
```
If you're running tests in transactions (using Database Cleaner or a native solution), Sucker Punch jobs may have trouble finding database records created during test setup. The job class runs in a separate thread, while the transaction operates on another, so the data is rolled back before the job can do its business. The best approach is to clean up data created for job tests with a truncation strategy: tag the RSpec tests as jobs, then specify the strategy in `spec_helper` as below. And do not forget to turn off transactional fixtures (delete, comment, or set it to `false`).
```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Turn off transactional fixtures (delete, comment or set it to `false`)
  # config.use_transactional_fixtures = true

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  # Clean up all jobs specs with truncation
  config.before(:each, job: true) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end
```
```ruby
# spec/jobs/email_job_spec.rb
require 'spec_helper'

# Tag the spec as a job spec so data is persisted long enough for the test
describe EmailJob, job: true do
  describe "#perform" do
    let(:user) { FactoryGirl.create(:user) }

    it "delivers an email" do
      expect {
        EmailJob.new.perform(user.id)
      }.to change { ActionMailer::Base.deliveries.size }.by(1)
    end
  end
end
```
...is awesome. But I can't take credit for it. Thanks to @jmazzi for his superior naming skills. If you're looking for a name for something, he is the one to go to.
- Fork it
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create new Pull Request