# Job Queue Experiment
This project is unfinished; do not use it. It's similar to something like RabbitMQ, but a lot more featureful for its niche: LOW-BANDWIDTH, IMPORTANT jobs. This is not for sending emails. It's for dealing with complex multi-stage jobs that have a high chance of failure. Logging is well supported, so you can tell exactly what happened when things go wrong. You can attach arbitrary information to jobs, like a `user_id`, to keep track of who's doing what without cluttering your main database.
## Example Usage
TODO
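Until real examples land, here is a hedged sketch of what creating a job might look like. The helper, the endpoint URL, and the defaults are assumptions (loosely mirroring the `jobs` schema in the Database section), not the project's actual API.

```js
// Hypothetical sketch: field names mirror the `jobs` schema,
// but the helper, defaults, and endpoint are all assumptions.
function makeJob(type, data, opts = {}) {
  return {
    type,                          // arbitrary job type, e.g. 'email'
    data,                          // arbitrary payload attached to the job
    priority: opts.priority ?? 0,
    attempts: opts.attempts ?? 3,  // total tries before the job fails
    backoff: opts.backoff ?? 60,   // seconds between retries
    delay: opts.delay ?? 0,        // seconds before the first run
  };
}

const job = makeJob('email', { subject: 'hey', message: 'what up bro' });

// Then POST it to the (assumed) REST endpoint:
// fetch('http://localhost:3000/queues/default/jobs', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(job),
// });
```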
## Features
- Delayed jobs
- Optional retries
  - with custom backoff
- Job priority
- RESTful JSON API for
  - creating jobs
  - searching jobs
  - processing jobs
- Job data is kept; keep all your job history here and query it whenever
- Processes jobs by
  - node callbacks
  - using webhooks
  - calling CLI commands
  - or polling
- Web API lets you configure new queues without having to restart the server
- The default queue can be used for multiple things, by `switch`ing on an arbitrary `type`
- Rate limit jobs
  - only process 5 at a time
  - only process 5 per minute
- Job monitoring (logging, progress; when finished you can check the result, or the reason it failed)
- Batches????? (do something when all jobs in a batch are finished)
- When being processed, a job can either: succeed, fail, die (ignore remaining attempts), or retry (don't count current attempt as failed)
- Clean up hanging processing jobs
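The four processing outcomes above could be handled roughly like this sketch. The function name and the treatment of `attempts` as "tries remaining" are assumptions; only the outcome names and the `state` values come from this document.

```js
// Hypothetical sketch of how a worker outcome might update a job.
// States mirror the schema's `state` field; everything else is assumed,
// including treating `attempts` as the number of tries remaining.
function applyOutcome(job, outcome) {
  switch (outcome) {
    case 'success':
      return { ...job, state: 'success' };
    case 'die': // give up immediately, ignoring remaining attempts
      return { ...job, state: 'killed' };
    case 'retry': // try again without counting this attempt as failed
      return { ...job, state: 'pending' };
    case 'fail': {
      const attemptsLeft = job.attempts - 1;
      if (attemptsLeft <= 0) return { ...job, attempts: 0, state: 'failed' };
      // schedule the retry `backoff` seconds out
      return {
        ...job,
        attempts: attemptsLeft,
        state: 'delayed',
        delayTil: Date.now() + job.backoff * 1000,
      };
    }
    default:
      throw new Error(`unknown outcome: ${outcome}`);
  }
}
```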
## Database
### jobs

```yaml
type: 'email'
data:
  subject: 'hey'
  message: 'what up bro'
priority: 0
attempts: 3
backoff: 60
delay: 10
insertAt: timestamp
resetAt: timestamp
delayTil: timestamp
batchId: batch._id
parentId: job._id
progress: 0
logs: [{t: timestamp, m: 'message'}]
result: where you can store job output
state: pending | processing | failed | success | delayed | killed
onSuccessDelete: false
onComplete: job
```
### batches

(currently only used to generate a batch id)
### config

```yaml
queues: {default: defaultQueueConfig}
```
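A queue config might look something like the sketch below. None of these field names are confirmed by the project; they just illustrate the rate-limiting and processing-mode features listed above.

```js
// Hypothetical shape for a queue config document; every field name
// here is an assumption illustrating the features above.
const defaultQueueConfig = {
  concurrency: 5,        // "only process 5 at a time"
  ratePerMinute: 5,      // "only process 5 per minute"
  processing: 'webhook', // or 'callback', 'cli', 'poll'
  webhookUrl: 'http://localhost:4000/work', // hypothetical worker endpoint
};

const config = { queues: { default: defaultQueueConfig } };
```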