This is an initial work on Scrapy-Redis integration and is not production-tested. Use it at your own risk!
Features:
- Distributed crawling/scraping
- Distributed post-processing
Requirements:
- Scrapy >= 0.13 (development version)
- redis-py (tested on 2.4.9)
- redis server (tested on 2.2-2.4)
Available Scrapy components:
- Scheduler
- Duplication Filter
- Item Pipeline
- Base Spider
From PyPI:
$ pip install scrapy-redis
From GitHub:
$ git clone https://github.com/darkrho/scrapy-redis.git
$ cd scrapy-redis
$ python setup.py install
Enable the components in your settings.py:
# Enables scheduling, storing the requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Don't clean up redis queues; allows pausing/resuming crawls.
SCHEDULER_PERSIST = True
# Schedule requests using a priority queue. (default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'
# Schedule requests using a stack (LIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'
# Max idle time to prevent the spider from being closed during a distributed crawl.
# This only works if the queue class is SpiderQueue or SpiderStack,
# and may also block for the same amount of time when the spider starts for
# the first time (because the queue is empty).
SCHEDULER_IDLE_BEFORE_CLOSE = 10
# Store scraped items in redis for post-processing.
ITEM_PIPELINES = [
'scrapy_redis.pipelines.RedisPipeline',
]
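By default the components connect to a redis server on localhost. If yours runs elsewhere, point the components at it (a minimal sketch; REDIS_HOST and REDIS_PORT are assumed to be the connection settings this package reads, with the defaults shown):

# Redis connection settings (the values shown are assumed defaults;
# adjust for your setup).
REDIS_HOST = 'localhost'
REDIS_PORT = 6379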
Note
Version 0.3 changed the request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work on 0.3.
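If you were running with SCHEDULER_PERSIST enabled, one way to avoid stale data after upgrading is to delete the persisted keys before starting the new version. The key names below are assumptions based on the scheduler's default <spider>:requests and <spider>:dupefilter naming:

$ redis-cli del dmoz:requests dmoz:dupefilter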
This example illustrates how to share a spider's request queue across multiple spider instances, which is highly suitable for broad crawls.
Set up the scrapy_redis package in your PYTHONPATH.
Run the crawler for the first time, then stop it:
$ cd example-project
$ scrapy crawl dmoz
... [dmoz] ...
^C
Run the crawler again to resume the stopped crawl:
$ scrapy crawl dmoz
... [dmoz] DEBUG: Resuming crawl (9019 requests scheduled)
Start one or more additional scrapy crawlers:
$ scrapy crawl dmoz
... [dmoz] DEBUG: Resuming crawl (8712 requests scheduled)
Start one or more post-processing workers:
$ python process_items.py
Processing: Kilani Giftware (http://www.dmoz.org/Computers/Shopping/Gifts/)
Processing: NinjaGizmos.com (http://www.dmoz.org/Computers/Shopping/Gifts/)
...
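For reference, a minimal post-processing worker could look like the sketch below. It assumes the item pipeline serializes items as JSON and pushes them onto a redis list named <spider>:items (here dmoz:items); the key name, JSON encoding, and item fields are assumptions, not a guaranteed interface:

import json

import redis

# Connect to the same redis server the crawlers use
# (host/port assumed; adjust for your setup).
r = redis.Redis(host='localhost', port=6379)

while True:
    # blpop blocks until an item appears on the list;
    # 'dmoz:items' assumes the pipeline's default <spider>:items key.
    key, data = r.blpop('dmoz:items')
    item = json.loads(data)
    # 'name' and 'url' are assumed item fields from the example project.
    print 'Processing: %s (%s)' % (item['name'], item['url'])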
The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from redis. The URLs in the redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from redis.
For example, create a file myspider.py with the code below:
from scrapy_redis.spiders import RedisSpider


class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass
Then:
Run the spider:
scrapy runspider myspider.py
Push URLs to redis:
redis-cli lpush myspider:start_urls http://google.com
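Seeding can also be done from Python with redis-py, which is convenient for pushing many URLs at once (a minimal sketch; http://example.com is only an illustrative URL, and the key follows the <spider name>:start_urls convention used above):

import redis

r = redis.Redis()

# The list key is '<spider name>:start_urls' for the spider defined above.
for url in ['http://google.com', 'http://example.com']:
    r.lpush('myspider:start_urls', url)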