
Redis-based components for Scrapy

This is initial work on Scrapy-Redis integration and is not production-tested. Use it at your own risk!

Features:

  • Distributed crawling/scraping
  • Distributed post-processing

Requirements:

  • Scrapy >= 0.13 (development version)
  • redis-py (tested on 2.4.9)
  • redis server (tested on 2.2-2.4)

Available Scrapy components:

  • Scheduler
  • Duplication Filter
  • Item Pipeline
  • Base Spider

Installation

From pypi:

$ pip install scrapy-redis

From github:

$ git clone https://github.com/darkrho/scrapy-redis.git
$ cd scrapy-redis
$ python setup.py install

Usage

Enable the components in your settings.py:

# Enable the scheduler that stores the requests queue in redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Don't clean up redis queues; this allows pausing/resuming crawls
SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'

# Schedule requests using a stack (LIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'

# Max idle time (in seconds) to prevent the spider from being closed during a distributed crawl.
# This only works when the queue class is SpiderQueue or SpiderStack,
# and it may also block for the same amount of time when the spider starts for
# the first time (because the queue is empty).
SCHEDULER_IDLE_BEFORE_CLOSE = 10


# Store scraped items in redis for post-processing
ITEM_PIPELINES = [
    'scrapy_redis.pipelines.RedisPipeline',
]
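
All of the components above talk to a Redis server, which is assumed to be reachable on localhost:6379 by default. Depending on the scrapy-redis version, connection settings may be exposed in settings.py; the names below are an assumption, so verify them against the scrapy_redis source:

# Assumed setting names; check the scrapy_redis source for your version.
REDIS_HOST = 'localhost'
REDIS_PORT = 6379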

Note

Version 0.3 changed the request serialization from marshal to cPickle; therefore, requests persisted with version 0.2 will not work with 0.3.
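
A quick way to see why the two formats are incompatible (illustrative only, not scrapy-redis code): data written by marshal cannot be read back with pickle.

import marshal
import pickle

payload = marshal.dumps({'url': 'http://example.com/', 'priority': 0})
try:
    pickle.loads(payload)  # pickle does not understand marshal's format
except Exception as exc:
    print('cannot load marshal data with pickle: %s' % exc)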

Running the example project

This example illustrates how to share a spider's requests queue across multiple spider instances, which makes it well suited for broad crawls.

  1. Set up the scrapy_redis package in your PYTHONPATH

  2. Run the crawler for the first time, then stop it:

    $ cd example-project
    $ scrapy crawl dmoz
    ... [dmoz] ...
    ^C
    
  3. Run the crawler again to resume the stopped crawl:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (9019 requests scheduled)
    
  4. Start one or more additional scrapy crawlers:

    $ scrapy crawl dmoz
    ... [dmoz] DEBUG: Resuming crawl (8712 requests scheduled)
    
  5. Start one or more post-processing workers (see the worker sketch after this list):

    $ python process_items.py
    Processing: Kilani Giftware (http://www.dmoz.org/Computers/Shopping/Gifts/)
    Processing: NinjaGizmos.com (http://www.dmoz.org/Computers/Shopping/Gifts/)
    ...
    
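For reference, a minimal worker along the lines of the bundled process_items.py could look like the sketch below. It assumes the pipeline pushes JSON-encoded items onto a Redis list named "<spider>:items" (here dmoz:items) on the default local server; check the example project's pipelines.py and process_items.py for the exact key and encoding.

import json
import redis

r = redis.Redis()  # assumes Redis on localhost:6379

while True:
    # blpop blocks until an item is available and pops from the head of the list
    key, data = r.blpop(['dmoz:items'])
    item = json.loads(data)
    # assumes the item has 'name' and 'url' fields, as in the dmoz example
    print('Processing: %(name)s (%(url)s)' % item)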

Feeding a Spider from Redis

The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from redis. The URLs in the redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from redis.

For example, create a file myspider.py with the code below:

from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass

Then:

  1. run the spider:

    scrapy runspider myspider.py
    
  2. push URLs to redis (see the redis-py sketch after this list):

    redis-cli lpush myspider:start_urls http://google.com
    
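The same push can be done from Python with redis-py; a minimal sketch, assuming the default Redis server on localhost:6379:

import redis

r = redis.Redis()  # assumes Redis on localhost:6379
r.lpush('myspider:start_urls', 'http://google.com')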