Scrapy >= 1.0.0  # Finally past version 1.0, which matters a lot: there are far fewer pitfalls now. Thanks to the developers and bloggers who contributed fixes for the various incompatibilities between scrapy and scrapy-redis.
redis-py >= 2.10.0
redis server >= 2.8.0
The main changes in version 0.6 are updating the code to support Scrapy 1.0 and adding the -a domain=... option to the example spiders.
2. What scrapy-redis does and its features

What it does: scrapy-redis provides Redis-backed components for Scrapy.

Features: you can start multiple spider instances that share a single redis queue, which makes it best suited for broad, multi-domain crawls.
Distributed post-processing: scraped items are pushed into a redis queue, so the items queue can be shared and consumed by as many post-processing workers as you need.
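To make the second point concrete, here is a minimal sketch of a standalone post-processing worker. It assumes the RedisPipeline (enabled in the settings shown later) pushes JSON-serialized items onto the list myspider:items, following the library's default <spider name>:items key pattern; the host, port, and key name are assumptions you should adjust to your own setup.

import json
import redis

# Hypothetical worker script; it is not part of scrapy-redis itself.
r = redis.StrictRedis(host='localhost', port=6379)

while True:
    # Block until an item is available, then pop it from the head of the list.
    _, data = r.blpop('myspider:items')
    item = json.loads(data)
    # Do the real post-processing here, e.g. write the item to a database.
    print(item)

You can run as many copies of this worker as you like; they all drain the same redis list.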
3. The differences between scrapy and scrapy-redis, and what the components mean

Scrapy's own scheduler keeps its pending requests in in-memory queues, one per priority level, roughly:

{
    priority0: queue0
    priority1: queue1
    priority2: queue2
}
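Those queues live inside a single process, so other spider instances cannot see them. scrapy-redis swaps them out for queues stored in redis; the sketch below condenses the idea behind scrapy_redis.queue.SpiderPriorityQueue, where every instance shares one sorted set and the request priority becomes the score. Request serialization is omitted and names may differ between versions, so treat this as an outline under those assumptions, not the library's actual code.

import redis

class PriorityQueueSketch(object):
    """Simplified outline of a redis-backed priority queue; not the library source."""

    def __init__(self, server, key):
        self.server = server   # shared redis connection
        self.key = key         # e.g. 'myspider:requests'

    def push(self, data, priority):
        # Lower scores are popped first, so negate the priority.
        self.server.zadd(self.key, {data: -priority})  # redis-py 3.x mapping signature

    def pop(self):
        # Atomically read and remove the entry with the highest priority.
        pipe = self.server.pipeline()
        pipe.zrange(self.key, 0, 0)
        pipe.zremrangebyrank(self.key, 0, 0)
        results, _ = pipe.execute()
        return results[0] if results else None

# Example usage (assumes a local redis server):
# q = PriorityQueueSketch(redis.StrictRedis(), 'myspider:requests')
# q.push(b'serialized request', priority=1)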
Scrapy's built-in deduplication (its default dupe filter) boils down to this in-memory check:

def request_seen(self, request):
    # self.fingerprints is a set of request fingerprints
    fp = self.request_fingerprint(request)
    if fp in self.fingerprints:  # this membership test is the core of the dedup check
        return True
    self.fingerprints.add(fp)
    ......
Because that fingerprint set lives in the memory of one process, it cannot be shared between spider instances. In scrapy-redis, deduplication is instead handled by the Duplication Filter component, which keeps the fingerprints in redis.
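The check itself looks much like the snippet above, except that the fingerprint set becomes a redis set shared by every instance. The following is a condensed sketch of that idea (attribute and key names may differ from the library source between versions), using redis' SADD so the test and the insert happen in one command:

import redis
from scrapy.utils.request import request_fingerprint

class RedisDupeFilterSketch(object):
    """Simplified outline of a redis-backed dupe filter; not the library source."""

    def __init__(self, server, key):
        self.server = server   # redis connection shared by every spider instance
        self.key = key         # e.g. 'myspider:dupefilter'

    def request_seen(self, request):
        fp = request_fingerprint(request)
        # SADD returns 1 if the fingerprint was newly added, 0 if it was already present,
        # so the membership test and the insert happen in a single atomic command.
        added = self.server.sadd(self.key, fp)
        return added == 0

# Example usage:
# df = RedisDupeFilterSketch(redis.StrictRedis(), 'myspider:dupefilter')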
4. The quickest way to install and enable

Install:

$ pip install scrapy-redis

or

$ git clone https://github.com/darkrho/scrapy-redis.git
$ cd scrapy-redis
$ python setup.py install

Enable the components in settings.py:
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Don't cleanup redis queues, allows to pause/resume crawls.
SCHEDULER_PERSIST = True

# The three SCHEDULER_QUEUE_CLASS settings below are alternatives; keep only one.

# Schedule requests using a priority queue. (default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'

# Schedule requests using a stack (LIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'

# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block for that long when the spider starts for the first time (because the queue is empty).
SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Store scraped items in redis for post-processing.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

# Specify the host and port to use when connecting to Redis (optional).
REDIS_HOST = 'localhost'
REDIS_PORT = 6379

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
REDIS_URL = 'redis://user:pass@hostname:9001'
5. Feeding the spiders through redis

The scrapy_redis.spiders.RedisSpider class enables a spider to read its urls from redis. The urls in the redis queue are processed one after another; note that if the first request yields further requests, the spider will process those first and only then fetch the next url from the redis queue.
Rather than staying abstract, here is a small example. Create couxiaoxiao.py:
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    # The spider name also determines the default redis key it reads start urls
    # from: 'myspider:start_urls'.
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass
Run the spider:

scrapy runspider couxiaoxiao.py

Push urls into redis:

redis-cli lpush myspider:start_urls http://xiaowangzhi.com
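If you prefer to feed and inspect the queue from Python instead of redis-cli, a small redis-py script works too. This is only a convenience sketch: the key myspider:start_urls matches RedisSpider's default <name>:start_urls pattern, and the host and port are assumed to be the defaults from the settings above.

import redis

r = redis.StrictRedis(host='localhost', port=6379)

start_urls = [
    'http://xiaowangzhi.com',
    # add more seed urls here
]

for url in start_urls:
    # Any idle spider instance polling this key will pick the url up.
    r.lpush('myspider:start_urls', url)

print('queued start urls:', r.llen('myspider:start_urls'))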