Separate output file for every URL given in the start_urls list of a Scrapy spider

Problem Description

I want to create a separate output file for every URL I have set in the spider's start_urls, or somehow split the output files by start URL.

Following is the start_urls list of my spider:

start_urls = ['http://www.dmoz.org/Arts/', 'http://www.dmoz.org/Business/', 'http://www.dmoz.org/Computers/']

I want to create separate output files like:

Arts.xml
Business.xml
Computers.xml

I don't know exactly how to do this. I am thinking of achieving it by implementing something like the following in the spider_opened method of an item pipeline class:

import re
from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class CleanDataPipeline(object):
    def __init__(self):
        self.cnt = 0
        self.filename = ''

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # PROBLEM: `response` is not defined here -- spider_opened handlers
        # only receive the spider, so this line raises a NameError.
        referer_url = response.request.headers.get('referer', None)
        if referer_url in spider.start_urls:
            catname = re.search(r'/(.*)$', referer_url, re.I)
            self.filename = catname.group(1)

        file = open('output/' + str(self.cnt) + '_' + self.filename + '.xml', 'w+b')
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        #file.close()

    def process_item(self, item, spider):
        self.cnt = self.cnt + 1
        self.spider_closed(spider)
        self.spider_opened(spider)
        self.exporter.export_item(item)
        return item

Here I am trying to find the referer URL of every scraped item in the start_urls list; if the referer URL is found there, the file name is built from it. The problem, though, is how to access the response object inside the spider_opened() method. If I could access it there, I could create the file based on it.

Any help finding a way to do this? Thanks in advance!

Solved my problem by changing my pipeline code as follows.

import os
import re
from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class CleanDataPipeline(object):
    def __init__(self):
        self.filename = ''
        self.exporters = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider, fileName = 'default.xml'):
        self.filename = fileName
        file = open('output/' + self.filename, 'w+b')
        exporter = XmlItemExporter(file)
        exporter.start_exporting()
        self.exporters[fileName] = exporter

    def spider_closed(self, spider):
        for exporter in self.exporters.itervalues(): 
            exporter.finish_exporting()

    def process_item(self, item, spider):
        # Derive the category name from the item's start_url.
        fname = 'default'
        catname = re.search(r'http://www.dmoz.org/(.*?)/', str(item['start_url']), re.I)
        if catname:
            fname = catname.group(1)
        self.curFileName = fname + '.xml'

        # spider_opened created an exporter writing to default.xml before any
        # item was seen; once the first item reveals its real category, rename
        # that file and re-key its exporter so later items keep appending to it.
        if self.filename == 'default.xml':
            if os.path.isfile('output/' + self.filename):
                os.rename('output/' + self.filename, 'output/' + self.curFileName)
            exporter = self.exporters['default.xml']
            del self.exporters['default.xml']
            self.exporters[self.curFileName] = exporter
            self.filename = self.curFileName

        if self.filename != self.curFileName and not self.exporters.get(self.curFileName):
            self.spider_opened(spider, self.curFileName)

        self.exporters[self.curFileName].export_item(item)
        return item

Also implemented make_requests_from_url in the spider (Scrapy calls it for each URL in start_urls) to attach the start URL to every request:

from scrapy.http import Request

def make_requests_from_url(self, url):
    request = Request(url, dont_filter=True)
    # Carry the originating start URL in the request meta so the spider's
    # parse callback can copy it onto each item.
    request.meta['start_url'] = url
    return request
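
The snippet above only attaches the start URL to the request; for the pipeline's item['start_url'] lookup to work, the spider's parse callback also has to copy it onto each item. A minimal sketch of that step, which the post does not show (DmozItem is a hypothetical Item subclass with a start_url field):

def parse(self, response):
    item = DmozItem()  # hypothetical Item subclass declaring a start_url Field
    # ...populate the other item fields from the response...
    item['start_url'] = response.meta['start_url']
    yield item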

Recommended Answer

I'd implement a more explicit approach (not tested):

  • configure the list of possible categories in settings.py:

CATEGORIES = ['Arts', 'Business', 'Computers']

  • define your start_urls based on that setting:

    start_urls = ['http://www.dmoz.org/%s' % category for category in settings.CATEGORIES]
    

  • add a category Field to your Item;

    in the spider's parse method, set the category field according to the current response.url, e.g.:

    def parse(self, response):
         ...
         item['category'] = next(category for category in settings.CATEGORIES if category in response.url)
         ...
    

  • in the pipeline, open exporters for all categories and choose which exporter to use based on item['category']:

    def spider_opened(self, spider):
        ...
        # Assumes `settings` is in scope, e.g. via `from scrapy.conf import settings`
        # (the old-style settings singleton from the same era as scrapy.contrib).
        self.exporters = {}
        for category in settings.CATEGORIES:
            file = open('output/%s.xml' % category, 'w+b')
            exporter = XmlItemExporter(file)
            exporter.start_exporting()
            self.exporters[category] = exporter
    
    def spider_closed(self, spider):
        for exporter in self.exporters.itervalues(): 
            exporter.finish_exporting()
    
    def process_item(self, item, spider):
        self.exporters[item['category']].export_item(item)
        return item
    

  • You would probably need to tweak it a bit to make it work, but I hope you get the idea: store the category inside the item being processed, and choose the file to export to based on the item's category value. One remaining step, registering the pipeline in settings.py, is sketched below.
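
    A minimal settings.py sketch for that last step (the myproject.pipelines path is a hypothetical example; adjust it to your project layout):

    CATEGORIES = ['Arts', 'Business', 'Computers']

    # On recent Scrapy versions ITEM_PIPELINES is a dict mapping the pipeline's
    # class path to a priority; very old versions used a plain list of paths.
    ITEM_PIPELINES = {
        'myproject.pipelines.CleanDataPipeline': 300,  # hypothetical module path
    }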

    Hope that helps.
