
A Scrapy crawler example: scraping the Sunshine Government Affairs site (sun0769.com) with CrawlSpider

What to crawl

As before, we crawl the complaint (问题反映) section of the Sunshine Government Affairs site. The main goal is to practice using CrawlSpider.

How to do it

What is special about CrawlSpider is that you add the option -t crawl when generating the spider; the command used in this example is scrapy genspider -t crawl ygspider sun0769.com.
The key point is that a Rule automatically extracts every URL matching its pattern, sends a request for it, and fetches the response. The rules used in this example are shown below.

rules = (
    Rule(LinkExtractor(allow=r'/html/question/\d+/\d+\.shtml'), callback='parse_item'),  # extract the complaint detail-page URLs; their responses are parsed by parse_item
    Rule(LinkExtractor(allow=r'/index.php/question/report\?page=\d+'), follow=True),  # keep following the pagination links
)

Parameter breakdown:

  • LinkExtractor is Scrapy's built-in link extractor class.
  • allow: a regular expression; only links matching it are extracted.
  • callback: the name of the method that handles the response; this is where the data extraction happens.
  • follow: set to True so the Rule keeps being applied to the pages it finds, following pagination until nothing matches any more (a quick way to test a pattern interactively is sketched right after this list).
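If you are unsure whether an allow pattern matches the links you expect, you can try a LinkExtractor by itself before wiring it into a Rule. The snippet below is a small sketch meant to be run inside scrapy shell (started with, for example, scrapy shell followed by any list-page URL of the site; the exact page does not matter), where response is already defined by the shell.

# Run inside `scrapy shell <a list-page URL>`; `response` is provided by the shell.
from scrapy.linkextractors import LinkExtractor

le = LinkExtractor(allow=r'/html/question/\d+/\d+\.shtml')
for link in le.extract_links(response)[:5]:  # Link objects carrying absolute URLs
    print(link.url)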

Code

ygspider.py:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import re

class YgspiderSpider(CrawlSpider):
    name = 'ygspider'
    allowed_domains = ['sun0769.com']
    start_urls = ['http://wz.sun0769.com/html/top/report.shtml']

    rules = (
        Rule(LinkExtractor(allow=r'/html/question/\d+/\d+\.shtml'), callback='parse_item'),  # extract the complaint detail-page URLs; their responses are parsed by parse_item
        Rule(LinkExtractor(allow=r'/index.php/question/report\?page=\d+'), follow=True),  # keep following the pagination links
    )

    def parse_item(self, response):
        item = {}
        item["title"] = response.xpath("//div[@class='wzy1']//span[@class='niae2_top']/text()").get()
        item["content"] = response.xpath("//div[@class='wzy1'][1]/table[2]/tr[1]/td[1]//text()").extract()
        item["content"] = [re.sub(r"\xa0|\s|\r\n", "", i) for i in item["content"]]  # strip \xa0 and whitespace to tidy the output
        item["content"] = [i for i in item["content"] if len(i) > 0]  # drop the empty strings left after cleaning
        item["img"] = response.xpath("//div[@class='wzy1'][1]/table[2]/tr[1]/td[1]//img/@src").extract()
        item["img"] = ["http://wz.sun0769.com" + i for i in item["img"]]  # turn the relative src values into absolute URLs
        print(item)
        return item
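Two small notes that are standard CrawlSpider behavior rather than anything specific to this post: the callback is named parse_item instead of parse, because CrawlSpider uses the parse method internally to apply the rules, so a CrawlSpider must not override parse; and the spider is run the usual way with scrapy crawl ygspider from the project directory.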

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for yg project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'yg'

SPIDER_MODULES = ['yg.spiders']
NEWSPIDER_MODULE = 'yg.spiders'

LOG_LEVEL = 'WARNING'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'yg (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'yg.middlewares.YgSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'yg.middlewares.YgDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'yg.pipelines.YgPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
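
Since ITEM_PIPELINES is left commented out, parse_item only prints each item. If you want to persist the results, one option (a minimal sketch, not part of the original post; it assumes you uncomment ITEM_PIPELINES with 'yg.pipelines.YgPipeline': 300) is a pipeline that writes each item as one line of JSON:

# pipelines.py -- a minimal sketch; requires enabling
# ITEM_PIPELINES = {'yg.pipelines.YgPipeline': 300} in settings.py
import json

class YgPipeline:
    def open_spider(self, spider):
        self.file = open("items.jl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # Each item returned by parse_item is a plain dict; dump it as one JSON line.
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()

Alternatively, Scrapy's built-in feed exports achieve the same with no extra code: scrapy crawl ygspider -o items.jl.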

Crawl results

[Screenshot of the crawled items printed to the console]

Summary

In the pattern allow=r'/index.php/question/report\?page=\d+', the ? must be escaped, because an unescaped ? is a regex quantifier rather than a literal question mark.
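
A quick way to see why the escape matters (a standalone snippet, not from the original post; the URL is just an example of the site's pagination format):

import re

url = "http://wz.sun0769.com/index.php/question/report?page=2"

# Unescaped '?' is a quantifier ("the preceding 't' is optional"), so the pattern
# no longer requires a literal question mark and fails to match this URL.
print(re.search(r"/index\.php/question/report?page=\d+", url))   # None
# Escaped '\?' matches the literal '?' in the URL.
print(re.search(r"/index\.php/question/report\?page=\d+", url))  # a match object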

item["content"] = response.xpath("//div[@class='wzy1'][1]/table[2]/tr[1]/td[1]//text()").extract()
item["content"] = [re.sub(r"\xa0|\s|\r\n", "",i) for i in item["content"]] #去除\xa0和空格,美化输出格式
item["content"] = [i for i in item["content"] if len(item["content"]) > 0] #去掉空元素

These lines tidy up the complaint text: they strip \xa0 and whitespace and then drop the empty strings left in the list.
The XPath "//div[@class='wzy1'][1]/table[2]/tr[1]/td[1]//text()" introduces something new: when several tags share the same attribute, you can select them by position. Here div[@class='wzy1'][1] means the first div whose class is 'wzy1'.
response.xpath("some xpath").get() is another way to pull out whatever the XPath expression points to; it returns the first match as a string.
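
Both points can be checked outside a spider with a standalone Selector (a small sketch with made-up HTML, just for illustration):

from scrapy.selector import Selector

html = """
<div class="wzy1"><span>first</span></div>
<div class="wzy1"><span>second</span></div>
"""
sel = Selector(text=html)

# Positional predicate: pick the first div whose class is 'wzy1'.
print(sel.xpath("//div[@class='wzy1'][1]/span/text()").get())    # first

# .get() returns the first match (or None); .getall() / .extract() return all matches.
print(sel.xpath("//div[@class='wzy1']/span/text()").getall())    # ['first', 'second']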

------------- End of post. Thank you for reading. -------------