
A Scrapy crawler example: scraping the problem-report module of the 阳光政务 portal (wz.sun0769.com)

Requirement

Scrape the problem-report module of the 阳光政务 portal. The main goal is to practice how pagination requests are implemented.

Approach

First, compare the Elements panel with the HTML response: the Elements panel shows a tbody tag, but the HTML response contains no tbody at all (the browser inserts it automatically, and this happens often, so be extra careful whenever you see tbody). Always write your XPath against the HTML response!
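A quick way to check what the server actually returns is the Scrapy shell. The sketch below uses the list page of this project; inside the shell you can inspect the raw response and try the row XPath directly:

scrapy shell 'http://wz.sun0769.com/html/top/report.shtml'
# inside the shell:
'tbody' in response.text                                          # often False, even though the Elements panel shows <tbody>
response.xpath("//div[@class='newsHead clearfix']/table[2]/tr")   # test the row XPath against the real response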
Get start_url right. Today I experienced first-hand how painful it is to keep adjusting XPath while start_url was not even set correctly 😭
Plan:

  1. Extract the data on the current page and the detail-page URLs
  2. Send requests for the detail pages with scrapy.Request() and extract the detail-page data
  3. Extract the next-page URL, send the pagination request, and repeat steps 1-2
yield scrapy.Request(
    item["href"],
    callback=self.parse_detail,
    meta={"item": item}
)

This snippet is the key part. It sends the detail-page request; parse_detail is a function we define to extract the detail-page data. meta passes the data collected so far into the body of parse_detail. This pattern is used when the current callback has only extracted part of the data and the remaining fields have to be added in the callback of a follow-up request.
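On the receiving side, the item is taken back out of response.meta. A minimal sketch of that callback (the full version appears in wzsun.py below):

def parse_detail(self, response):
    item = response.meta["item"]   # the item partially filled in parse()
    # ... add the detail-page fields here ...
    yield item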
This example also uses the items module to define the fields we want to extract. To use the item class in the spider code, it has to be imported, for example:
from sun.items import SunItem

Code

Spider code: wzsun.py

# -*- coding: utf-8 -*-
import scrapy
from sun.items import SunItem


class WzsunSpider(scrapy.Spider):
    name = 'wzsun'
    allowed_domains = ['sun0769.com']
    start_urls = ['http://wz.sun0769.com/html/top/report.shtml']

    def parse(self, response):
        # 1. Extract the data on the current page and the detail-page URLs
        tr_list = response.xpath("//div[@class='newsHead clearfix']/table[2]/tr")
        for tr in tr_list:
            item = SunItem()
            item["title"] = tr.xpath("./td[3]/a/@title").extract_first()
            # print(item["title"])
            item["href"] = tr.xpath("./td[3]/a/@href").extract_first()
            # print(item["href"])
            item["publish_date"] = tr.xpath("./td[last()]/text()").extract_first()
            # print(item["publish_date"])
            # 2. Send the detail-page request with scrapy.Request() and extract the detail-page data
            yield scrapy.Request(
                item["href"],
                callback=self.parse_detail,
                meta={"item": item}
            )
        # 3. Pagination
        next_url = response.xpath("//div[@class='pagination']//a[text()='>']/@href").extract_first()
        # print(next_url)
        if next_url is not None:
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )

    def parse_detail(self, response):
        item = response.meta["item"]
        item["content"] = response.xpath("//div[@class='wzy1']/table[2]//td[@class='txt16_3']/text()").extract()
        # print(item["content"])
        item["content_img"] = response.xpath("//div[@class='wzy1']/table[2]//td[@class='txt16_3']//img/@src").extract()
        item["content_img"] = ["http://wz.sun0769.com" + i for i in item["content_img"]]
        # print(item["content_img"])
        # print(item)
        yield item
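To run the spider, execute the crawl command from the project root (the project here is named sun, and the spider name wzsun matches the name attribute above):

scrapy crawl wzsun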

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class SunItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    publish_date = scrapy.Field()
    href = scrapy.Field()
    content = scrapy.Field()
    content_img = scrapy.Field()

pipelines.py: when saving the data, we clean up the extracted content, replacing the \xa0 characters and whitespace inside content. This is also a refresher on re.sub, the substitution function of the regular-expression module.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import re


class SunPipeline(object):
    def process_item(self, item, spider):
        item["content"] = self.process_content(item["content"])  # clean up the content field
        print(item)
        return item

    def process_content(self, content):
        content = [re.sub(r"\xa0|\s", "", i) for i in content]  # strip \xa0 and whitespace
        content = [i for i in content if len(i) > 0]  # drop empty strings from the list
        return content
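To illustrate what process_content does, here is a small standalone check; the sample strings are made up purely for demonstration:

import re

content = ["hello\xa0world", "  ", "\xa0", "ok"]
content = [re.sub(r"\xa0|\s", "", i) for i in content]  # -> ['helloworld', '', '', 'ok']
content = [i for i in content if len(i) > 0]            # -> ['helloworld', 'ok']
print(content)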

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for sun project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sun'

SPIDER_MODULES = ['sun.spiders']
NEWSPIDER_MODULE = 'sun.spiders'

LOG_LEVEL = 'WARNING'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'sun.middlewares.SunSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'sun.middlewares.SunDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sun.pipelines.SunPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

In the settings file we enabled the USER_AGENT field and set a UA.

Results

(screenshot of the crawl output)

Summary

ROBOTSTXT_OBEY = False in settings.py means the crawler does not obey the rules in robots.txt.
In regular expressions, \s matches whitespace characters (space, tab, newline, etc.).
Pagination requests are implemented like this:

next_url = response.xpath("//div[@class='pagination']//a[text()='>']/@href").extract_first()
# print(next_url)
if next_url is not None:
    yield scrapy.Request(
        next_url,
        callback=self.parse
    )

Always remember to call .extract_first() or .extract() after an xpath() call.
a[text()='>'] selects the a tag whose text content is >.
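As a quick reminder of the difference between the two, here is a sketch using a made-up HTML snippet in a Scrapy Selector:

from scrapy import Selector

sel = Selector(text="<a href='/page/2'>next</a><a href='/page/3'>3</a>")
sel.xpath("//a/@href").extract()        # ['/page/2', '/page/3']  -- all matches, as a list of strings
sel.xpath("//a/@href").extract_first()  # '/page/2'               -- first match, or None if nothing matched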
