
A Scrapy spider example: scraping book information from Suning Books

What to scrape

Scrape the book information for every sub-category on Suning Books (book.suning.com). The fields collected are the main category name, the sub-category name, and the book information under each sub-category.

How to do it

Approach:

  1. Group the category menu on the homepage
  2. Extract the main category name, the sub-category names, and each sub-category's URL (the XPaths can be verified interactively; see the shell example after this list)
  3. Send a request to each sub-category URL and extract that sub-category's data
  4. Extract the data on the current page
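
The XPath expressions used in the spider can be checked interactively before writing any code. This uses Scrapy's standard shell tooling (not shown in the original post); the selector below is the grouping expression later used in parse():

scrapy shell https://book.suning.com/
>>> response.xpath("//div[@class='left-menu-container']//div[@class='menu-item']")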

Code

snbook.py: the key point is that pairing the main category name with each sub-category name requires nested for loops. It took quite a lot of experimenting to get this right 😱😱😱😱

# -*- coding: utf-8 -*-
import scrapy
from book.items import BookItem

class SnbookSpider(scrapy.Spider):
    name = 'snbook'
    allowed_domains = ['suning.com']
    start_urls = ['https://book.suning.com/']

    def parse(self, response):
        # 1. Group the left-hand menu into category blocks
        menu_list = response.xpath("//div[@class='left-menu-container']//div[@class='menu-item']")
        # 2. Extract the main category name, the sub-category name and the sub-category URL
        for menu in menu_list:
            sub_list = menu.xpath(".//dd/a")
            for sub in sub_list:
                # Create a fresh item per sub-category so the concurrent requests
                # below do not all share (and overwrite) the same object
                item = BookItem()
                item["category"] = menu.xpath(".//h3/a/text()").extract_first()
                item["sub_category"] = sub.xpath("./text()").extract_first()
                item["sub_category_url"] = sub.xpath("./@href").extract_first()
                # 3. Request the sub-category page and parse it in parse_detail
                yield scrapy.Request(
                    item["sub_category_url"],
                    callback=self.parse_detail,
                    meta={"item": item}
                )
                # print(item)

    def parse_detail(self, response):
        item = response.meta["item"]
        # 4. Extract the data (book titles) on the current page
        item["content_title"] = response.xpath("//img[@class='search-loading']/@alt").extract()
        print(item)
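
To run the spider, use the standard Scrapy command from the project root (the post does not show it explicitly); with LOG_LEVEL set to WARNING in settings.py, the print(item) output stays readable:

scrapy crawl snbook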

items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class BookItem(scrapy.Item):
    # define the fields for your item here like:
    category = scrapy.Field()
    sub_category = scrapy.Field()
    sub_category_url = scrapy.Field()
    content_title = scrapy.Field()
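
A note on how these declarations are used (standard scrapy.Item behaviour rather than anything specific to this project): the item is filled like a dict in the spider, and assigning a key that was never declared as a Field raises a KeyError, which is why all four fields written by snbook.py are declared here.

from book.items import BookItem

item = BookItem()
item["category"] = "Literature"   # fine: declared as a Field above
# item["price"] = 9.9             # would raise KeyError: field not declared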

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for book project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'book'

SPIDER_MODULES = ['book.spiders']
NEWSPIDER_MODULE = 'book.spiders'

LOG_LEVEL = 'WARNING'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'book.middlewares.BookSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'book.middlewares.BookDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'book.pipelines.BookPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Crawl results

(Screenshot of the crawl output.)

Summary

Practice is the sole test of truth, and this exercise keeps proving it. When grouping, do not call .extract(): keep the Selector objects so relative XPath still works, and only call .extract() or .extract_first() when pulling field values out of each group. In this example the pagination is driven by AJAX requests, which is beyond my current knowledge, so the final version only grabs the first page of each sub-category. The main thing learned here is using nested for loops so that the fields of every record stay paired one-to-one.
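
To make the grouping rule concrete, here is a minimal, self-contained sketch; the HTML is invented for the demo and is not Suning's markup. The outer and inner loops iterate over Selector objects, and .extract_first() is only called on the leaf values, which is what keeps each record's main category, sub-category and URL paired up:

from scrapy import Selector

html = """
<div class="menu-item">
  <h3><a>Literature</a></h3>
  <dl>
    <dd><a href="/lit/novel">Novels</a></dd>
    <dd><a href="/lit/poetry">Poetry</a></dd>
  </dl>
</div>
"""

sel = Selector(text=html)
for menu in sel.xpath("//div[@class='menu-item']"):   # grouping: still a Selector
    for sub in menu.xpath(".//dd/a"):                 # grouping: still a Selector
        print(
            menu.xpath(".//h3/a/text()").extract_first(),  # extract field values only
            sub.xpath("./text()").extract_first(),
            sub.xpath("./@href").extract_first(),
        )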
