
Scrapy crawler example: logging in to GitHub with a POST request

What to crawl

Use Scrapy to send a POST request and log in to GitHub.

How to do it

The GitHub login flow works like this:

  1. Send a GET request to https://github.com/login and, from its response, extract the data the login POST request needs (a quick way to inspect these hidden fields is sketched right after this list).
  2. Using the data gathered in step 1, send a POST request to https://github.com/session.
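
Since the original screenshots are not reproduced here, the following standalone sketch is one way to see which hidden fields the login form actually carries. It is only an illustration, not part of the spider: it assumes the requests and parsel packages are installed, and the exact field names may change whenever GitHub updates its login page.

import requests
from parsel import Selector

# Fetch the login page outside Scrapy and list the hidden inputs in its form.
html = requests.get(
    "https://github.com/login",
    headers={"User-Agent": "Mozilla/5.0"},  # GitHub tends to reject requests without a user agent
).text
sel = Selector(text=html)
for name in sel.xpath("//form//input[@type='hidden']/@name").getall():
    print(name)  # expect names such as authenticity_token, utf8, commit, timestamp, timestamp_secret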

Code

github.py:

# -*- coding: utf-8 -*-
import scrapy
import re


class GithubSpider(scrapy.Spider):
    name = 'github'
    allowed_domains = ['github.com']
    start_urls = ['https://github.com/login']

    def parse(self, response):
        # Extract the hidden form fields required by the login POST request
        authenticity_token = response.xpath("//input[@name='authenticity_token']/@value").extract_first()
        utf8 = response.xpath("//input[@name='utf8']/@value").extract_first()
        commit = response.xpath("//input[@name='commit']/@value").extract_first()
        timestamp = response.xpath("//input[@name='timestamp']/@value").extract_first()
        timestamp_secret = response.xpath("//input[@name='timestamp_secret']/@value").extract_first()

        post_data = {  # POST body, a dict
            'login': 'xiaokunjia',
            'password': '1070710263xkj',
            'authenticity_token': authenticity_token,
            'utf8': utf8,
            'commit': commit,
            'timestamp': timestamp,
            'timestamp_secret': timestamp_secret,
        }
        yield scrapy.FormRequest(          # submit the POST form with scrapy.FormRequest
            "https://github.com/session",  # URL of the POST request
            formdata=post_data,            # POST body
            callback=self.after_login      # process the response to the POST request
        )

    def after_login(self, response):
        # Search the response HTML for the string "xiaokunjia"; if it is found,
        # the POST request succeeded and we received the logged-in page.
        print(re.findall("xiaokunjia", response.body.decode()))
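
Once the session is established, any further Request yielded from after_login is sent with the login cookies automatically, because Scrapy's cookie middleware keeps them for the session. A minimal sketch of such a follow-up (the target URL and the parse_profile callback are hypothetical additions, not part of the original spider):

    def after_login(self, response):
        # A non-empty result means the login POST succeeded
        if re.findall("xiaokunjia", response.body.decode()):
            # Hypothetical follow-up request to a page that requires login
            yield scrapy.Request("https://github.com/settings/profile",
                                 callback=self.parse_profile)

    def parse_profile(self, response):
        # The cookies set during login are attached to this request automatically
        print(response.xpath("//title/text()").extract_first())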

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for postlogin project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'postlogin'

SPIDER_MODULES = ['postlogin.spiders']
NEWSPIDER_MODULE = 'postlogin.spiders'

LOG_LEVEL = 'WARNING'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36'  # USER_AGENT must be set here, otherwise the Scrapy crawl fails

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
# # 'referer':'https://github.com',
# }

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'postlogin.middlewares.PostloginSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'postlogin.middlewares.PostloginDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'postlogin.pipelines.PostloginPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
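
The usual way to run the spider is `scrapy crawl github` from the project root. If you prefer launching it from a plain Python script, a sketch along these lines should also work; it assumes the spider lives in postlogin/spiders/github.py and that the script is executed from the project root so settings.py is picked up:

# Optional runner script; an alternative to `scrapy crawl github`.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from postlogin.spiders.github import GithubSpider

process = CrawlerProcess(get_project_settings())  # loads settings.py (USER_AGENT, LOG_LEVEL, ...)
process.crawl(GithubSpider)
process.start()  # blocks until the crawl finishes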

Crawl results

(Screenshot of the console output omitted: the print in after_login produces a non-empty list of "xiaokunjia" matches, indicating that the login POST received the logged-in page.)

Summary

The first thing to confirm is whether the URL you want to crawl returns its response HTML at all. If it does not, a likely cause is that USER_AGENT has not been set. In this example USER_AGENT is mandatory; without it Scrapy gets no data back (a per-spider alternative to editing settings.py is sketched below).
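
If you would rather not touch settings.py, the same user agent can be set for this spider only through Scrapy's custom_settings class attribute, for example:

class GithubSpider(scrapy.Spider):
    name = 'github'
    # Per-spider override; takes precedence over settings.py for this spider only
    custom_settings = {
        'USER_AGENT': ('Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                       '(KHTML, like Gecko) Chrome/80.0.3987.116 Safari/537.36'),
    }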
timestamp = response.xpath("//input[@name='timestamp']/@value").extract_first() extracts the POST data you need with an XPath expression; remember to append .extract_first().
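
To make the point concrete: without .extract_first() the XPath call returns a SelectorList rather than a plain string, which cannot go straight into the POST body. A small illustration (the printed value is made up):

sel = response.xpath("//input[@name='timestamp']/@value")
print(sel)                   # [<Selector ...>] -- a SelectorList, not a string
print(sel.extract_first())   # e.g. '1582868807760', or None if the input is missing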
Scrapy sends POST requests with scrapy.FormRequest. Usage:

yield scrapy.FormRequest(          # submit the POST form with scrapy.FormRequest
    "https://github.com/session",  # URL of the POST request
    formdata=post_data,            # POST body, a dict
    callback=self.after_login      # process the response to the POST request
)

Extension: automatically obtaining the post-login URL

scrapy.FormRequest.from_response can automatically extract the form's action URL from the login page's response HTML. In other words, it obtains the URL that the login submits to (here https://github.com/session) without hard-coding it.
Usage:

yield scrapy.FormRequest.from_response(
    response,                  # the form's action URL, https://github.com/session, is taken from this response
    formdata=post_data,
    callback=self.after_login
)
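
A useful side effect of from_response is that it pre-fills the form's hidden <input> values from the response, and formdata only overrides the fields you pass explicitly. So a sketch like the following, which sends only the credentials, should also work without extracting authenticity_token, timestamp, and the rest by hand (replace '...' with the real password):

def parse(self, response):
    yield scrapy.FormRequest.from_response(
        response,
        formdata={'login': 'xiaokunjia', 'password': '...'},  # hidden fields are filled in automatically
        callback=self.after_login
    )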