
python-2.7 - RobotParser throws SSL Certificate Verify Failed exception


I'm writing a simple web crawler in Python 2.7, and I'm getting an SSL Certificate Verify Failed exception when trying to retrieve the robots.txt file from an HTTPS website.

Here is the relevant code:

def getHTMLpage(pagelink, currenttime):
    "Downloads HTML page from server"
    #init
    #parse URL and get domain name
    o = urlparse.urlparse(pagelink, "http")
    if o.netloc == "":
        netloc = re.search(r"[^/]+\.[^/]+\.[^/]+", o.path)
        if netloc:
            domainname = "http://" + netloc.group(0) + "/"
    else:
        domainname = o.scheme + "://" + o.netloc + "/"
    if o.netloc != "" and o.netloc != None and o.scheme != "mailto": #if netloc isn't empty and it's not a mailto link
        link = domainname + o.path[1:] + o.params + "?" + o.query + "#" + o.fragment
        if not (robotfiledictionary.get(domainname)): #if robot file for domainname was not downloaded
            robotfiledictionary[domainname] = robotparser.RobotFileParser() #initialize robots.txt parser
            robotfiledictionary[domainname].set_url(domainname + "robots.txt") #set url for robots.txt
            print " Robots.txt for %s initial download" % str(domainname)
            robotfiledictionary[domainname].read() #download/read robots.txt
        elif (robotfiledictionary.get(domainname)): #if robot file for domainname was already downloaded
            if (currenttime - robotfiledictionary[domainname].mtime()) > 3600: #if robot file is older than 1 hour
                robotfiledictionary[domainname].read() #download/read robots.txt
                print " Robots.txt for %s downloaded" % str(domainname)
                robotfiledictionary[domainname].modified() #update time
        if robotfiledictionary[domainname].can_fetch("WebCrawlerUserAgent", link): #if access is allowed...
            #fetch page
            print link
            page = requests.get(link, verify=False)
            return page.text #.text is a property on requests responses, not a method
        else: #otherwise, report
            print " URL disallowed due to robots.txt from %s" % str(domainname)
            return "URL disallowed due to robots.txt"
    else: #if netloc was empty, URL wasn't parsed. report
        print "URL not parsed: %s" % str(pagelink)
        return "URL not parsed"

And this is the exception I'm getting:

 Robots.txt for https://ehi-siegel.de/ initial download
Traceback (most recent call last):
  File "C:\webcrawler.py", line 561, in <module>
    HTMLpage = getHTMLpage(link, loopstarttime)
  File "C:\webcrawler.py", line 122, in getHTMLpage
    robotfiledictionary[domainname].read() #download/read robots.txt
  File "C:\Python27\lib\robotparser.py", line 58, in read
    f = opener.open(self.url)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 443, in open_https
    h.endheaders(data)
  File "C:\Python27\lib\httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 897, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 859, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 1278, in connect
    server_hostname=server_hostname)
  File "C:\Python27\lib\ssl.py", line 353, in wrap_socket
    _context=self)
  File "C:\Python27\lib\ssl.py", line 601, in __init__
    self.do_handshake()
  File "C:\Python27\lib\ssl.py", line 830, in do_handshake
    self._sslobj.do_handshake()
IOError: [Errno socket error] [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

As you can see, I already changed the code at the end to fetch pages while ignoring the SSL certificate (I know this is frowned upon in production, but I wanted to test it), but now it seems the robotparser.read() call is the one failing SSL verification. I've seen that I could download the certificate manually and point the program at it to verify the certificate, but ideally I'd like my program to work "out of the box", since I won't personally be the one using it. Does anyone know what to do?
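For what it's worth, on Python 2.7.9+ the same verify=False behaviour can be had from the standard library without touching any library files, by swapping out the default HTTPS context (a minimal sketch, with the same security caveats as verify=False):

import ssl
#Python 2.7.9+ only: make stdlib HTTPS clients (urllib/httplib, and therefore
#robotparser) skip certificate verification, mirroring requests' verify=False
ssl._create_default_https_context = ssl._create_unverified_context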

EDIT: I went into robotparser.py. I added

import requests

and changed line 58 to

f = requests.get(self.url, verify=False)

and that seems to have fixed it. This is still not ideal, though, so I'm still open to suggestions on how to do this properly.
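If editing the installed robotparser.py is undesirable, the same override can instead be applied at runtime from the crawler itself by monkey-patching RobotFileParser.read; a rough sketch along the lines of the edit above (the helper name is made up):

import robotparser
import requests

def _read_insecure(self):
    """Drop-in replacement for RobotFileParser.read() that skips SSL verification."""
    try:
        resp = requests.get(self.url, verify=False)
    except requests.RequestException:
        self.allow_all = True #treat an unreachable robots.txt as allow-all
        return
    if resp.status_code in (401, 403):
        self.disallow_all = True
    elif 400 <= resp.status_code < 500:
        self.allow_all = True
    elif resp.status_code == 200:
        self.parse(resp.text.splitlines())

robotparser.RobotFileParser.read = _read_insecure #patch once at startup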

Best Answer

I found the solution myself. Using urllib3's request functionality, I was able to verify all of the websites and keep accessing them.

I still had to edit the robotparser.py file. This is what I added at the beginning:

import urllib3
import urllib3.contrib.pyopenssl
import certifi

urllib3.contrib.pyopenssl.inject_into_urllib3() #use pyOpenSSL for TLS (gives SNI support on Python 2)
http = urllib3.PoolManager(cert_reqs="CERT_REQUIRED", ca_certs=certifi.where()) #verify certificates against certifi's CA bundle
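As a quick sanity check (using the site from the traceback as an example), the pool manager can now fetch HTTPS URLs with the certificate chain verified:

r = http.request('GET', 'https://ehi-siegel.de/robots.txt')
print r.status      #200 if the certificate validated and the file exists
print r.data[:100]  #first bytes of the robots.txt body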

And this is the definition of read(self):

def read(self):
    """Reads the robots.txt URL and feeds it to the parser."""
    f = http.request('GET', self.url) #verified HTTPS request through the urllib3 pool manager
    lines = [line.strip() for line in f.data.splitlines()] #split the body into lines (iterating f.data directly would yield single characters)
    self.errcode = f.status #use the real HTTP status code; the old URLopener is no longer needed
    if self.errcode in (401, 403):
        self.disallow_all = True
    elif self.errcode >= 400 and self.errcode < 500:
        self.allow_all = True
    elif self.errcode == 200 and lines:
        self.parse(lines)
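With that in place, a parser instance exercises the new read() like this (illustrative snippet):

rp = robotparser.RobotFileParser()
rp.set_url("https://ehi-siegel.de/robots.txt")
rp.read() #fetches over verified TLS via the urllib3 pool manager
print rp.can_fetch("WebCrawlerUserAgent", "https://ehi-siegel.de/")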

I also used the same process for the actual page requests in my program's function:

def getHTMLpage(pagelink, currenttime):
    "Downloads HTML page from server"
    #init
    #parse URL and get domain name
    o = urlparse.urlparse(pagelink, u"http")
    if o.netloc == u"":
        netloc = re.search(ur"[^/]+\.[^/]+\.[^/]+", o.path)
        if netloc:
            domainname = u"http://" + netloc.group(0) + u"/"
    else:
        domainname = o.scheme + u"://" + o.netloc + u"/"
    if o.netloc != u"" and o.netloc != None and o.scheme != u"mailto": #if netloc isn't empty and it's not a mailto link
        link = domainname + o.path[1:] + o.params + u"?" + o.query + u"#" + o.fragment
        if not (robotfiledictionary.get(domainname)): #if robot file for domainname was not downloaded
            robotfiledictionary[domainname] = robotparser.RobotFileParser() #initialize robots.txt parser
            robotfiledictionary[domainname].set_url(domainname + u"robots.txt") #set url for robots.txt
            print u" Robots.txt for %s initial download" % str(domainname)
            robotfiledictionary[domainname].read() #download/read robots.txt
        elif (robotfiledictionary.get(domainname)): #if robot file for domainname was already downloaded
            if (currenttime - robotfiledictionary[domainname].mtime()) > 3600: #if robot file is older than 1 hour
                robotfiledictionary[domainname].read() #download/read robots.txt
                print u" Robots.txt for %s downloaded" % str(domainname)
                robotfiledictionary[domainname].modified() #update time
        if robotfiledictionary[domainname].can_fetch("WebCrawlerUserAgent", link.encode('utf-8')): #if access is allowed...
            #fetch page
            if domainname == u"https://www.otto.de/" or domainname == u"http://www.otto.de":
                driver.get(link.encode('utf-8'))
                time.sleep(5)
                page = driver.page_source
                return page
            else:
                page = http.request('GET', link.encode('utf-8'))
                return page.data.decode('UTF-8', 'ignore')
        else: #otherwise, report
            print u" URL disallowed due to robots.txt from %s" % str(domainname)
            return u"URL disallowed due to robots.txt"
    else: #if netloc was empty, URL wasn't parsed. report
        print u"URL not parsed: %s" % str(pagelink)
        return u"URL not parsed"

You'll also notice that I switched my program to use UTF-8 strictly, but that's irrelevant here.

Regarding "python-2.7 - RobotParser throws SSL Certificate Verify Failed exception", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40994681/
