
python - Web crawler HTTP Error 403: Forbidden

Reposted · Author: 行者123 · Updated: 2023-11-28 18:50:15

I am a beginner trying to write a web spider script. I want to go to a page, enter a value into a text box, click the submit button to reach the next page, retrieve all the data on that new page, and repeat.

Here is the code I am trying (Python 2):

import urllib
import urllib2
from BeautifulSoup import BeautifulSoup

# Browser-like request headers, copied from a real Chrome request
hdr = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'none',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
}

# The value to submit in the search box
values = {'query': '5ed10c844ed4266a18d34e2ba06b381a'}
data = urllib.urlencode(values)

# POST the form data and parse the resulting page
request = urllib2.Request("https://www.virustotal.com/#search", data, headers=hdr)
response = urllib2.urlopen(request)
the_page = response.read()
pool = BeautifulSoup(the_page)

print pool

The error is as follows:

Traceback (most recent call last):
  File "C:\Users\Dipanshu\Desktop\webscraping_demo.py", line 19, in <module>
    response = urllib2.urlopen(request)
  File "C:\Python27\lib\urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python27\lib\urllib2.py", line 406, in open
    response = meth(req, response)
  File "C:\Python27\lib\urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python27\lib\urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "C:\Python27\lib\urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "C:\Python27\lib\urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 403: Forbidden

How can I fix this?

Best Answer

from bs4 import BeautifulSoup
import urllib.request

# A browser-like User-Agent is what keeps the server from answering 403
user_agent = 'Mozilla/5.0'
headers = {'User-Agent': user_agent}
target_url = 'https://www.google.co.kr/search?q=cat&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjtrZCg7uXbAhVaUd4KHc2HDgIQ_AUICygC&biw=1375&bih=842'

request = urllib.request.Request(url=target_url, headers=headers)
response = urllib.request.urlopen(request)
soup = BeautifulSoup(response.read(), 'html.parser')

target_url: the Google image search page for "cat".

Passing headers with a browser-like User-Agent is what resolves the Forbidden error: without it, urllib identifies itself as a Python client, and many servers reject such requests with 403.
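
For comparison, here is a minimal sketch of the same User-Agent fix using the third-party requests library; this is not part of the original answer, and it assumes the requests and bs4 packages are installed and that the target page can be fetched with a plain GET (the URL is a shortened stand-in for the answer's example):

import requests
from bs4 import BeautifulSoup

# Assumption: any browser-like User-Agent string is enough for this target
headers = {'User-Agent': 'Mozilla/5.0'}
target_url = 'https://www.google.co.kr/search?q=cat&tbm=isch'  # shortened example URL

response = requests.get(target_url, headers=headers, timeout=10)
response.raise_for_status()  # raises an HTTPError if the server still answers 403
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title)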

Regarding python - Web crawler HTTP Error 403: Forbidden, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/13987624/
