I am trying to scrape data from the espncricinfo website. I request the page for every IPL match, but the script sometimes fails after 10 matches, 20 matches, or even 2, so it never finishes. My code and the error are below; please help me with the error. I am using requests.get() to fetch each web page from the given links.
```python
import requests
from bs4 import BeautifulSoup
import html5lib as h5l
import json
import pandas as pd
import os
import time

X = [['ID', 'Season', 'Home', 'Away', 'TossWin', 'TossDec', 'Venue', 'Winner']]

webpages = ["https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2007/08;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2009;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2010;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2011;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2012;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2013;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2014;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2015;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2016;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2017;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2018;trophy=117;type=season",
            "https://stats.espncricinfo.com/ci/engine/records/team/match_results.html?id=2019;trophy=117;type=season"
            ]

# For match ID
match_id = 1

# Iterating over the given webpages of seasonal match results
for page in webpages:
    r = requests.get(page)
    htmlContent = r.content
    soup = BeautifulSoup(htmlContent, 'html.parser')

    # Finding links for all match summaries in the given season
    links = soup.find_all("a", class_="data-link", text="T20")

    # Iterating over matches
    for link in links:
        r = requests.get("https://stats.espncricinfo.com:443" + link['href'])
        htmlContent = r.content
        soup = BeautifulSoup(htmlContent, 'html.parser')

        # Finding season
        Season_var = soup.find("a", class_="d-block").getText()
        season = Season_var[21:]

        # Finding short names of teams
        teams = []
        T = soup.find_all("a", class_="team-name")
        for tt in T:
            teams.append(tt.getText())

        # Finding full names of teams
        full_team_names = []
        TN = soup.find_all("a", class_="team-name")
        for ttt in TN:
            span = ttt.find("span")
            full_team_names.append(span['title'])

        # Toss details
        toss_det = soup.find("td", text="Toss").findNext("td").getText()
        toss_det = toss_det.split(',')
        toss_det[0] = toss_det[0][:-1]

        # Toss winner
        toss_win = ""
        if toss_det[0] == full_team_names[0]:
            toss_win = toss_win + "Team 1"
        else:
            toss_win = toss_win + "Team 2"

        # Toss decision
        toss_array = toss_det[1].split()
        toss_dec = toss_array[2]

        # Finding ground
        full_place = soup.find("td", class_="match-venue").getText()
        places = full_place.split(',')
        stadium = places[0]

        # Finding winner of match
        win = ""
        winner_tag = soup.find("td", text="Points").findNext("td").getText()
        winner_arr = winner_tag.split(',')
        if winner_arr[0][-1] == 1:
            win = win + "Tie"
        elif winner_arr[0][:-2] == full_team_names[0]:
            win = win + "Team 1"
        else:
            win = win + "Team 2"

        temp = [match_id, season, teams[0], teams[1], toss_win, toss_dec, stadium, win]
        del season, teams, toss_win, toss_dec, stadium, win
        match_id = match_id + 1
        X.append(temp)
        del temp
        print("Running", match_id - 1)
        time.sleep(2)

df = pd.DataFrame(X)
df.to_csv('Matches.csv')
print("Completed")
```
**Error**:
```
Running 1
Running 2
Running 3
Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\urllib3\connection.py", line 159, in _new_conn
    conn = connection.create_connection(
  File "C:\Python38\lib\site-packages\urllib3\util\connection.py", line 61, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "C:\Python38\lib\socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 670, in urlopen
    httplib_response = self._make_request(
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 381, in _make_request
    self._validate_conn(conn)
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 976, in _validate_conn
    conn.connect()
  File "C:\Python38\lib\site-packages\urllib3\connection.py", line 308, in connect
    conn = self._new_conn()
  File "C:\Python38\lib\site-packages\urllib3\connection.py", line 171, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x00000155C7FE7E50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\requests\adapters.py", line 439, in send
    resp = conn.urlopen(
  File "C:\Python38\lib\site-packages\urllib3\connectionpool.py", line 724, in urlopen
    retries = retries.increment(
  File "C:\Python38\lib\site-packages\urllib3\util\retry.py", line 439, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='stats.espncricinfo.com', port=443): Max retries exceeded with url: /ci/engine/matnnection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "f:\GITHUB\Web-Scraping\IPL matches data\scraper.py", line 42, in <module>
    r = requests.get("https://stats.espncricinfo.com:443" + link['href'])
  File "C:\Python38\lib\site-packages\requests\api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python38\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python38\lib\site-packages\requests\sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python38\lib\site-packages\requests\sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python38\lib\site-packages\requests\adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='stats.espncricinfo.com', port=443): Max retries exceeded with url: /ci/engine/match/335986.html (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x00000155C7FE7E50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
```
Best Answer
It looks like the site has some protection against scraping.
The first step is to add headers to your requests:
```python
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3',
    'Referer': 'https://www.espncricinfo.com/',
    'Upgrade-Insecure-Requests': '1',
    'Connection': 'keep-alive',
    'Pragma': 'no-cache',
    'Cache-Control': 'no-cache',
}
```
and change the following line in your code:
```python
r = requests.get(page, headers=headers)
```
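Note that the traceback shows the failure occurring on the second requests.get call (the per-match page), so, as an assumption on my part, the same headers would likely need to be passed there as well:

```python
# Assumption: the inner per-match request should also send the same headers
r = requests.get("https://stats.espncricinfo.com:443" + link['href'], headers=headers)
```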
Then you may want to wait a more "random" amount of time between requests:
```python
import random
...
time.sleep(random.random() * 10)
```
With those changes, your code worked perfectly for me, except for one bug when toss_det equals no_toss, but that is not a website-related problem.
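For completeness, here is a minimal sketch of how the suggested headers and randomized waits could be combined with automatic connection retries. The retry/backoff part (urllib3's Retry mounted on a requests.Session) is my own suggestion, not something the answer states, and fetch_page is a hypothetical helper name:

```python
import random
import time

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Same idea as the headers suggested above (trimmed for brevity)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0',
    'Referer': 'https://www.espncricinfo.com/',
}

# A session that retries failed connections with exponential backoff
session = requests.Session()
retry = Retry(total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retry))
session.headers.update(headers)

def fetch_page(url):
    """Hypothetical helper: fetch a URL with headers, retries, and a random pause."""
    response = session.get(url, timeout=30)
    response.raise_for_status()
    time.sleep(random.random() * 10)  # wait 0-10 s between requests, per the answer's advice
    return response.content
```

The two requests.get calls in the original script could then be replaced with fetch_page(...), leaving the rest of the parsing logic unchanged.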
Regarding "python - Error while web scraping with Python", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62820300/