
python - Scraping wunderground with python, without the API


I'm not very experienced at scraping data, so the problem here may be obvious to some.

What I want is to scrape historical daily weather data from wunderground.com, without paying for the API. Maybe it's not possible at all.

My approach is simply to use requests.get and save the whole text into a file (code below).

Instead of getting the tables that are accessible from a web browser (see picture below), the result is a file that contains almost everything except those tables. Something like this:

Summary
No data recorded
Daily Observations
No data recorded

Oddly enough, if I save the page with Firefox, the result depends on whether I choose "Web Page, HTML only" or "Web Page, complete": the latter includes the data I'm interested in, the former does not.

Could this be intentional, so that nobody scrapes their data? I just want to make sure there isn't a workaround to this problem.

Thanks in advance, Juan

Note: I tried using the user-agent field, to no avail.

# Note: I run > set PYTHONIOENCODING=utf-8 before executing python
import requests

# URL with wunderground weather information for a specific date:
date = '2019-03-12'
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/' + date
r = requests.get(url)

# Write a file to check if the tables are being retrieved:
with open('test.html', 'wb') as testfile:
    testfile.write(r.text.encode('utf-8'))
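For what it's worth, the tables really are missing from the HTML that requests receives: they are filled in client-side by JavaScript, which requests does not execute. A quick check (a sketch, assuming BeautifulSoup is installed; the User-Agent string below is just a typical desktop one, shown for illustration) confirms that even a browser-like User-Agent header makes no difference:

from bs4 import BeautifulSoup

# Retry with a browser-like User-Agent header; the result is the same,
# because the header is not the problem:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
r = requests.get(url, headers=headers)

# The raw response contains no populated data tables; their contents
# are injected later by JavaScript, which requests never runs:
soup = BeautifulSoup(r.text, 'html.parser')
print([t.get_text(strip=True) for t in soup.find_all('table')])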

Screenshot of the tables I want to scrape.


Update: solution found

Thanks to the selenium module, this is exactly the solution I needed. The code extracts all the tables that appear at the URL for a given date (as seen when visiting the site normally). It would need modifications to scrape over a list of dates and organize the CSV files it creates (a sketch of that follows the code below).

Note: geckodriver.exe needs to be in the working directory.

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
import re

# URL with wunderground weather information
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-3-12'

# Commands related to the webdriver (not sure what they do, but I can guess):
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
br = webdriver.Firefox(firefox_binary=bi)

# This starts an instance of Firefox at the specified URL:
br.get(url)

# I understand that at this point the data is in html format and can be
# extracted with BeautifulSoup:
sopa = BeautifulSoup(br.page_source, 'lxml')

# Close the firefox instance started before:
br.quit()

# I'm only interested in the tables contained on the page:
tablas = sopa.find_all('table')

# Write all the tables into csv files:
for i in range(len(tablas)):
    out_file = open('wunderground' + str(i + 1) + '.csv', 'w', encoding='utf-8')
    tabla = tablas[i]

    # ---- Write the table header: ----
    table_head = tabla.findAll('th')
    output_head = []
    for head in table_head:
        output_head.append(head.text.strip())

    # Some cleaning and formatting of the text before writing:
    encabezado = '"' + '";"'.join(output_head) + '"'
    encabezado = re.sub(r'\s', '', encabezado) + '\n'
    out_file.write(encabezado)

    # ---- Write the rows: ----
    filas = tabla.findAll('tr')
    for j in range(1, len(filas)):
        table_row = filas[j]
        columns = table_row.findAll('td')
        output_row = []
        for column in columns:
            output_row.append(column.text.strip())

        # Some cleaning and formatting of the text before writing:
        fila = '"' + '";"'.join(output_row) + '"'
        fila = re.sub(r'\s', '', fila) + '\n'
        out_file.write(fila)

    out_file.close()
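As mentioned above, the code still needs modifications to scrape over a list of dates. A minimal sketch of one way to do that, reusing bi and the imports from the block above and assuming pandas is installed (the date list and file-naming scheme here are just illustrative):

import pandas as pd
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Illustrative list of dates; in practice this could be generated,
# e.g. with pandas.date_range:
dates = ['2019-03-10', '2019-03-11', '2019-03-12']
base = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/'

br = webdriver.Firefox(firefox_binary=bi)
for d in dates:
    br.get(base + d)
    # Wait until the JavaScript-rendered tables are actually present:
    WebDriverWait(br, 20).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'table')))
    # One CSV per table per date, e.g. wunderground_2019-03-10_1.csv:
    for i, tabla in enumerate(BeautifulSoup(br.page_source, 'lxml').find_all('table')):
        pd.read_html(str(tabla))[0].to_csv(
            'wunderground_' + d + '_' + str(i + 1) + '.csv', index=False)
br.quit()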

Extra: @QHarr's answer works great, but I needed a few modifications to use it, since I use Firefox on my PC. It's important to note that for it to work I had to add the geckodriver.exe file to my working directory. Here's the code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
driver = webdriver.Firefox(firefox_binary=bi)
# driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
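One caveat for anyone reusing this later: FirefoxBinary is deprecated in Selenium 4, so on newer Selenium versions the Firefox setup above looks different. A minimal sketch, assuming Selenium 4 and the same Firefox/geckodriver paths as above:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.service import Service

# Point Selenium 4 at the Firefox binary and the geckodriver executable:
opts = Options()
opts.binary_location = r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe'
driver = webdriver.Firefox(service=Service('geckodriver.exe'), options=opts)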

Best answer

You can use selenium to ensure the page loads, then pandas' read_html to grab the tables:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
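Since the goal was CSV files, one possible extension (not part of the original answer) writes each table out instead of printing it:

for i, table in enumerate(tables):
    dfs = pd.read_html(table.get_attribute('outerHTML'))
    if dfs:
        # One CSV per table, mirroring the naming used earlier:
        dfs[0].fillna('').to_csv('wunderground' + str(i + 1) + '.csv', index=False)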

About python - scraping wunderground with python, without the API: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55306320/
