
python - Dumping scraped data into a MySQL database


Smart people, I hope you can help me, or at least point me in the right direction. An old colleague of mine built a web scraper that we used to scrape data from a list of URLs stored in a CSV file and dump the results into a CSV output file. Essentially, I am scraping one table from each URL.

However, instead of saving the results to a CSV file, I would like to change the code to push the data into a MySQL database. As a separate step, I have already managed to push data into a test database on my local server, but I am not sure how to combine the two processes (scrape the data, then send it to the DB).

I have included the code below. For background, I am a fairly new Python user.

from selenium import webdriver
import csv
import datetime


def scrape_url(url, rows=15):
    """Scrape the stats table at the given URL and append each row to the
    pre-defined results list (and to the output CSV)."""
    driver.get(url)
    for r in range(3, int(rows) + 1):
        try:
            name_path = '//*[@id="xxx_id"]'
            name = driver.find_element_by_xpath(name_path).text

            gamedate_path = '//*[@id="xxxx_div"]/table/tbody/tr/td/table/tbody/tr[' + str(r) + ']/td[1]'
            gamedate = driver.find_element_by_xpath(gamedate_path).text

            opponent_path = '//*[@id="xxxx_div"]/table/tbody/tr/td/table/tbody/tr[' + str(r) + ']/td[2]/a'
            opponent = driver.find_element_by_xpath(opponent_path).text

            result_path = '//*[@id="xxxx_div"]/table/tbody/tr/td/table/tbody/tr[' + str(r) + ']/td[3]/a'
            result = driver.find_element_by_xpath(result_path).text

            row_text = [gamedate, opponent, result, name, url]
            results.append(row_text)

            # Append the row to the same output CSV the header was written to
            with open('results ' + start_time.strftime('%Y-%m-%d %H-%M') + '.csv', 'a', newline='') as fd:
                writer = csv.writer(fd)
                writer.writerow(row_text)

        except Exception:
            # Rows that don't match the expected table layout are skipped
            pass


if __name__ == '__main__':
    # Locate the chrome webdriver
    driver = webdriver.Chrome(executable_path=r'/chromedriver.exe')
    results = []
    start_time = datetime.datetime.now()

    # Read in the set of URLs that will later be passed to scrape_url()
    urls = []
    with open(r'/urls.csv', 'r') as f:
        for line in f:
            urls.append(line.strip())  # strip the trailing newline

    # Write the CSV header once, before any rows are appended
    with open('results ' + start_time.strftime('%Y-%m-%d %H-%M') + '.csv', 'a', newline='') as fd:
        writer = csv.writer(fd)
        writer.writerow(['gamedate', 'opponent', 'result', 'name', 'url'])

    # Feed the urls to the scrape_url() function, printing progress as we go
    for i, u in enumerate(urls):
        scrape_url(u, 40)
        elapsed = (datetime.datetime.now() - start_time).seconds / 60
        expected_total = elapsed / (i + 1) * len(urls)
        print('**** Analyzed {}% or {} out of {}, runtime so far: {} minutes, '
              'expected time left: {} minutes ****'.format(
                  round((i + 1) / len(urls) * 100, 1), i + 1, len(urls),
                  round(elapsed, 1), round(expected_total - elapsed, 1)))

    print('*** Entire list of {} urls scraped, total process took {} minutes'.format(
        len(urls), round((datetime.datetime.now() - start_time).seconds / 60, 1)))
    driver.close()

Best answer

Your question is not entirely clear, so I will assume you want to know whether it is possible to do this in the same script.

There are two ways to do this. One is the long way: put all of the code in a single script. That means doing the following manually: import a database library (the sqlite3 module for SQLite, or for MySQL a driver such as mysql-connector-python), create a connection and a cursor, and use a for loop with the execute command to insert the values into the db, as sketched below.
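A minimal sketch of that long way, assuming the mysql-connector-python driver, a local MySQL server, and an existing table named games whose columns match the scraped fields; the table name, credentials, and database name below are placeholders, not taken from the original post:

import mysql.connector

# Placeholder connection details -- substitute your own server,
# credentials, and database name.
conn = mysql.connector.connect(
    host='localhost',
    user='scrape_user',
    password='secret',
    database='scrape_db'
)
cursor = conn.cursor()

insert_sql = (
    "INSERT INTO games (gamedate, opponent, result, name, url) "
    "VALUES (%s, %s, %s, %s, %s)"
)

# `results` is the list of rows built by scrape_url(), where each row
# is [gamedate, opponent, result, name, url].
for row in results:
    cursor.execute(insert_sql, row)

conn.commit()  # make the inserts permanent
cursor.close()
conn.close()

Calling cursor.executemany(insert_sql, results) would do the same work in one call; either way, this block simply takes the place of the CSV-writing step at the end of the main script.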
The second way is to learn scrapy, a great Python module that manages all of this very efficiently.
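For comparison, here is a minimal sketch of how the same inserts look as a scrapy item pipeline, under the same assumptions as above (mysql-connector-python driver, placeholder credentials, an existing games table); the class name and item fields are hypothetical:

import mysql.connector

class MySQLPipeline:
    """Write every scraped item to MySQL as the spider yields it."""

    def open_spider(self, spider):
        # Placeholder connection details -- substitute your own.
        self.conn = mysql.connector.connect(
            host='localhost',
            user='scrape_user',
            password='secret',
            database='scrape_db'
        )
        self.cursor = self.conn.cursor()

    def close_spider(self, spider):
        # Commit once at the end and tidy up the connection.
        self.conn.commit()
        self.cursor.close()
        self.conn.close()

    def process_item(self, item, spider):
        self.cursor.execute(
            "INSERT INTO games (gamedate, opponent, result, name, url) "
            "VALUES (%s, %s, %s, %s, %s)",
            (item['gamedate'], item['opponent'], item['result'],
             item['name'], item['url'])
        )
        return item

The pipeline is switched on through the ITEM_PIPELINES setting in settings.py, for example {'myproject.pipelines.MySQLPipeline': 300} where 'myproject' is a placeholder project name, and scrapy then calls process_item() once for every item the spider yields.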
Hope this helps.

Regarding python - Dumping scraped data into a MySQL database, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57876445/
