
python - How to improve aiohttp crawler speed?

Reposted · Author: 行者123 · Updated: 2023-12-05 07:05:36

import time
import asyncio

import aiohttp
from bs4 import BeautifulSoup
from xlwt import Workbook

url_list = ["https://www.facebook.com", "https://www.baidu.com", "https://www.yahoo.com", ...]
# There are more than 20000 different websites in the list.
# Some websites may not be accessible.
keywords = ['xxx', 'xxx', ...]

start = time.time()
localtime = time.asctime(time.localtime(time.time()))
print("start time :", localtime)

choose_url = []
url_title = []

async def get(url, session):
    try:
        # timeout=0 effectively disables the per-request timeout.
        async with session.get(url=url, timeout=0) as response:
            resp = await response.text()
            soup = BeautifulSoup(resp, "lxml")
            title = soup.find("title").text.strip()
            for keyword in keywords:
                if keyword in title:
                    choose_url.append(url)
                    url_title.append(title)
                    print("Successfully got url {} with resp's name {}.".format(url, title))
                    break
    except Exception:
        # Unreachable or malformed sites are silently skipped.
        pass

async def main(urls):
    connector = aiohttp.TCPConnector(ssl=False, limit=0, limit_per_host=0)
    async with aiohttp.ClientSession(connector=connector) as session:
        ret = await asyncio.gather(*[get(url, session) for url in urls])
    print("Finalized all. Return is a list of outputs.")

def write_excel(choose_url, url_title):
    # Minimal implementation: write the matched URLs and titles to an .xls file.
    book = Workbook()
    sheet = book.add_sheet('result')
    for row, (u, t) in enumerate(zip(choose_url, url_title)):
        sheet.write(row, 0, u)
        sheet.write(row, 1, t)
    book.save('result.xls')

asyncio.run(main(url_list))
write_excel(choose_url, url_title)

localtime = time.asctime(time.localtime(time.time()))
print("now time is :", localtime)
end = time.time()
print('time used:', end - start)

I have 20,000 URLs to request, but it takes a long time (more than 4 or 5 hours). With requests + multiprocessing (a Pool of 4), it takes about 3 hours.

I tried aiohttp + multiprocessing, and it didn't seem to work. Can this code be made as fast as possible, either by optimizing it or by using any available technique? Thanks.
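One commonly suggested fix (not part of the original post) is to bound the number of in-flight requests and use a finite per-request timeout: an unbounded asyncio.gather over 20,000 URLs can overwhelm DNS and socket resources, and with timeout=0 (no timeout) a single unresponsive host can hold the run open for hours. Below is a minimal sketch under those assumptions; the names fetch and CONCURRENCY and the limits chosen are illustrative, not from the question.

import asyncio
import aiohttp

CONCURRENCY = 500                          # illustrative cap; tune for your machine/network
TIMEOUT = aiohttp.ClientTimeout(total=15)  # fail fast instead of waiting forever

async def fetch(url, session, sem):
    async with sem:                        # at most CONCURRENCY requests in flight
        try:
            async with session.get(url) as response:
                return url, await response.text()
        except Exception:
            return url, None               # unreachable sites just yield None

async def main(urls):
    sem = asyncio.Semaphore(CONCURRENCY)
    connector = aiohttp.TCPConnector(ssl=False, limit=0)
    async with aiohttp.ClientSession(connector=connector, timeout=TIMEOUT) as session:
        return await asyncio.gather(*(fetch(u, session, sem) for u in urls))

results = asyncio.run(main(url_list))

Note also that the BeautifulSoup parsing in the original code runs on the event loop and is CPU-bound, so it can become the bottleneck once the network side is fast enough.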

Best Answer

I don't know whether the following approach is faster:

import time
from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain, utils

class MySpider(Spider):
    name = 'demo_spider'
    start_urls = ["https://www.facebook.com", "https://www.baidu.com", "https://www.yahoo.com"]  # Entry pages
    keywords = ['xxx', 'xxx']
    choose_url = []
    url_title = []
    concurrencyPer1s = 10

    def extract(self, url, html, models, modelNames):
        doc = SimplifiedDoc(html)
        title = doc.title
        if title.containsOr(self.keywords):
            self.choose_url.append(url.url)
            self.url_title.append(title.text)
            print("Successfully got url {} with resp's name {}.".format(url, title.text))

    def urlCount(self):
        count = Spider.urlCount(self)
        if count == 0:
            SimplifiedMain.setRunFlag(False)
        return count

start = time.time()
localtime = time.asctime(time.localtime(time.time()))
print("start time :", localtime)
SimplifiedMain.startThread(MySpider(),{"concurrency":600, "concurrencyPer1S":100, "intervalTime":0.001, "max_workers":10}) # Start download
localtime = time.asctime(time.localtime(time.time()))
print("now time is :", localtime)
end = time.time()
print('time used:', end - start)
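Separate from the answer above: since the question says aiohttp + multiprocessing "didn't seem to work", one workable division of labor is to keep downloads on the event loop and push only the CPU-bound HTML parsing into a process pool via run_in_executor. A minimal sketch under that assumption; parse_title is a hypothetical helper, not part of either snippet above.

import asyncio
from concurrent.futures import ProcessPoolExecutor
from bs4 import BeautifulSoup

def parse_title(html):
    # Runs in a worker process; BeautifulSoup parsing is CPU-bound.
    soup = BeautifulSoup(html, "lxml")
    tag = soup.find("title")
    return tag.text.strip() if tag else ""

async def parse_all(html_bodies):
    # html_bodies: the response texts already collected by the aiohttp side.
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=4) as pool:
        tasks = [loop.run_in_executor(pool, parse_title, h) for h in html_bodies]
        return await asyncio.gather(*tasks)

# Note: on platforms that spawn worker processes (Windows, macOS), call this
# under an `if __name__ == "__main__":` guard.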

Regarding "python - How to improve aiohttp crawler speed?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62688012/
