
python - Strange error with the Pool module and Beautiful Soup: Invalid URL 'h'

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 20:46:38

I am using Beautiful Soup to scrape a very large website for a project, and I want to use the Pool module to speed it up. I am getting a strange error: the list of URLs is not being read correctly, and as far as I can tell it only picks up the first 'h'.

If I don't use a Pool, the whole thing works perfectly and the URL list is read correctly. I'm not sure whether something odd happens to the URLs when I call p.map(scrapeClauses, links), because if I just call scrapeClauses(links) everything works fine.

Here is my main function:

if __name__ == '__main__':
    links = list()
    og = 'https://www.lawinsider.com'
    halflink = '/clause/limitation-of-liability'
    link = og + halflink
    links.append(link)
    i = 0
    while i < 50:
        try:
            nextLink = generateNextLink(link)
            links.append(nextLink)
            link = nextLink
            i += 1
        except:
            print('Only ', i, 'links found')
            i = 50
    start_time = time.time()
    print(links[0])
    p = Pool(5)
    p.map(scrapeClauses, links)
    p.terminate()
    p.join()
    #scrapeClauses(links)

And here is scrapeClauses():

def scrapeClauses(links):
    # header to avoid the site detecting the scraper
    headers = requests.utils.default_headers()
    headers.update({
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
    })
    # list of clauses
    allText = []
    number = 0
    for line in links:
        page_link = line
        print(page_link)
        page_response = requests.get(page_link, headers=headers)
        html_soup = BeautifulSoup(page_response.content, "html.parser")
        assignments = html_soup.find_all('div', class_='snippet-content')
        for i in range(len(assignments)):
            assignments[i] = assignments[i].get_text()
            # option to remove the assignment that precedes each clause
            #assignments[i] = assignments[i].replace('Assignment.','',1)
            allText.append(assignments[i])
            # change the index of the name of the word doc
            name = 'limitationOfLiability' + str(number) + '.docx'
            # some clauses have special characters that produce an error
            try:
                document = Document()
                stuff = assignments[i]
                document.add_paragraph(stuff)
                document.save(name)
                number += 1
            except:
                continue

To save space I haven't included generateNextLink(), since I'm fairly sure the error isn't there, but I'll post it if anyone thinks otherwise.

As you can see, I print(page_link) in scrapeClauses. If I don't use a Pool, it prints all the links normally. But if I use a Pool, a bunch of h's get printed line by line, and then I get an error that 'h' is not a valid URL. The error output is shown below.

https://www.lawinsider.com/clause/limitation-of-liability
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
h
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "C:\Users\wquinn\Web Scraping\assignmentBSScraper.py", line 20, in scrapeClauses
    page_response = requests.get(page_link, headers=headers)
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 519, in request
    prep = self.prepare_request(req)
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 462, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 313, in prepare
    self.prepare_url(url, params)
  File "C:\Users\wquinn\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 387, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL 'h': No schema supplied. Perhaps you meant http://h?

Best Answer

The second argument to p.map is a list, and each element of that list is sent to the function in a separate call. So your function receives a single string, not the list of strings you expected — and the for loop in scrapeClauses then iterates over the characters of that string, starting with 'h'.
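You can see why only 'h' shows up with a small standalone check — iterating a string in a for loop yields its characters one at a time, so the first thing handed to requests.get is the single character 'h':

```python
# Each Pool worker receives a single URL string, and the "for line in links"
# loop in scrapeClauses then iterates over that string character by character.
link = 'https://www.lawinsider.com/clause/limitation-of-liability'
chars = [c for c in link][:5]
print(chars)  # ['h', 't', 't', 'p', 's'] - that first 'h' is what requests rejects
```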

A minimal example:

from multiprocessing import Pool

def f(str_list):
    for x in str_list:
        print('hello {}'.format(x))

if __name__ == '__main__':
    str_list = ['111', '2', '33']
    p = Pool(5)
    p.map(f, str_list)
    p.terminate()
    p.join()

The output is:

hello 1
hello 1
hello 1
hello 2
hello 3
hello 3

Regarding python - Strange error with the Pool module and Beautiful Soup: Invalid URL 'h', a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56546468/
