python - Why can't I scrape this link in Python?


I am trying to scrape the contents of a web page, but I don't understand why I am getting this error: http.client.IncompleteRead: IncompleteRead(2268 bytes read, 612 more expected)

Here is the link I am trying to scrape: www.rc2.vd.ch

Here is the Python code I am using to scrape it:

import requests
from bs4 import BeautifulSoup

def spider_list():
    url = 'http://www.rc2.vd.ch/registres/hrcintapp-pub/companySearch.action?lang=FR&init=false&advancedMode=false&printMode=false&ofpCriteria=N&actualDate=18.08.2015&rowMin=0&rowMax=0&listSize=0&go=none&showHeader=false&companyName=&companyNameSearchType=CONTAIN&companyOfsUid=&companyOfrcId13Part1=&companyOfrcId13Part2=&companyOfrcId13Part3=&limitResultCompanyActive=ACTIVE&searchRows=51&resultFormat=STD_COMP_NAME&display=Rechercher#result'

    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, 'html.parser')

    for link in soup.findAll('a', {'class': 'hoverable'}):
        print(link)

spider_list()

I have tried the same code with links to other websites and it works fine, so why can't I scrape this one?

If it can't be done with this code, how should I do it instead?

------------ EDIT ------------

Here is the full error message:

Traceback (most recent call last):
  File "C:/Users/Nuriddin/PycharmProjects/project/a.py", line 19, in <module>
    spider_list()
  File "C:/Users/Nuriddin/PycharmProjects/project/a.py", line 12, in spider_list
    source_code = requests.get(url)
  File "C:\Python34\lib\site-packages\requests\api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python34\lib\site-packages\requests\api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 605, in send
    r.content
  File "C:\Python34\lib\site-packages\requests\models.py", line 750, in content
    self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
  File "C:\Python34\lib\site-packages\requests\models.py", line 673, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\response.py", line 303, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\response.py", line 450, in read_chunked
    chunk = self._handle_chunk(amt)
  File "C:\Python34\lib\site-packages\requests\packages\urllib3\response.py", line 420, in _handle_chunk
    returned_chunk = self._fp._safe_read(self.chunk_left)
  File "C:\Python34\lib\http\client.py", line 664, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(4485 bytes read, 628 more expected)

Best Answer

The problem may be with your editor.

I ran your code in IDLE with Python 3 and got the correct result.

For reference, a screenshot of that output was attached in the original answer.

The only thing I can think of is to bypass the error in some way:

import requests
from bs4 import BeautifulSoup

def spider_list():
    url = 'http://www.rc2.vd.ch/registres/hrcintapp-pub/companySearch.action?lang=FR&init=false&advancedMode=false&printMode=false&ofpCriteria=N&actualDate=18.08.2015&rowMin=0&rowMax=0&listSize=0&go=none&showHeader=false&companyName=&companyNameSearchType=CONTAIN&companyOfsUid=&companyOfrcId13Part1=&companyOfrcId13Part2=&companyOfrcId13Part3=&limitResultCompanyActive=ACTIVE&searchRows=51&resultFormat=STD_COMP_NAME&display=Rechercher#result'
    try:
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')

        for link in soup.findAll('a', {'class': 'hoverable'}):
            print(link)
    except:
        pass
        # I am passing, but you can do whatever you want in case of an error

spider_list()
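If silently swallowing every exception is too broad, a narrower variant (a sketch, not part of the original answer; the timeout value and the UTF-8 fallback decoding are assumptions) is to catch only the failures this traceback can actually produce and, when possible, parse whatever bytes arrived before the connection broke. http.client.IncompleteRead keeps the received data in its partial attribute, while newer requests/urllib3 releases wrap the same failure in requests.exceptions.ChunkedEncodingError:

import http.client

import requests
from bs4 import BeautifulSoup

def spider_list():
    url = 'http://www.rc2.vd.ch/registres/hrcintapp-pub/companySearch.action?lang=FR&init=false&advancedMode=false&printMode=false&ofpCriteria=N&actualDate=18.08.2015&rowMin=0&rowMax=0&listSize=0&go=none&showHeader=false&companyName=&companyNameSearchType=CONTAIN&companyOfsUid=&companyOfrcId13Part1=&companyOfrcId13Part2=&companyOfrcId13Part3=&limitResultCompanyActive=ACTIVE&searchRows=51&resultFormat=STD_COMP_NAME&display=Rechercher#result'
    try:
        html = requests.get(url, timeout=30).text
    except (http.client.IncompleteRead, requests.exceptions.ChunkedEncodingError) as exc:
        # IncompleteRead keeps the bytes read so far in .partial;
        # ChunkedEncodingError does not, so re-raise in that case.
        partial = getattr(exc, 'partial', None)
        if not partial:
            raise
        # The encoding is a guess here; the page's charset is not inspected.
        html = partial.decode('utf-8', errors='replace')

    soup = BeautifulSoup(html, 'html.parser')
    for link in soup.findAll('a', {'class': 'hoverable'}):
        print(link)

spider_list()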

Let me know if this helps.
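Another workaround that is sometimes suggested for IncompleteRead, assuming the server is terminating or mis-reporting its chunked responses, is to downgrade the client to plain HTTP/1.0, which takes chunked transfer encoding out of the picture entirely. A minimal sketch:

import http.client

# Speak HTTP/1.0 instead of HTTP/1.1: the server then cannot reply with
# chunked transfer encoding, which is where the IncompleteRead above is
# raised. Note that this affects every HTTPConnection in the process.
http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'

With this applied before the request is made, the spider_list() from the question can be used unchanged; the trade-off is that the downgrade is process-wide rather than limited to this one site.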

Regarding python - Why can't I scrape this link in Python?, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/32073570/
