
python - Scraping a list of companies on LinkedIn using Python (on a Mac) - defaults to retry or a <999> error

Reposted · Author: 行者123 · Updated: 2023-12-03 08:24:08

I'm new to this, and I'm trying to automatically extract details from every company page on LinkedIn.

I'm adapting a piece of code I found, but it never gets past the requests.get call: when I pass the headers as an argument, my output immediately falls through to the retry branch. When I leave the headers out, I actually get a <999> response.

Any ideas on how to make progress here? How can I get around the 999 error, and, when the headers are included and the program immediately falls back to retrying, how can I work out what went wrong?

from lxml import html
import csv, os, json
import requests
from time import sleep
import certifi
import urllib3
urllib3.disable_warnings()


def linkedin_companies_parser(url):
    for i in range(5):
        try:
            print("looking at the headers")
            headers = {
                "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
                "accept-encoding": "gzip, deflate, sdch, br",
                "accept-language": "en-US,en;q=0.8,ms;q=0.6",
                "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
            }

            print("Fetching :", url)
            response = requests.get(url, headers=headers, verify=False)
            print(response)
            # Use response.text (str) rather than response.content (bytes)
            # so the replace() calls work under Python 3.
            formatted_response = response.text.replace('<!--', '').replace('-->', '')
            doc = html.fromstring(formatted_response)

            datafrom_xpath = doc.xpath('//code[@id="stream-promo-top-bar-embed-id-content"]//text()')
            # Fallback source for the embedded JSON; currently unused.
            content_about = doc.xpath('//code[@id="stream-about-section-embed-id-content"]')
            if not content_about:
                content_about = doc.xpath('//code[@id="stream-footer-embed-id-content"]')

            if datafrom_xpath:
                try:
                    json_formatted_data = json.loads(datafrom_xpath[0])

                    company_name = json_formatted_data.get('companyName')
                    size = json_formatted_data.get('size')
                    industry = json_formatted_data.get('industry')
                    description = json_formatted_data.get('description')
                    follower_count = json_formatted_data.get('followerCount')
                    year_founded = json_formatted_data.get('yearFounded')
                    website = json_formatted_data.get('website')
                    # Avoid shadowing the built-ins `type` and `zip`.
                    company_type = json_formatted_data.get('companyType')
                    specialities = json_formatted_data.get('specialties')

                    headquarters = json_formatted_data.get('headquarters', {})
                    city = headquarters.get('city')
                    country = headquarters.get('country')
                    state = headquarters.get('state')
                    street1 = headquarters.get('street1')
                    street2 = headquarters.get('street2')
                    zip_code = headquarters.get('zip')
                    # Join only the street parts that are present, so a
                    # missing street2 no longer raises a TypeError.
                    street = ', '.join(p for p in (street1, street2) if p) or None

                    data = {
                        'company_name': company_name,
                        'size': size,
                        'industry': industry,
                        'description': description,
                        'follower_count': follower_count,
                        'founded': year_founded,
                        'website': website,
                        'type': company_type,
                        'specialities': specialities,
                        'city': city,
                        'country': country,
                        'state': state,
                        'street': street,
                        'zip': zip_code,
                        'url': url,
                    }
                    return data
                except Exception:
                    print("cant parse page", url)

            # Retry in case of captcha or login page redirection
            if len(response.content) < 2000 or "trk=login_reg_redirect" in url:
                if response.status_code == 404:
                    print("linkedin page not found")
                else:
                    raise ValueError('redirecting to login page or captcha found')
        except Exception:
            print("retrying :", url)


def readurls():
    companyurls = ['https://www.linkedin.com/company/tata-consultancy-services']
    extracted_data = []
    for url in companyurls:
        extracted_data.append(linkedin_companies_parser(url))
    with open('data.json', 'w') as f:
        json.dump(extracted_data, f, indent=4)


if __name__ == "__main__":
    readurls()

Best Answer

A status code of 999 from LinkedIn usually means access was denied because of bot activity or some other security reason.
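Even if the refusal can't be avoided entirely, the script can at least detect it and back off instead of retrying immediately. Below is a minimal sketch: the set of status codes checked and the doubling backoff schedule are assumptions for illustration, not anything LinkedIn documents, and the `get` parameter exists only so the retry logic can be exercised without real network calls.

```python
import time
import requests

def fetch_with_backoff(url, headers=None, max_attempts=5, base_delay=2, get=requests.get):
    """GET a URL, doubling the wait after every refused attempt.

    999 is LinkedIn's non-standard refusal code; 429 and 503 are the
    usual throttling responses. `get` defaults to requests.get but can
    be swapped for a stub in tests.
    """
    delay = base_delay
    response = None
    for attempt in range(max_attempts):
        response = get(url, headers=headers, timeout=10)
        if response.status_code not in (999, 429, 503):
            return response
        print("got %s, sleeping %ss before retrying" % (response.status_code, delay))
        time.sleep(delay)
        delay *= 2
    return response  # still refused after all attempts
```

This keeps the retry policy in one place instead of scattering sleeps through the parser, though a persistent 999 usually means the IP or client fingerprint is blocked and no amount of backoff will help.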

Your best bet is to mimic a real user by driving Chrome or Firefox in headless mode and crawling the pages that way. That removes the need to set cookies or pass headers manually, which saves a lot of time.

You can use Selenium with Python to automate the browser navigation and scraping.

PS: Make sure you are not running the program from AWS or another popular hosting IP, since those IP ranges are blocked by LinkedIn for unauthenticated sessions.
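A headless-Chrome version of the fetch step can be sketched as below. It assumes `pip install selenium` and a matching chromedriver on PATH; the `//h1` XPath is a placeholder, since LinkedIn's markup changes often and you would need to inspect the live page to pick real selectors. The parsing is kept in a separate helper so it works on any saved HTML without a browser.

```python
from lxml import html

def extract_company_name(page_source):
    """Pull the first <h1> text out of rendered page HTML.

    The XPath here is an assumption for illustration -- adjust it to
    whatever the live page actually uses.
    """
    doc = html.fromstring(page_source)
    names = doc.xpath('//h1//text()')
    return names[0].strip() if names else None

def scrape_company(url):
    # Imported here so the parsing helper above stays usable
    # even when Selenium is not installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")  # run Chrome without a window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        # driver.page_source is the DOM after JavaScript has run,
        # which is what the plain requests approach never sees.
        return extract_company_name(driver.page_source)
    finally:
        driver.quit()
```

Because the browser executes LinkedIn's JavaScript and carries a realistic fingerprint, this sidesteps much of what triggers the 999 response, though logged-out scraping can still hit redirects to the login wall.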

Regarding "python - Scraping a list of companies on LinkedIn using Python (on a Mac) - defaults to retry or a <999> error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47552417/
