
python - BeautifulSoup - How to extract an email from a website?


I am trying to extract some information from a website, but I don't know how to scrape the email address.

This code works for me:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup

url = "https://www.eurocham-cambodia.org/member/476/2-LEau-Protection"
uClient = uReq(url)
page_html = uClient.read()
uClient.close()
soup = BeautifulSoup(page_html,"lxml")

members = soup.findAll("b")
for member in members:
    member = members[0].text
    print(member)

I wanted to extract the phone number and the link with soup.findAll(), but I couldn't find the right way to get the text, so I used the SelectorGadget tool and tried this:

numbers = soup.select("#content li:nth-child(1)")
for number in numbers:
    number = numbers[0].text
    print(number)

links = soup.select(".icon-globe+ a")
for link in links:
    link = links[0].text
    print(link)

This prints correctly:

2 L'Eau Protection
(+33) 02 98 19 43 86
http://www.2leau-protection.com/

Now, I am having trouble extracting the email address. I am new to this, so any suggestion would be greatly appreciated, thank you!

Attempt 1

emails = soup.select("#content li:nth-child(2)")
for email in emails:
    email = emails[0].text
    print(email)

I don't even know what it is printing here:

//<![CDATA[
var l=new Array();
l[0]='>';l[1]='a';l[2]='/';l[3]='<';l[4]='|109';l[5]='|111';l[6]='|99';l[7]='|46';l[8]='|110';l[9]='|111';l[10]='|105';l[11]='|116';l[12]='|99';l[13]='|101';l[14]='|116';l[15]='|111';l[16]='|114';l[17]='|112';l[18]='|45';l[19]='|117';l[20]='|97';l[21]='|101';l[22]='|108';l[23]='|50';l[24]='|64';l[25]='|110';l[26]='|111';l[27]='|105';l[28]='|116';l[29]='|97';l[30]='|109';l[31]='|114';l[32]='|111';l[33]='|102';l[34]='|110';l[35]='|105';l[36]='|32';l[37]='>';l[38]='"';l[39]='|109';l[40]='|111';l[41]='|99';l[42]='|46';l[43]='|110';l[44]='|111';l[45]='|105';l[46]='|116';l[47]='|99';l[48]='|101';l[49]='|116';l[50]='|111';l[51]='|114';l[52]='|112';l[53]='|45';l[54]='|117';l[55]='|97';l[56]='|101';l[57]='|108';l[58]='|50';l[59]='|64';l[60]='|110';l[61]='|111';l[62]='|105';l[63]='|116';l[64]='|97';l[65]='|109';l[66]='|114';l[67]='|111';l[68]='|102';l[69]='|110';l[70]='|105';l[71]='|32';l[72]=':';l[73]='o';l[74]='t';l[75]='l';l[76]='i';l[77]='a';l[78]='m';l[79]='"';l[80]='=';l[81]='f';l[82]='e';l[83]='r';l[84]='h';l[85]=' ';l[86]='a';l[87]='<';
for (var i = l.length-1; i >= 0; i=i-1){
if (l[i].substring(0, 1) == '|') document.write("&#"+unescape(l[i].substring(1))+";");
else document.write(unescape(l[i]));}
//]]>
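As far as I can tell, this is the script the site uses to hide the address: when the page runs in a browser it walks the array l backwards, turns the entries that start with | into their ASCII characters, and document.write()s a mailto: link into place. So the email never appears in the raw HTML that urlopen downloads, which would explain why the attempts below find nothing.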

Attempt 2

emails = soup.select(".icon-mail~ a")  # follow the same logic
for email in emails:
    email = emails[0].text
print(email)

Error:

NameError: name 'email' is not defined
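I guess this is because the selector matches nothing, so the loop body never runs, email is never assigned, and the final print(email) outside the loop fails.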

Attempt 3

emails = soup.select(".icon-mail~ a")
print(emails)

This just prints an empty list:

[]

Attempts 4, 5, and 6

email = soup.find("a", {"href": "mailto:"})     # prints "None"

email = soup.findAll("a", {"href": "mailto:"})  # prints an empty list "[]"

email = soup.select("a", {"href": "mailto:"})   # prints a lot of information, but not what I need
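A note on attempts 4 and 5: find and findAll with {"href": "mailto:"} only match anchors whose href is exactly the string "mailto:". A minimal sketch of a prefix match instead, reusing the soup object from the first snippet and relying on bs4's regex filter for attributes (this variant is not one of the original attempts):

import re

# match any anchor whose href merely starts with "mailto:"
mailto_links = soup.findAll("a", href=re.compile(r"^mailto:"))
print(mailto_links)

On this page it still prints an empty list, because the mailto anchor only exists after the obfuscation script runs in a browser.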

Best answer

I see you already have a perfectly acceptable answer, but when I saw that obfuscation script I was fascinated and just had to "de-obfuscate" it.

from bs4 import BeautifulSoup
from requests import get
import re

page = "https://www.eurocham-cambodia.org/member/476/2-LEau-Protection"

content = get(page).content
soup = BeautifulSoup(content, "lxml")

exp = re.compile(r"(?:.*?='(.*?)')")
# Find any element with the mail icon
for icon in soup.findAll("i", {"class": "icon-mail"}):
    # the 'a' element doesn't exist, there is a script tag instead
    script = icon.next_sibling
    # the script tag builds a long array of single characters - let's grab them with a regex
    chars = exp.findall(script.text)
    output = []
    # the javascript array is iterated backwards
    for char in reversed(list(chars)):
        # many characters use their ascii representation instead of simple text
        if char.startswith("|"):
            output.append(chr(int(char[1:])))
        else:
            output.append(char)
    # putting the array back together gets us an `a` element
    link = BeautifulSoup("".join(output), "lxml")
    # the email is the part of the href after `mailto: `
    email = link.findAll("a")[0]["href"][8:]
    print(email)
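For the array shown in the question, joining the reversed characters reconstructs roughly <a href="mailto: information@2leau-protection.com"> information@2leau-protection.com</a>, so this script should print information@2leau-protection.com; the [8:] slice simply drops the leading "mailto: " (eight characters, space included) from the href.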

Regarding python - BeautifulSoup - How to extract an email from a website?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57944130/
