
python - BeautifulSoup crawl and extract text from in between


My HTML looks like this:

 <br><a href="/drink12xy569.html">Alien Suicide</a>
<br><a href="/drink792.html">All Jacked Up</a>
<br><a href="/drink3805.html">All Night Hunter</a>
<br><a href="/drink796.html">Alley Shooter</a>
<br><a href="/drink10013.html">Alligator Sperm</a>
<br><a href="/drink804.html">Almond Delight</a>
<br><a href="/drink11135.html">Almond Gravy</a>
<br><a href="/drink7519.html">Almond Joy #2</a>
<br><a href="/drinks1r2563.html">Almond Kiss</a>
<br><a href="/drink12xy578.html">Amaretto Pie</a>
<br><a href="/drink11144.html">Amaretto Sourball</a>
<br><a href="/drinkp15q144.html">Ambuco Cinnamon Shooter</a>
<br><a href="/drink835.html">Amenie Mama</a>
<br><a href="/drink7521.html">American Death</a>

I need help extracting the titles that sit between the <br> tags and printing them out. Then I need help writing this information, together with other information I have already extracted, to a text document that I can search through a GUI interface. I have the separate pieces of code and can combine them all at the end; I just need conceptual help.
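(For reference, a minimal sketch of the kind of extraction I mean, assuming bs4 is installed; the output file name drinks.txt is just an example:)

from bs4 import BeautifulSoup

html = '''
<br><a href="/drink12xy569.html">Alien Suicide</a>
<br><a href="/drink792.html">All Jacked Up</a>
'''

soup = BeautifulSoup(html, "html.parser")
with open("drinks.txt", "w") as out:          # example output file
    for a in soup.find_all("a"):
        title = a.get_text().strip()          # the title between the <br> tags
        out.write(title + "\n")
        print(title)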

My BeautifulSoup crawl looks like this:

import urllib2
from bs4 import BeautifulSoup

url = []
for i in range(28):
    url = "http://www.drinksmixer.com/cat/3/"
    page = urllib2.urlopen("http://www.drinksmixer.com/cat/3/")
    soup = BeautifulSoup(page.read())
    links = soup.find_all('a')

    for link in links:
        if "drink" in link['href']:
            print link['href']
            print "****\n\n"
            url = "http://drinksmixer.com" + link['href']
            page1 = urllib2.urlopen(url)
            soup1 = BeautifulSoup(page1.read())
            divs = soup1.find('div', {"class": "ingredients"})
            print divs.text.encode("utf-8")

My GUI interface looks like this:

import Tkinter
from Tkinter import *

def show_entry_fields():
    print("Shot Name: %s" % (e1.get()))

master = Tk()
Label(master, text="Shot Name").grid(row=0)

e1 = Entry(master)

e1.grid(row=0, column=1)

Button(master, text='Search', command=show_entry_fields).grid(row=3, column=1, sticky=W, pady=4)

mainloop()

I just need help implementing the search over the information I have extracted.
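(Conceptually, the search I have in mind is just a substring match against the saved titles; a rough sketch, assuming the titles were written one per line to the hypothetical drinks.txt from above:)

def search_saved(query, path="drinks.txt"):
    # case-insensitive substring match over the saved titles
    with open(path) as f:
        titles = [line.strip() for line in f]
    return [t for t in titles if query.lower() in t.lower()]

print(search_saved("Almond"))  # e.g. the Almond* drinks from the list above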

Best Answer

Designing a UI is not easy. Your code is almost fine. I split it into functions and added the basic search you asked for.

import urllib2
from bs4 import BeautifulSoup
import Tkinter
from Tkinter import *

e1 = None
links = []

def get_drinks():
    # collect the <a> tags from the 28 category pages once, up front
    global links
    for i in range(28):
        url = "http://www.drinksmixer.com/cat/3/" + str(i)
        page = urllib2.urlopen(url)
        soup = BeautifulSoup(page.read())
        links.extend(soup.find_all('a'))

def get_recipe(drink_name):
    # find the link whose text matches the drink name, then scrape its ingredients
    print drink_name
    for link in links:
        if "drink" in link['href'] and drink_name in link.contents:
            #print link['href']
            print "****\n\n"
            url = "http://drinksmixer.com" + link['href']
            page1 = urllib2.urlopen(url)
            soup1 = BeautifulSoup(page1.read())
            divs = soup1.find('div', {"class": "ingredients"})
            recipe = divs.text.encode("utf-8")
            return recipe

def show_entry_fields():
    drink_name = e1.get()
    print("Shot Name: %s" % drink_name)
    recipe = get_recipe(drink_name)
    print recipe  # or better yet, a popup:
    # tkMessageBox.showinfo(drink_name, recipe)

def main():
    global e1
    master = Tk()
    Label(master, text="Shot Name").grid(row=0)
    e1 = Entry(master)
    e1.grid(row=0, column=1)
    Button(master, text='Search', command=show_entry_fields).grid(row=3, column=1, sticky=W, pady=4)
    mainloop()

if __name__ == "__main__":
    get_drinks()
    main()
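(If, as the question mentions, each result should also end up in a searchable text document, the button callback could additionally append it to a file; a small sketch, with recipes.txt as a placeholder name:)

def save_result(drink_name, recipe, path="recipes.txt"):
    # append the drink name and its ingredients to a plain text file
    with open(path, "a") as out:
        out.write(drink_name + "\n" + recipe + "\n\n")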

Regarding python - BeautifulSoup crawl and extract text from in between <br>, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34163303/
