I have written a script in Python to scrape the names, addresses and phone numbers of different restaurants from a webpage's landing page, and to parse the author and review from each restaurant's inner page.

I would like to generate results using yield within the get_additional_info(link) function but print them within the get_links(link) function together with the other results.

This is what I've written so far:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=San+Francisco%2C+CA"
base = "https://www.yellowpages.com"

def get_links(link):
    res = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select(".v-card"):
        inner_link = item.select_one("a.business-name")
        author, review = get_additional_info(urljoin(base, inner_link.get('href')))
        title = inner_link.text
        address = item.select_one("p.adr").get_text(strip=True)
        phone = item.select_one(".phone").text
        yield title, address, phone, author, review

def get_additional_info(link):
    res = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for elem in soup.select("article[class='clearfix']"):
        try:
            author = elem.select_one(".review-info a.author").text
        except AttributeError:
            author = ""
        try:
            review = elem.select_one(".review-response > p").text
        except AttributeError:
            review = ""
        yield author, review

if __name__ == '__main__':
    for item in get_links(url):
        print(item)
If I run the above script, it throws the following error, pointing at the author, review = get_additional_info(urljoin(base,inner_link.get('href'))) line:
Traceback (most recent call last):
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\demo.py", line 36, in <module>
    for item in get_links(url):
  File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\demo.py", line 14, in get_links
    author,review = get_additional_info(urljoin(base,inner_link.get('href')))
ValueError: too many values to unpack (expected 2)
The selectors for all the fields I wish to scrape are already defined correctly.

This is the output I'm after:

PS I wish to stick to the approach I've already tried, meaning I do not want to parse everything from the inner pages, as the rest of the data there is useless to me.
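The error can be reproduced in isolation: get_additional_info is a generator function, so calling it returns a generator object, and the tuple assignment author, review = ... then tries to unpack the generator's *items* (all the yielded pairs) rather than a single (author, review) pair. A minimal sketch with dummy data (names are illustrative):

```python
def pairs():
    """Stand-in for get_additional_info: yields several (author, review) tuples."""
    yield "Mark I.", "Great food"
    yield "Cathy L.", "My go-to place"
    yield "Mary C.", "Worth going"

# Unpacking the generator itself consumes all yielded items at once,
# which fails as soon as there are more than two of them:
try:
    author, review = pairs()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Iterating instead unpacks each yielded pair in turn:
for author, review in pairs():
    print(author, review)
```

This is why the fix below replaces the single assignment with a for loop over the inner generator.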
Best answer

If I understand you correctly, you want to "join" the links with the additional info. One way to do it is this:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from textwrap import shorten

url = "https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=San+Francisco%2C+CA"
base = "https://www.yellowpages.com"

def get_links(session, link):
    res = session.get(link, headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select(".v-card"):
        inner_link = item.select_one("a.business-name")
        title = inner_link.text
        address = item.select_one("p.adr").get_text(strip=True)
        phone = item.select_one(".phone").text
        for author, review in get_additional_info(session, urljoin(base, inner_link.get('href'))):
            yield title, address, phone, author, review

def get_additional_info(session, link):
    res = session.get(link, headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'})
    soup = BeautifulSoup(res.text, "lxml")
    for elem in soup.select("article[class='clearfix']"):
        try:
            author = elem.select_one(".review-info a.author").text
        except AttributeError:
            author = ""
        try:
            review = elem.select_one(".review-response > p").text
        except AttributeError:
            review = ""
        yield author, review

if __name__ == '__main__':
    with requests.session() as s:
        # this sets all cookies
        res = s.get("https://www.yellowpages.com", headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'}).text
        for title, address, phone, author, review in get_links(s, url):
            print('{: <30}{: <30}{: <20}{: <20}{}'.format(shorten(title, 30), shorten(address, 30), shorten(phone, 20), shorten(author, 20), shorten(review, 60)))
Prints:
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Mark I. Their food is good but i think they need to improve on [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Cathy L. This place is pretty much my go to place is I want [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Mary C. They have so many things in here worth going in here [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Claude R. The appetizers in here are enough to make you ask for [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Felicia M. How can this be? This place looks like magic and their [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Jose H. I feel like I just got from Mexico, we went here last [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Authentic Mexican. Always busy and the house salsa is [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I'm disappointed. The decor is ecclectic and fun, the [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 This used to be one of my favorite restaurants until I [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I came to this restarnt for a birthday of a friend of [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 The reviews here, which I consulted before going, were [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 I have been told to give it a try.Food is on [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Great food... love the empalmada... sort of like a [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Definitely the best Mexican restaurant in town!... [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 This place has been consistenly good for a few years. [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 So-so Mexican food served by a vaguely condescending, [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 since the place is small, it gets crowded quickly and [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 Go early if you don't want to wait. They don't take [...]
El Toreador Restaurant 50 W Portal Ave, San [...] (415) 347-3294 A great place where you belong like part of the [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Keith Y. Loved this place. Food and service was amazing
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Quintrell P. Was really hungry and needed a place to get some [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Len K. I'm not usually a fan of red meat, but I'm definitely [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Emm C. I haven't been able to see San Francisco, one of my [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 James O. For me, it`s one of the best ribs in town, I give [...]
House Of Prime Rib 1906 Van Ness Ave, San [...] (415) 636-6476 Jing H. This is one of the best places if you are craving for [...]
...etc.
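The nested-loop pattern the answer relies on generalizes beyond scraping: an outer generator repeats its own fields once for every item an inner generator produces, effectively a one-to-many join streamed lazily. A minimal sketch with dummy data (function and key names are illustrative, not from the original code):

```python
def inner(key):
    # Stand-in for get_additional_info: several child rows per parent.
    for i in range(2):
        yield f"{key}-review-{i}"

def outer(keys):
    # Stand-in for get_links: repeat the parent's fields
    # alongside each child row the inner generator yields.
    for key in keys:
        for review in inner(key):
            yield key, review

print(list(outer(["A", "B"])))
# [('A', 'A-review-0'), ('A', 'A-review-1'), ('B', 'B-review-0'), ('B', 'B-review-1')]
```

Because both levels are generators, nothing is buffered: each joined row is produced on demand, which matters when the inner pages are fetched over the network.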
A similar question about "python - Can't scrape different fields from two different depths at the same time" can be found on Stack Overflow: https://stackoverflow.com/questions/57631564/