
python - Script to extract all images from a web page

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 21:15:36

I'm trying to extract all the images from a web page with the code below, but it raises the error "'NoneType' object has no attribute 'group'". Can someone tell me what's wrong here?

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]


for url in urls:
    filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            # sometimes an image source can be relative
            # if it is provide the base url which also happens
            # to be the site variable atm.
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)

Best answer

Edit: for context, since the original question was later updated by someone else and the original code was changed, the pattern the asker was originally using was r'/([\w_-]+.)$'. That was the original problem, and this context will make the answer below make more sense:

I used a pattern like r'/([\w_.-]+)$'. The pattern you were using did not allow the path to contain a . except as the very last character, because a . outside [] means "any character" and it sat immediately before $ (end of string). So I moved the . inside [], which means a literal . is allowed within the character class. That lets the pattern capture the image filename at the end of the URL.
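The difference between the two patterns can be seen directly with re.search on a sample image URL (the URL below is just an illustrative example):

```python
import re

url = 'http://pixabay.com/static/img/logo.png'

# Original pattern: the '.' outside [] matches any single character, and
# the character class [\w_-] cannot match the literal dot in "logo.png",
# so there is no match and re.search returns None.
print(re.search(r'/([\w_-]+.)$', url))           # None

# Corrected pattern: the literal '.' inside [] lets the group span the
# whole filename at the end of the URL.
print(re.search(r'/([\w_.-]+)$', url).group(1))  # logo.png
```

Calling .group(1) on the first result is exactly what produced the "'NoneType' object has no attribute 'group'" error in the question.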

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    filename = re.search(r'/([\w_.-]+)$', url)
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            # sometimes an image source can be relative
            # if it is provide the base url which also happens
            # to be the site variable atm.
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)
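Note that even with the corrected pattern, re.search can still return None for src values that don't end in a filename (e.g. data: URIs), and naive string concatenation mishandles absolute srcs. A defensive sketch, not part of the accepted answer, that collects (filename, absolute URL) pairs and skips non-matches, using the standard library's urljoin:

```python
import re
from urllib.parse import urljoin

def image_targets(site, srcs):
    """Map <img> src values to (filename, absolute_url) pairs.

    Entries whose src does not end in a plausible filename (e.g. data:
    URIs) are skipped instead of raising AttributeError, and relative
    paths are resolved against the base URL with urljoin rather than
    string concatenation.
    """
    targets = []
    for src in srcs:
        match = re.search(r'/([\w_.-]+)$', src)
        if match is None:
            continue  # e.g. a data: URI or a src ending in '/'
        targets.append((match.group(1), urljoin(site, src)))
    return targets
```

The download loop would then only open a file after a successful match, so a single odd src value no longer aborts the whole script.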

Regarding "python - Script to extract all images from a web page", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54738342/
