
python - How can I optimize the performance of an image comparison script?


I've written a script that compares a large number of images (more than 4,500 files) against each other using a root-mean-square comparison. First it resizes every image to 800x600 and computes its histogram. After that it builds an array of combinations and distributes them evenly across four threads, which calculate the RMS for each pair. Images with an RMS below 500 are moved into folders to be sorted through manually later.

#!/usr/bin/python3

import sys
import os
import math
import operator
import functools
import datetime
import threading
import queue
import itertools
from PIL import Image


def calc_rms(hist1, hist2):
    return math.sqrt(
        functools.reduce(operator.add, map(
            lambda a, b: (a - b) ** 2, hist1, hist2
        )) / len(hist1)
    )


def make_histogram(imgs, path, qout):
    for img in imgs:
        try:
            tmp = Image.open(os.path.join(path, img))
            tmp = tmp.resize((800, 600), Image.ANTIALIAS)
            qout.put([img, tmp.histogram()])
        except Exception:
            print('bad image: ' + img)
    return


def compare_hist(pairs, path):
    for pair in pairs:
        rms = calc_rms(pair[0][1], pair[1][1])
        if rms < 500:
            folder = 'maybe duplicates'
            if rms == 0:
                folder = 'exact duplicates'
            try:
                os.rename(os.path.join(path, pair[0][0]), os.path.join(path, folder, pair[0][0]))
            except Exception:
                pass
            try:
                os.rename(os.path.join(path, pair[1][0]), os.path.join(path, folder, pair[1][0]))
            except Exception:
                pass
    return


def get_time():
    return datetime.datetime.now().strftime("%H:%M:%S")


def chunkify(lst, n):
    return [lst[i::n] for i in range(n)]


def main(path):
    starttime = get_time()
    qout = queue.Queue()
    images = []
    for img in os.listdir(path):
        if os.path.isfile(os.path.join(path, img)):
            images.append(img)
    imglen = len(images)
    print('Resizing ' + str(imglen) + ' Images ' + starttime)
    images = chunkify(images, 4)
    threads = []
    for x in range(4):
        threads.append(threading.Thread(target=make_histogram, args=(images[x], path, qout)))

    [x.start() for x in threads]
    [x.join() for x in threads]

    resizetime = get_time()
    print('Done resizing ' + resizetime)

    histlist = []
    for i in qout.queue:
        histlist.append(i)

    if not os.path.exists(os.path.join(path, 'exact duplicates')):
        os.makedirs(os.path.join(path, 'exact duplicates'))
    if not os.path.exists(os.path.join(path, 'maybe duplicates')):
        os.makedirs(os.path.join(path, 'maybe duplicates'))

    combinations = []
    for img1, img2 in itertools.combinations(histlist, 2):
        combinations.append([img1, img2])

    combicount = len(combinations)
    print('Going through ' + str(combicount) + ' combinations of ' + str(imglen) + ' Images. Please stand by')
    combinations = chunkify(combinations, 4)

    threads = []

    for x in range(4):
        threads.append(threading.Thread(target=compare_hist, args=(combinations[x], path)))

    [x.start() for x in threads]
    [x.join() for x in threads]

    print('\nstarted at ' + starttime)
    print('resizing done at ' + resizetime)
    print('went through ' + str(combicount) + ' combinations of ' + str(imglen) + ' Images')
    print('all done at ' + get_time())


if __name__ == '__main__':
    main(sys.argv[1])  # sys.argv[1] has to be a folder of images to compare

This works, but while the resizing finishes in 15 to 20 minutes, the comparison afterwards runs for hours. At first I assumed the cause was a locking queue that the workers fetched their combinations from, so I replaced it with predefined array chunks. That did not reduce the execution time. I also ran it without moving any files, to rule out a possible hard-drive problem.

Profiling this with cProfile produces the following output.

Resizing 4566 Images 23:51:05
Done resizing 00:05:07
Going through 10421895 combinations of 4566 Images. Please stand by

started at 23:51:05
resizing done at 00:05:07
went through 10421895 combinations of 4566 Images
all done at 03:09:41
10584539 function calls (10584414 primitive calls) in 11918.945 seconds

Ordered by: cumulative time

  ncalls    tottime  percall    cumtime    percall  filename:lineno(function)
    16/1      0.001    0.000  11918.945  11918.945  {built-in method exec}
       1      2.962    2.962  11918.945  11918.945  imcomp.py:3(<module>)
       1     19.530   19.530  11915.876  11915.876  imcomp.py:60(main)
      51  11892.690  233.190  11892.690    233.190  {method 'acquire' of '_thread.lock' objects}
       8      0.000    0.000  11892.507   1486.563  threading.py:1028(join)
       8      0.000    0.000  11892.507   1486.563  threading.py:1066(_wait_for_tstate_lock)
       1      0.000    0.000  11051.467  11051.467  imcomp.py:105(<listcomp>)
       1      0.000    0.000    841.040    841.040  imcomp.py:76(<listcomp>)
10431210      1.808    0.000      1.808      0.000  {method 'append' of 'list' objects}
    4667      1.382    0.000      1.382      0.000  {built-in method stat}

The full profiler output can be found here.
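(For reference, output like the above, sorted by cumulative time, is what Python's standard-library profiler prints; a run of roughly this form would produce it, where the folder path is a placeholder:

python3 -m cProfile -s cumulative imcomp.py /path/to/images

)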

Looking at the fourth row, my guess is that the threads are somehow locking up. But why, and why exactly 51 times regardless of the number of images?

I'm running this on Windows 7, 64-bit.

Thanks in advance.

Best Answer

One major problem is that you're using threads to do work that is at least partly CPU-bound. Because of the Global Interpreter Lock, only one CPython thread can execute at a time, which means you can't take advantage of multiple CPU cores. That makes multithreaded performance for CPU-bound tasks at best no different from single-core execution, and probably worse, since the threads add extra overhead. This is noted in the threading documentation:

CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
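As a minimal illustration of that note (a sketch of my own, not code from the script above), a purely CPU-bound function run four times sequentially versus on four threads finishes in roughly the same time on CPython:

import threading
import time

def burn():
    # Purely CPU-bound work: no I/O, so the GIL serializes it across threads.
    total = 0
    for i in range(10 ** 7):
        total += i * i
    return total

start = time.time()
for _ in range(4):
    burn()
print('sequential: %.2fs' % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=burn) for _ in range(4)]
[t.start() for t in threads]
[t.join() for t in threads]
print('4 threads:  %.2fs' % (time.time() - start))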

To get around the limitations of the GIL, you should do what that documentation suggests and use the multiprocessing library instead of threading:

import multiprocessing
...

qout = multiprocessing.Queue()

for x in range(4):
    threads.append(multiprocessing.Process(target=make_histogram, args=(images[x], path, qout)))

...
for x in range(4):
    threads.append(multiprocessing.Process(target=compare_hist, args=(combinations[x], path)))

As you can see, multiprocessing is for the most part a drop-in replacement for threading, so the change shouldn't be too hard to make. The only complication would be if any of the arguments you pass between processes aren't picklable, though I believe all of yours are. There is also an increased IPC cost for sending Python data structures between processes, but I suspect the benefit of truly parallel computation will outweigh that extra overhead.
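One wrinkle worth flagging beyond what the answer covers: the script collects results by iterating qout.queue, an internal attribute that only queue.Queue exposes; multiprocessing.Queue has no such attribute, so results must be drained with get(). A minimal sketch, assuming make_histogram is modified to put a None sentinel on the queue when it finishes:

# Assumes each worker ends with qout.put(None) to signal it is done.
histlist = []
finished = 0
while finished < 4:          # one sentinel expected per worker process
    item = qout.get()        # blocks until a result (or sentinel) arrives
    if item is None:
        finished += 1
    else:
        histlist.append(item)

Draining before calling join() also avoids the documented multiprocessing pitfall where a child process cannot exit while items it has put are still buffered in the queue's underlying pipe.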

All that said, you may still be somewhat I/O-bound because of the reads and writes to disk. Parallelization won't make your disk I/O any faster, so there isn't much to be done there.
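If restructuring the code slightly is acceptable, multiprocessing.Pool sidesteps the manual process and queue management entirely. The sketch below is my illustration rather than part of the original answer; histogram_for and build_histograms are hypothetical helper names, and it parallelizes the histogram step (the comparison step could be mapped the same way):

import multiprocessing
import os
from PIL import Image

def histogram_for(args):
    # Worker function: must live at module level so Pool can pickle it.
    path, img = args
    try:
        tmp = Image.open(os.path.join(path, img))
        tmp = tmp.resize((800, 600), Image.ANTIALIAS)
        return [img, tmp.histogram()]
    except Exception:
        print('bad image: ' + img)
        return None

def build_histograms(path, images):
    # Each worker runs in its own interpreter, so the GIL no longer
    # serializes the CPU-bound resize/histogram work.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(histogram_for, [(path, img) for img in images])
    return [r for r in results if r is not None]

On Windows, where you're running, the call into the Pool must sit under an if __name__ == '__main__': guard, since child processes re-import the main module on startup.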

Regarding "python - How can I optimize the performance of an image comparison script?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/25879558/
