python - Multithreaded image processing with OpenCV in Python

I'm quite new to Python and I'm having trouble parallelizing part of my algorithm. Consider an input image that needs to be thresholded in a certain way at the pixel level. Since the algorithm only considers a specific region to compute the threshold, I want to process each block of the image in a separate thread/process. This is where I'm stuck: I can't find a way for these threads to work on the same image, or a way to merge the results into a new image. Since I come from the Java world, I usually work around problems so that I don't interfere with other threads. Therefore I simply tried passing the image to each process.

import concurrent.futures
from concurrent.futures import ProcessPoolExecutor


def thresholding(img):
    stepSize = int(img.shape[0] / 10)
    futures = []
    with ProcessPoolExecutor(max_workers=4) as e:
        for y in range(0, img.shape[0], stepSize):
            for x in range(0, img.shape[1], stepSize):
                futures.append(e.submit(thresholdThread, y, x, img))
        concurrent.futures.wait(futures)
    return img


def thresholdThread(y, x, img):
    window_size = int(img.shape[0] / 10)
    window_shape = (window_size, window_size)
    window = img[y:y + window_shape[1], x:x + window_shape[0]]
    # getThresholdBounds is my own helper (not shown here)
    upper_bound, lower_bound, avg = getThresholdBounds(window, 0.6)

    for y_2 in range(0, window.shape[0]):
        for x_2 in range(0, window.shape[1]):
            tmp = img[y + y_2, x + x_2]
            img[y + y_2, x + x_2] = tmp if (tmp >= upper_bound or tmp <= lower_bound) else avg
    return str(avg)

As far as I understand Python, this won't work because each process gets its own copy of img. But since img is a float ndarray from numpy, I don't know whether and how I could use the shared objects described here.

FYI: I'm using Python 3.6.9. I do know that 3.7 has been released, but reinstalling everything so that I can use Spyder and OpenCV again is not that easy.
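For what it's worth, on 3.6 one way to use the shared objects mentioned above is multiprocessing's shared ctypes Array (multiprocessing.shared_memory only arrived in 3.8): allocate one shared buffer, copy the image into it once, and let every worker re-wrap that same buffer as a NumPy array. The sketch below is only an illustration of the idea, not the accepted answer's approach; the names init_worker, worker_threshold and threshold_shared are made up, the per-window operation is a placeholder for the real thresholding, and a float64 single-channel image is assumed. On Windows (spawn start method) it additionally needs the usual if __name__ == "__main__" guard.

import ctypes
import numpy as np
from multiprocessing import Pool, Array

_shared = {}                    # filled in each worker by init_worker

def init_worker(buf, shape):
    # wrap the shared buffer as an ndarray once per worker; nothing is copied
    _shared['img'] = np.frombuffer(buf, dtype=np.float64).reshape(shape)

def worker_threshold(args):
    y, x, step = args
    window = _shared['img'][y:y + step, x:x + step]
    m = window.mean()
    window[window < m] = m      # placeholder for the real per-window thresholding

def threshold_shared(img):
    step = img.shape[0] // 10
    buf = Array(ctypes.c_double, img.size, lock=False)   # one shared buffer
    np.frombuffer(buf, dtype=np.float64).reshape(img.shape)[:] = img
    tasks = [(y, x, step)
             for y in range(0, img.shape[0], step)
             for x in range(0, img.shape[1], step)]
    with Pool(processes=4, initializer=init_worker, initargs=(buf, img.shape)) as p:
        p.map(worker_threshold, tasks)
    return np.frombuffer(buf, dtype=np.float64).reshape(img.shape).copy()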

Best answer

You are not taking advantage of any of Numpy's vectorization techniques, which can significantly reduce processing time. I assume that is why you want to multiprocess over windows/blocks of the image - I don't know what Docker is, so I don't know whether it is a factor in your multiprocessing approach.

Here is a vectorized solution, with the caveat that it may exclude the bottom and right edge pixels from the operation. If that is not acceptable, there is no need to read further.

In your example, the right and bottom edge windows are most likely not the same size as the other windows. It looks like you chose a factor of ten arbitrarily to split the image - if ten was indeed arbitrary, you can easily optimize for the bottom and right edge losses - I'll post that function at the end of the answer.

The image needs to be reshaped into patches to vectorize the operations. I used the sklearn function sklearn.feature_extraction.image._extract_patches because it is convenient and allows creating non-overlapping patches (which appears to be what you want). Note the underscore prefix - this used to be an exposed function, image.extract_patches, but that has been deprecated. The function uses numpy.lib.stride_tricks.as_strided - it might be possible to just reshape the array, but I haven't tried that.
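As an aside on the "maybe just reshape" idea: for strictly non-overlapping patches a plain reshape plus transpose yields the same patch layout. This is only a sketch of that idea (the helper name patchify_reshape is made up), and it assumes the image dimensions are exact multiples of the patch size:

import numpy as np

def patchify_reshape(img, hsize, wsize):
    # non-overlapping (hsize, wsize) patches via reshape + transpose
    h, w, c = img.shape
    assert h % hsize == 0 and w % wsize == 0, "dimensions must divide evenly"
    return (img.reshape(h // hsize, hsize, w // wsize, wsize, c)
               .transpose(0, 2, 1, 3, 4))

demo = np.zeros((4864, 3546, 3))
print(patchify_reshape(demo, 608, 394).shape)    # (8, 9, 608, 394, 3)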

Setup

import numpy as np
from sklearn.feature_extraction import image
img = np.arange(4864*3546*3).reshape(4864,3546,3)
# all shape dimensions in the following example derived from img's shape

Define the patch size (see opt_size below) and reshape the image.

hsize, h_remainder, h_windows = opt_size(img.shape[0])
wsize, w_remainder, w_windows = opt_size(img.shape[1])

# rgb - not designed for rgba
if img.ndim == 3:
    patch_shape = (hsize, wsize, img.shape[-1])
else:
    patch_shape = (hsize, wsize)

patches = image._extract_patches(img, patch_shape=patch_shape,
                                 extraction_step=patch_shape)
patches = patches.squeeze()

patches is a view of the original array, so changes to it are seen in the original. Its shape is (8, 9, 608, 394, 3): an 8x9 grid of (608, 394, 3) windows/patches.
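A quick way to confirm the view claim (using the setup above):

>>> np.shares_memory(img, patches)
True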

Find the upper and lower bounds for each patch; compare every pixel against its patch's bounds; extract the indices of each pixel that lies between its bounds and needs to be changed.

lower = patches.min((2,3)) * .6
lower = lower[...,None,None,:]
upper = patches.max((2,3)) * .6
upper = upper[...,None,None,:]
indices = np.logical_and(patches > lower, patches < upper).nonzero()

Find the mean of each patch, then change the required pixel values. Since avg has shape (8, 9, 3), indexing it with the patch-row, patch-column and channel components of indices assigns each selected pixel the per-channel mean of its own patch.

avg = patches.mean((2,3))    # shape (8,9,3)
patches[indices] = avg[indices[0],indices[1],indices[-1]]

A function that puts it all together:

def g(img, opt_shape=False):
    original_shape = img.shape

    # determine patch shape
    if opt_shape:
        hsize, h_remainder, h_windows = opt_size(img.shape[0])
        wsize, w_remainder, w_windows = opt_size(img.shape[1])
    else:
        patch_size = img.shape[0] // 10
        hsize, wsize = patch_size, patch_size

    # constraint checking here(?) for
    #   number of windows,
    #   orphaned pixels

    if img.ndim == 3:
        patch_shape = (hsize, wsize, img.shape[-1])
    else:
        patch_shape = (hsize, wsize)

    patches = image._extract_patches(img, patch_shape=patch_shape,
                                     extraction_step=patch_shape)
    # squeeze??
    patches = patches.squeeze()

    # assume color (h,w,3)
    lower = patches.min((2,3)) * .6
    lower = lower[...,None,None,:]
    upper = patches.max((2,3)) * .6
    upper = upper[...,None,None,:]
    indices = np.logical_and(patches > lower, patches < upper).nonzero()

    avg = patches.mean((2,3))
    ## del lower, upper, mask
    patches[indices] = avg[indices[0], indices[1], indices[-1]]


def opt_size(size):
    '''Maximize number of windows, minimize loss at the edge

    size -> int
    Number of "windows" constrained to 4-10
    Returns (int,int,int)
        size in pixels,
        loss in pixels,
        number of windows
    '''
    sizes = [(divmod(size, n), n) for n in range(4, 11)]
    n_windows = 0
    remainder = 99
    patch_size = 0
    for ((p, r), n) in sizes:
        if r <= remainder and n > n_windows:
            remainder = r
            n_windows = n
            patch_size = p
    return patch_size, remainder, n_windows
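As a usage illustration, for the 4864x3546 example image this picks the patch grid quoted earlier:

>>> opt_size(4864)
(608, 0, 8)
>>> opt_size(3546)
(394, 0, 9)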

Tested against your naive process - I hope I reproduced it correctly. Roughly a 35x improvement for a 4864x3546 color image. There are probably further optimizations; maybe some wizard will comment.

Testing with a block factor of ten:

# yours
def f(img):
    window_size = int(img.shape[0] / 10)
    window_shape = (window_size, window_size)

    for y in range(0, img.shape[0], window_size):
        for x in range(0, img.shape[1], window_size):

            window = img[y:y + window_shape[1], x:x + window_shape[0]]
            upper_bound = window.max((0,1)) * .6
            lower_bound = window.min((0,1)) * .6
            avg = window.mean((0,1))

            for y_2 in range(0, window.shape[0]):
                for x_2 in range(0, window.shape[1]):
                    tmp = img[y + y_2, x + x_2]
                    indices = np.logical_and(tmp < upper_bound, tmp > lower_bound)
                    tmp[indices] = avg[indices]


img0 = np.arange(4864*3546*3).reshape(4864,3546,3)
# get everything the same shape
size = img0.shape[0] // 10
h, w = size*10, size * (img0.shape[1]//size)
img1 = img0[:h,:w].copy()
img2 = img1.copy()

assert np.all(np.logical_and(img1 == img2, img2 == img0[:h,:w]))
f(img1)    # ~44 seconds
g(img2)    # ~1.2 seconds
assert np.all(img1 == img2)
if np.all(img2 == img0[:h,:w]):
    raise Exception('did not change')

indices is an index array. It is a tuple of arrays, one array per dimension. indices[0][0], indices[1][0], indices[2][0] would be the index of a single element in a 3d array. The full tuple can be used to index multiple elements of the array.

>>> indices
(array([1, 0, 2]), array([1, 0, 0]), array([1, 1, 1]))
>>> list(zip(*indices))
[(1, 1, 1), (0, 0, 1), (2, 0, 1)]
>>> arr = np.arange(27).reshape(3,3,3)
>>> arr[1,1,1], arr[0,0,1], arr[2,0,1]
(13, 1, 19)
>>> arr[indices]
array([13, 1, 19])

# arr[indices] <-> np.array([arr[1,1,1],arr[0,0,1],arr[2,0,1]])

np.logical_and(patches > lower, patches < upper) returns a boolean array, and nonzero() returns the indices of all elements whose value is True.
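A tiny illustration of that combination, reusing the arr from above:

>>> np.logical_and(arr > 5, arr < 9).nonzero()
(array([0, 0, 0]), array([2, 2, 2]), array([0, 1, 2]))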

About python - Multithreaded image processing with OpenCV in Python: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59520545/
