
python - How to improve the performance of array masking from the result of numpy.searchsorted in Python?


I want to generate a mask from the result of numpy.searchsorted():

import numpy as np

# generate test examples
x = np.random.rand(1000000)
y = np.random.rand(200)

# sort x
idx = np.argsort(x)
sorted_x = np.take_along_axis(x, idx, axis=-1)

# searchsort y in x
pt = np.searchsorted(sorted_x, y)
pt is an array. I then want to create a boolean mask of size (200, 1000000) with True values at the indices idx[0:pt[i]], and I came up with a for loop like this:
mask = np.zeros((200, 1000000), dtype='bool')
for i in range(200):
    mask[i, idx[0:pt[i]]] = True
Does anyone have an idea how to speed up this loop?

Best Answer

Approach #1
Based on the new information picked up from the OP's comments (namely that only y changes in real time), we can pre-process things around x and hence do much better. We will create a hashing array that stores stepped masks. For the part involving y, we simply index into that hashing array with the indices obtained from searchsorted, which approximates the final mask array. The last step of assigning the remaining booleans can be offloaded to numba, given its ragged nature. This would also be beneficial if we decide to scale up the length of y.
Let's look at the implementation.
Pre-processing with x:

sidx = x.argsort()
ssidx = x.argsort().argsort()

# Choose a scale factor.
# 1. A small one would store more mapping info, hence faster but occupy more mem
# 2. A big one would store less mapping info, hence slower, but memory efficient.
scale_factor = 100
mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx
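To make the hashing array concrete, here is a tiny illustration of my own (not from the original answer; the *_small names are made up): row k of mapar is the mask "rank < k*scale_factor", expressed in the original (unsorted) order of x.
x_small = np.array([0.5, 0.1, 0.9, 0.3])
ssidx_small = x_small.argsort().argsort()   # rank of each element: [2, 0, 3, 1]
scale_small = 2
mapar_small = np.arange(0, len(x_small), scale_small)[:, None] > ssidx_small
# mapar_small:
# [[False False False False]    <- "rank < 0": nothing selected yet
#  [False  True False  True]]   <- "rank < 2": the two smallest values, 0.1 and 0.3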
Remaining steps with y:
import numba as nb

@nb.njit(parallel=True,fastmath=True)
def array_masking3(out, starts, idx, sidx):
    # Fill in the "ragged" remainder: for each row i, mark the ranks between the
    # pre-filled step boundary starts[i] and the exact cutoff idx[i] as True.
    N = len(out)
    for i in nb.prange(N):
        for j in nb.prange(starts[i], idx[i]):
            out[i, sidx[j]] = True
    return out

idx = np.searchsorted(x, y, sorter=sidx)     # exact cutoff rank for each y
s0 = idx//scale_factor                       # nearest pre-computed step at or below it
starts = s0*scale_factor
out = mapar[s0]                              # advanced indexing -> per-row copies of the stepped masks
out = array_masking3(out, starts, idx, sidx)
Benchmarking
In [2]: x = np.random.rand(1000000)
...: y = np.random.rand(200)

In [3]: ## Pre-processing step with "x"
...: sidx = x.argsort()
...: ssidx = x.argsort().argsort()
...: scale_factor = 100
...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx

In [4]: %%timeit
...: idx = np.searchsorted(x,y,sorter=sidx)
...: s0 = idx//scale_factor
...: starts = s0*scale_factor
...: out = mapar[s0]
...: out = array_masking3(out, starts, idx, sidx)
41 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

# A 1/10th smaller hashing array has similar timings
In [7]: scale_factor = 1000
...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx

In [8]: %%timeit
...: idx = np.searchsorted(x,y,sorter=sidx)
...: s0 = idx//scale_factor
...: starts = s0*scale_factor
...: out = mapar[s0]
...: out = array_masking3(out, starts, idx, sidx)
40.6 ms ± 196 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

# @silgon's soln
In [5]: %timeit x[np.newaxis,:] < y[:,np.newaxis]
138 ms ± 896 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
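
As a sanity check of my own (not part of the original answer): for random floats with no exact ties, the hashing-array result should equal the brute-force broadcast above, since both mark exactly the entries of x that are strictly smaller than y[i].
idx = np.searchsorted(x, y, sorter=sidx)
s0 = idx // scale_factor
starts = s0 * scale_factor
out = array_masking3(mapar[s0], starts, idx, sidx)
assert np.array_equal(out, x[np.newaxis, :] < y[:, np.newaxis])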

Approach #2
This one borrows from the OP's solution.
import numba as nb

@nb.njit(parallel=True)
def array_masking2(mask1D, mask_out, idx, pt):
    n = len(idx)
    for j in nb.prange(len(pt)):
        if mask1D[j]:
            # Row was pre-filled with True; un-set the (fewer) trailing indices.
            for i in nb.prange(pt[j], n):
                mask_out[j, idx[i]] = False
        else:
            # Row was pre-filled with False; set the (fewer) leading indices.
            for i in nb.prange(pt[j]):
                mask_out[j, idx[i]] = True
    return mask_out

def app2(idx, pt):
    m,n = len(pt), len(idx)
    mask1 = pt>len(x)//2                                  # rows where more than half the entries end up True
    mask2 = np.broadcast_to(mask1[:,None], (m,n)).copy() # pre-fill those rows with True, the rest with False
    return array_masking2(mask1, mask2, idx, pt)
So the idea is: once more than half of a row's indices need to be set to True, we pre-assign that whole row as True and instead set the remaining indices to False. This results in fewer memory writes and hence a noticeable performance boost.
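A minimal sketch of how this plugs into the question's variables (my own wiring, not from the original answer): idx is the argsort of x and pt holds the searchsorted positions.
idx = np.argsort(x)
pt = np.searchsorted(x[idx], y)
mask = app2(idx, pt)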
Benchmarking
The OP's solution:
@nb.njit(parallel=True,fastmath=True)
def array_masking(mask, idx, pt):
    for j in nb.prange(pt.shape[0]):
        for i in nb.prange(pt[j]):
            mask[j, idx[i]] = True
    return mask

def app1(idx, pt):
    m,n = len(pt), len(idx)
    mask = np.zeros((m, n), dtype='bool')
    return array_masking(mask, idx, pt)
Timings -
In [5]: np.random.seed(0)
...: x = np.random.rand(1000000)
...: y = np.random.rand(200)

In [6]: %timeit app1(idx, pt)
264 ms ± 8.91 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [7]: %timeit app2(idx, pt)
165 ms ± 3.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Regarding python - How to improve the performance of array masking from the result of numpy.searchsorted in Python?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64497080/
