
python - Vectorizing a function that operates on sub-arrays of an ndarray


I have a function that operates on each 2D slice of a 3D array. How can I vectorize the function to avoid the loop and improve performance? For example:

def interp_2d(x0, y0, z0, x1, y1):
    # x0, y0 and z0 are 2D arrays
    # x1 and y1 are 2D arrays
    # perform the 2D interpolation
    return z1

# Now I want to call interp_2d for each 2D slice of z0_3d as follows:
for k in range(z0_3d.shape[2]):
    z1_3d[:, :, k] = interp_2d(x0, y0, z0_3d[:, :, k], x1, y1)

Best answer

You cannot vectorize interp_2d without re-implementing it. However, assuming interp_2d is some kind of interpolation, the operation is probably linear. That is, lambda z0: interp_2d(x0, y0, z0, x1, y1) is probably equivalent to np.dot(M, z0), where M is some (possibly sparse) matrix that depends on x0, y0, x1 and y1. By calling interp_2d directly, you implicitly recompute that matrix on every call, even though it is identical each time. It is more efficient to work out what that matrix is once and then re-apply it to each new z0.
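If the operation really is linear in z0, one concrete way to exploit this (a sketch added here for illustration, not part of the original answer) is to recover M by probing interp_2d with unit basis slices and then apply M to every slice of z0_3d in a single matrix product. The helper name build_interp_matrix is hypothetical, and probing with basis vectors is only practical for modest grid sizes; for large grids you would construct the (sparse) weight matrix directly from x0, y0, x1 and y1.

import numpy as np

def build_interp_matrix(interp_2d, x0, y0, x1, y1):
    # Recover M such that interp_2d(x0, y0, z0, x1, y1)
    # == (M @ z0.ravel()).reshape(x1.shape),
    # assuming interp_2d is linear in z0 (no constant offset).
    n_in, n_out = x0.size, x1.size
    M = np.empty((n_out, n_in))
    for j in range(n_in):
        basis = np.zeros(n_in)
        basis[j] = 1.0                      # j-th unit basis slice
        M[:, j] = interp_2d(x0, y0, basis.reshape(x0.shape), x1, y1).ravel()
    return M

# Compute M once, then interpolate all K slices of z0_3d with one product:
# M = build_interp_matrix(interp_2d, x0, y0, x1, y1)
# z1_3d = (M @ z0_3d.reshape(x0.size, -1)).reshape(x1.shape + (z0_3d.shape[2],))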

Here is a very simple example for the case of 1D interpolation:

x0 = [0., 1.]
x1 = 0.3
z0_2d = "some very long array with shape=(2, n)"

def interp_1d(x0, z0, x1):
    """x0 and z0 are length-2 1D arrays; x1 is a float between x0[0] and x0[1]."""
    delta_x = x0[1] - x0[0]
    w0 = (x0[1] - x1) / delta_x  # weight of z0[0]; equals 1 when x1 == x0[0]
    w1 = (x1 - x0[0]) / delta_x  # weight of z0[1]; equals 1 when x1 == x0[1]
    return w0 * z0[0] + w1 * z0[1]

# The slow way.
for i in range(n):
    z1_2d[i] = interp_1d(x0, z0_2d[:, i], x1)
# Notice that the intermediate products w0 and w1 are the same on each
# iteration, but we recalculate them anyway.

# The fast way.
def interp_1d_weights(x0, x1):
    delta_x = x0[1] - x0[0]
    w0 = (x0[1] - x1) / delta_x
    w1 = (x1 - x0[0]) / delta_x
    return w0, w1

w0, w1 = interp_1d_weights(x0, x1)
z1_2d = w0 * z0_2d[0, :] + w1 * z0_2d[1, :]

If n is very large, expect a speedup of well over a factor of 100.
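To check that claim on your own machine, a quick benchmark along the following lines can be used (my own sketch: the array size n and the random test data are assumptions, not part of the original answer; it reuses interp_1d and interp_1d_weights from above).

import timeit
import numpy as np

n = 100_000                      # assumed problem size for the benchmark
x0 = np.array([0.0, 1.0])
x1 = 0.3
z0_2d = np.random.rand(2, n)     # stand-in for the "very long array"

def slow():
    z1 = np.empty(n)
    for i in range(n):
        z1[i] = interp_1d(x0, z0_2d[:, i], x1)
    return z1

def fast():
    w0, w1 = interp_1d_weights(x0, x1)
    return w0 * z0_2d[0, :] + w1 * z0_2d[1, :]

assert np.allclose(slow(), fast())               # both paths agree
print("slow:", timeit.timeit(slow, number=1))
print("fast:", timeit.timeit(fast, number=1))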

Regarding python - vectorizing a function that operates on sub-arrays of an ndarray, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/16966241/
