python - Performance of updating multiple key-value pairs in a dictionary


I'm currently working in a modelling environment in python which uses dicts to share connection properties of connected parts. My current way of doing this takes about 15-20% of my total program run time, which is quite a lot with several million iterations...

So I found myself looking into how to speed up updating multiple values in a dict and getting multiple values from a dict.
My example dict looks like this (the number of key-value pairs is expected to stay in the current range of 300 to 1000, therefore I filled it to this size):

import numpy as np

val_dict = {'a': 5.0, 'b': 18.8, 'c': -55/2}
for i in range(200):
    val_dict[str(i)] = i
    val_dict[i] = i**2

keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10)
length = new_values.shape[0]

While the shapes of keys and new_values as well as the number of key-value pairs in val_dict will always stay the same, the values of new_values change at every iteration and thus have to be updated at every iteration (and also retrieved at every iteration from another part of my code).

I timed several approaches, and using itemgetter from the operator module to get multiple values from the dict seems to be the fastest. I can define the getter before the iterations start, since the required variables are constant:

from operator import itemgetter

getter = itemgetter(*keys)
%timeit getter(val_dict)
%timeit getter(val_dict)
The slowest run took 10.45 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 140 ns per loop

I guess this is quite good, or is there any faster way?
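(As a side note added here, not part of the original timings: an obvious baseline to compare itemgetter against is a plain list comprehension over the keys, which returns a list instead of a tuple but is otherwise equivalent.)

# baseline for comparison only - not from the original post;
# builds a list instead of the tuple that itemgetter returns
def comp_getter(val_dict, keys):
    return [val_dict[k] for k in keys]

%timeit comp_getter(val_dict, keys)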

But when assigning these values to a numpy array via a mask, it gets really slow:

result = np.ones(25)
idx = np.array((0, 5, 8, -1))

def getter_fun(result, idx, getter, val_dict):
    result[idx] = getter(val_dict)

%timeit getter_fun(result, idx, getter, val_dict)
The slowest run took 11.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.77 µs per loop

Is there any way to improve this? I guess the tuple unpacking is the worst part here...
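(One possible sketch, not from the original post: the intermediate Python tuple can be avoided either by converting the getter result explicitly with np.fromiter or by assigning element-wise; whether either is actually faster depends on the number of keys and is worth timing on your own data.)

# both variants below are sketches, not from the original post
def getter_fun_fromiter(result, idx, getter, val_dict):
    # convert the tuple returned by the getter before the fancy-index assignment
    result[idx] = np.fromiter(getter(val_dict), dtype=result.dtype, count=len(idx))

def getter_fun_loop(result, idx, keys, val_dict):
    # skip the getter entirely and assign element-wise
    for j, k in zip(idx, keys):
        result[j] = val_dict[k]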

For setting multiple values, I timed several ways to do it: a function that unpacks the values, a function that updates with the given key-value pairs, a function using a for-loop, a dict comprehension, and a generator function.

def unpack_putter(val_dict, keys, new_values):
    (val_dict[keys[0]],
     val_dict[keys[1]],
     val_dict[keys[2]],
     val_dict[keys[3]]) = new_values

%timeit unpack_putter(val_dict, keys, new_values)
The slowest run took 8.85 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.29 µs per loop

def upd_putter(val_dict, keys, new_values):
    val_dict.update({keys[0]: new_values[0],
                     keys[1]: new_values[1],
                     keys[2]: new_values[2],
                     keys[3]: new_values[3]})

%timeit upd_putter(val_dict, keys, new_values)
The slowest run took 15.22 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 963 ns per loop

def for_putter(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

%timeit for_putter(val_dict, keys, new_values, length)
The slowest run took 12.31 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.14 µs per loop

def dictcomp_putter(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})

%timeit dictcomp_putter(val_dict, keys, new_values, length)
The slowest run took 7.13 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.69 µs per loop

def gen_putter(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))

%timeit gen_putter(val_dict, keys, new_values, length)
The slowest run took 10.03 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.54 µs per loop

The upd_putter would be the fastest, but can I use it somehow with alternating shapes of keys and new_values (they are still constant during the iterations, but each considered part has a different number of keys to update, which has to be determined by user input)? Interestingly, the for-loop seems quite ok to me. So I guess I'm doing it wrong and there must be a faster way.

One last thing to consider: I'll most probably use Cython soon, so I guess this would make the for-loop more favorable? Or I could use joblib to parallelize the for-loop. I also thought about using numba, but then I'd have to get rid of all the dicts...

Hopefully you can help me with this problem.

EDIT for MSeifert (even though I'm not sure if this is what you meant):

tuplelist = list()
for i in range(200):
    tuplelist.append(i)
    tuplelist.append(str(i))
keys_long = tuple(tuplelist)
new_values_long = np.arange(0, 400)

%timeit for_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 73.5 µs per loop
%timeit dictcomp_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 96.4 µs per loop
%timeit gen_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 129 µs per loop

Best Answer

First, let's focus on two very important things that are not related to performance: maintainability and scalability.

The first two approaches with manual indexing:

(val_dict[keys[0]],
 val_dict[keys[1]],
 val_dict[keys[2]],
 val_dict[keys[3]]) = new_values

val_dict.update({keys[0]: new_values[0],
                 keys[1]: new_values[1],
                 keys[2]: new_values[2],
                 keys[3]: new_values[3]})

hard-code (a maintenance nightmare) the number of elements you insert, so these approaches don't scale well. Therefore I won't include them in the rest of this answer. I'm not saying they're bad - they just don't scale, and it's hard to compare timings of functions that only work for a specific number of entries.

First, let me introduce two more approaches based on zip (use itertools.izip if you're on python-2.x):

def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val

These would be the "most pythonic" ways of solving this (at least in my opinion).

I also changed new_values to a list, because iterating over a NumPy array is worse than converting the array to a list first and then iterating over that list. If you're interested in the details, I elaborated on that in another answer.
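(A minimal sketch added here, not part of the original answer, to illustrate that claim: iterate over the NumPy array directly versus converting it once with .tolist() and iterating over the resulting Python list.)

import numpy as np

arr = np.arange(1000)

def iter_array(a):
    # iterates over the NumPy array directly (yields NumPy scalars)
    for x in a:
        pass

def iter_list(a):
    # converts once to a Python list, then iterates over plain ints
    for x in a.tolist():
        pass

%timeit iter_array(arr)
%timeit iter_list(arr)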

Let's see how these approaches perform:

import numpy as np

def old_for(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

def old_update_comp(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})

def old_update_gen(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))

def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val

val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10).tolist()
length = len(new_values)
%timeit old_for(val_dict, keys, new_values, length)
# 4.1 µs ± 183 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 9.56 µs ± 180 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 17 µs ± 332 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 5.92 µs ± 123 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 3.23 µs ± 84.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

And with more keys and values:

val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = range(1000)
new_values = range(1000)
length = len(new_values)
%timeit old_for(val_dict, keys, new_values, length)
# 1.08 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 1.08 ms ± 13.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 1.44 ms ± 31.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 242 µs ± 3.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 346 µs ± 8.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

So for bigger inputs my approaches seem to be much faster (2-5x) than yours.

You could try to improve your approaches using Cython; unfortunately Cython doesn't support comprehensions inside cdef or cpdef functions, so I only cythonized the other approaches:

%load_ext cython

%%cython

cpdef new1_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    val_dict.update(zip(keys, new_values.tolist()))

cpdef new2_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    for key, val in zip(keys, new_values.tolist()):
        val_dict[key] = val

cpdef new3_cy(dict val_dict, tuple keys, int[:] new_values, Py_ssize_t length):
    cdef Py_ssize_t i
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

This time I made keys a tuple and new_values a NumPy array so that they work with the Cython functions defined above:

import numpy as np

val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(4))
new_values = np.arange(4)
length = len(new_values)
%timeit new1(val_dict, keys, new_values, length)
# 7.88 µs ± 317 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 4.4 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 5.51 µs ± 56.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(1000))
new_values = np.arange(1000)
length = len(new_values)
%timeit new1_cy(val_dict, keys, new_values, length)
# 208 µs ± 9.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 231 µs ± 13.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new3_cy(val_dict, keys, new_values, length)
# 156 µs ± 4.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

So if you have a tuple and a NumPy array, you can get almost a factor-2 speedup with new3_cy, the function using plain indexing and a memoryview - at least if you have a lot of key-value pairs that need to be inserted.
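(A practical note added here, not from the original answer: the int[:] memoryview of new3_cy only accepts an array whose dtype matches a C int, so on platforms where np.arange defaults to 64-bit integers the array has to be created with an explicit dtype.)

# make the dtype explicit so it matches the int[:] memoryview of new3_cy
new_values = np.arange(1000, dtype=np.intc)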


Note that I didn't address getting multiple values out of the dict, because operator.itemgetter is probably the best way to do that.

Regarding python - Performance of updating multiple key-value pairs in a dictionary, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45882166/
