
Vectorizing taking longer than loop




My function computes a Lorentzian given freq, fwhm, and amp. I want to vectorize it so that it does the computation for lists of freqs, fwhms, and amps:


import numpy as np


def lorz1(freq_series, freq, fwhm, amp):
    numerator = fwhm
    denominator = (2*np.pi) * ((freq_series[:,None] - freq)**2 + fwhm**2/4)
    lor = numerator / denominator
    main_peak = amp*(lor/np.linalg.norm(lor, axis=0))
    return np.sum(main_peak, axis=1)


def lorz2(freq_series, freq, fwhm, amp):
    numerator = fwhm[:,None]
    denominator = (2*np.pi) * ((freq_series - freq[:,None])**2 + fwhm[:,None]**2/4)
    lor = numerator / denominator
    main_peak = amp[:,None]*(lor/np.linalg.norm(lor, axis=1)[:,None])
    return np.sum(main_peak, axis=0)


def lorz3(freq_series, freq, fwhm, amp):
    numerator = fwhm
    denominator = (2*np.pi) * ((freq_series - freq)**2 + fwhm**2/4)
    lor = numerator / denominator
    main_peak = amp*(lor/np.linalg.norm(lor))
    return main_peak


series = np.linspace(0,100,50000)
freq = np.random.uniform(5,50,50)
fwhm = np.random.uniform(0.01,0.05,50)
amps = np.random.uniform(5,500,50)

Timing:



%timeit lorz1(series, freq, fwhm, amps)


38.4 ms ± 1.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)



%timeit lorz2(series, freq, fwhm, amps)


29.8 ms ± 1.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)



%timeit np.sum(np.array([lorz3(series, item1, item2, item3)
                         for (item1, item2, item3) in zip(freq, fwhm, amps)]), axis=0)


24.1 ms ± 5.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)



Where am I going wrong with the vectorization in lorz1 and lorz2? Aren't they supposed to be faster than lorz3?



More comments

The result for np.sum(np.array([lorz3(... is different from the other two. Is that by design?


My bad, the results for the first two are supposed to be the same as the third. I actually did not realize they were different. Is that where the problem is? Assume the third is "correct" and I want to vectorize that operation instead of passing the arguments one by one in a for loop.

@Homer512 Fixed that problem.


My working hypothesis (untested): something like freq_series - freq[:,None] generates an array shaped (50, 50000). That's about 19 MiB of memory. Your partially vectorized lorz3 probably fits into your CPU cache much better than the fully vectorized lorz2.
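A quick way to check the size of that intermediate, using the arrays from the question (my own sketch, not part of the original comment):

import numpy as np

series = np.linspace(0, 100, 50000)
freq = np.random.uniform(5, 50, 50)

# The broadcasted intermediate that lorz2 builds internally:
diff = series - freq[:, None]
print(diff.shape)            # (50, 50000)
print(diff.nbytes / 2**20)   # ~19.07 MiB of float64 data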

@RomanPerekhrest That's just folding the outer loop (in the last timeit statement) into the function. The total number of operations should be the same unless I'm missing something.

Top answer

I did some further profiling using two versions:



Version 1:



#!/usr/bin/env python3

import numpy as np


def lorz2(freq_series, freq, fwhm, amp):
    numerator = fwhm[:,None]
    denominator = (2*np.pi) * ((freq_series - freq[:,None])**2 + fwhm[:,None]**2/4)
    lor = numerator / denominator
    main_peak = amp[:,None]*(lor/np.linalg.norm(lor, axis=1)[:,None])
    return np.sum(main_peak, axis=0)


series = np.linspace(0,100,50000)
freq = np.random.uniform(5,50,50)
fwhm = np.random.uniform(0.01,0.05,50)
amps = np.random.uniform(5,500,50)

for _ in range(100):
    lorz2(series, freq, fwhm, amps)

and version 2:



#!/usr/bin/env python3


import numpy as np


def lorz3(freq_series, freq, fwhm, amp):
    numerator = fwhm
    denominator = (2*np.pi) * ((freq_series - freq)**2 + fwhm**2/4)
    lor = numerator / denominator
    main_peak = amp*(lor/np.linalg.norm(lor))
    return main_peak


series = np.linspace(0,100,50000)
freq = np.random.uniform(5,50,50)
fwhm = np.random.uniform(0.01,0.05,50)
amps = np.random.uniform(5,500,50)


for _ in range(100):
    sum(lorz3(series, item1, item2, item3)
        for (item1, item2, item3) in zip(freq, fwhm, amps))

Notice how I tweaked the summation for lorz3 into a plain old Python sum. This is faster in my tests since it avoids the temporary array construction.

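For comparison, the two summation idioms side by side (assuming the definitions above; only the second is used in version 2):

# Materializes a full (50, 50000) array of per-peak results, then reduces it:
total = np.sum(np.array([lorz3(series, f, w, a)
                         for f, w, a in zip(freq, fwhm, amps)]), axis=0)

# Accumulates into a single (50000,) array, never building the big temporary:
total = sum(lorz3(series, f, w, a)
            for f, w, a in zip(freq, fwhm, amps))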


Here are the results of some profiling I did:



perf stat -ddd ./lorz2.py

Performance counter stats for './lorz2.py':

2729.16 msec task-clock:u # 1.000 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
217114 page-faults:u # 79.554 K/sec
8192141440 cycles:u # 3.002 GHz (38.43%)
3178961202 instructions:u # 0.39 insn per cycle (46.12%)
426575242 branches:u # 156.303 M/sec (53.81%)
2177628 branch-misses:u # 0.51% of all branches (61.51%)
42020185035 slots:u # 15.397 G/sec (69.20%)
323473974 topdown-retiring:u # 0.6% Retiring (69.20%)
33616148028 topdown-bad-spec:u # 67.1% Bad Speculation (69.20%)
371211166 topdown-fe-bound:u # 0.7% Frontend Bound (69.20%)
15767347418 topdown-be-bound:u # 31.5% Backend Bound (69.20%)
813550722 L1-dcache-loads:u # 298.096 M/sec (69.19%)
546814255 L1-dcache-load-misses:u # 67.21% of all L1-dcache accesses (69.21%)
82889242 LLC-loads:u # 30.372 M/sec (69.22%)
67633317 LLC-load-misses:u # 81.59% of all LL-cache accesses (69.24%)
<not supported> L1-icache-loads:u
9705348 L1-icache-load-misses:u # 0.00% of all L1-icache accesses (30.81%)
864895659 dTLB-loads:u # 316.909 M/sec (30.79%)
117310 dTLB-load-misses:u # 0.01% of all dTLB cache accesses (30.78%)
<not supported> iTLB-loads:u
85530 iTLB-load-misses:u # 0.00% of all iTLB cache accesses (30.76%)
<not supported> L1-dcache-prefetches:u
<not supported> L1-dcache-prefetch-misses:u

2.729696014 seconds time elapsed

1.932708000 seconds user
0.796504000 seconds sys

And here is the faster version:



Performance counter stats for './lorz3.py':

878.49 msec task-clock:u # 0.999 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
52869 page-faults:u # 60.182 K/sec
3704170220 cycles:u # 4.217 GHz (38.22%)
3735225800 instructions:u # 1.01 insn per cycle (45.96%)
568575253 branches:u # 647.221 M/sec (53.70%)
2580294 branch-misses:u # 0.45% of all branches (61.43%)
18355798588 slots:u # 20.895 G/sec (69.17%)
3328525030 topdown-retiring:u # 17.5% Retiring (69.17%)
6982401815 topdown-bad-spec:u # 36.6% Bad Speculation (69.17%)
1297505291 topdown-fe-bound:u # 6.8% Frontend Bound (69.17%)
7459691283 topdown-be-bound:u # 39.1% Backend Bound (69.17%)
858082535 L1-dcache-loads:u # 976.773 M/sec (69.28%)
430569310 L1-dcache-load-misses:u # 50.18% of all L1-dcache accesses (69.40%)
15723297 LLC-loads:u # 17.898 M/sec (69.49%)
73709 LLC-load-misses:u # 0.47% of all LL-cache accesses (69.50%)
<not supported> L1-icache-loads:u
38705486 L1-icache-load-misses:u # 0.00% of all L1-icache accesses (30.72%)
860276161 dTLB-loads:u # 979.270 M/sec (30.60%)
86213 dTLB-load-misses:u # 0.01% of all dTLB cache accesses (30.51%)
<not supported> iTLB-loads:u
91069 iTLB-load-misses:u # 0.00% of all iTLB cache accesses (30.50%)
<not supported> L1-dcache-prefetches:u
<not supported> L1-dcache-prefetch-misses:u

0.878946776 seconds time elapsed

0.852205000 seconds user
0.026744000 seconds sys

Notice how the number of instructions is actually slightly higher in the faster code, which makes sense since it is less vectorized, but the much higher instructions-per-cycle count makes it faster overall. There are about five times as many LLC loads in the slower version, most of which miss, while here almost all hit. I'm not sure how to interpret the topdown-bad-spec counter. Maybe someone else can comment on that.


The CPU even clocks down (this is reproducible), which supports the idea that it is simply waiting on memory.


Further, notice the sys time in the last line. lorz2 spends roughly 29% of its runtime (0.80 s of 2.73 s) in kernel space. Since it doesn't do anything IO-related, that is all memory allocation and deallocation overhead.
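One way to attack that allocation overhead would be to reuse a scratch buffer via the out= argument of the ufuncs. A minimal sketch (my own illustration, not part of the original answer; lorz2_out and work are hypothetical names):

def lorz2_out(freq_series, freq, fwhm, amp, work=None):
    # Reuse one (n_peaks, n_samples) scratch buffer across calls to avoid
    # repeatedly allocating and freeing the ~19 MiB temporaries.
    if work is None:
        work = np.empty((freq.size, freq_series.size))
    np.subtract(freq_series, freq[:, None], out=work)    # freq_series - freq
    np.square(work, out=work)
    work += 0.25 * fwhm[:, None]**2
    work *= 2 * np.pi                                    # denominator done
    np.divide(fwhm[:, None], work, out=work)             # work now holds lor
    return (amp / np.linalg.norm(work, axis=1)) @ work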


We can look a bit further at the stall reasons:



perf stat -e cycles,l1d_pend_miss.l2_stall,cycle_activity.stalls_l3_miss ./lorz2.py

Performance counter stats for './lorz2.py':

8446540078 cycles:u
1953955881 l1d_pend_miss.l2_stall:u
3050292324 cycle_activity.stalls_l3_miss:u

2.748141433 seconds time elapsed

1.959570000 seconds user
0.788443000 seconds sys

perf stat -e cycles,l1d_pend_miss.l2_stall,cycle_activity.stalls_l3_miss ./lorz3.py

Performance counter stats for './lorz3.py':

3674547216 cycles:u
303870088 l1d_pend_miss.l2_stall:u
16939496 cycle_activity.stalls_l3_miss:u

0.869909182 seconds time elapsed

0.848122000 seconds user
0.021752000 seconds sys

So, the lorz2 version just stalls constantly on level 2 or 3 cache misses.

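If memory traffic is the limiting factor, another option is to chunk the sample axis so every intermediate stays around cache size. A sketch of a blocked variant (my own illustration, not something I benchmarked; lorz2_blocked and the block size are made up):

def lorz2_blocked(freq_series, freq, fwhm, amp, block=4096):
    n = freq_series.size
    norm_sq = np.zeros(freq.size)
    # Pass 1: accumulate the squared row norms blockwise.
    for i in range(0, n, block):
        s = freq_series[i:i+block]
        lor = fwhm[:, None] / ((2*np.pi) * ((s - freq[:, None])**2 + fwhm[:, None]**2/4))
        norm_sq += np.sum(lor*lor, axis=1)
    scale = amp / np.sqrt(norm_sq)
    out = np.empty(n)
    # Pass 2: weighted sum over the peaks, again blockwise.
    for i in range(0, n, block):
        s = freq_series[i:i+block]
        lor = fwhm[:, None] / ((2*np.pi) * ((s - freq[:, None])**2 + fwhm[:, None]**2/4))
        out[i:i+block] = scale @ lor
    return out

Note that this computes every Lorentzian twice (once for the norms, once for the sum), trading FLOPs for locality, so whether it actually wins would have to be measured.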


We can further look at a simple perf report:


perf record ./lorz2.py
perf report


# Overhead Command Shared Object Symbol
# ........ ....... ................................................. ...............................................
#
32.44% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_multiply
27.03% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_divide
17.34% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_add
6.93% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_subtract
6.12% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_pairwise_sum
5.32% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_square
0.51% python3 [unknown] [k] 0xffffffffb2800fe7
0.46% python3 libpython3.11.so.1.0 [.] _PyEval_EvalFrameDefault
0.19% python3 libpython3.11.so.1.0 [.] unicodekeys_lookup_unicode
0.18% python3 libpython3.11.so.1.0 [.] gc_collect_main
...

perf record ./lorz3.py
perf report

# Overhead Command Shared Object Symbol
# ........ ....... ................................................. .............................................
#
27.56% python3 libblas.so.3.11.0 [.] ddot_
27.47% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_divide
8.64% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_subtract
8.34% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_add
5.84% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_multiply
3.70% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_square
1.47% python3 libpython3.11.so.1.0 [.] _PyEval_EvalFrameDefault
1.38% python3 libgcc_s.so.1 [.] execute_cfa_program
0.89% python3 libgcc_s.so.1 [.] uw_update_context_1
0.88% python3 libgcc_s.so.1 [.] _Unwind_Find_FDE
0.64% python3 libgcc_s.so.1 [.] uw_frame_state_for
0.61% python3 _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] ufunc_generic_fastcall
...

Huh, that's interesting. Where does the dot product come from? I assume this is how linalg.norm is implemented for simple vectors.

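A quick check of that assumption (my sketch): for a 1-D array the Euclidean norm is the square root of the array's dot product with itself, and NumPy hands that dot product to BLAS, which would explain the ddot_ entry in the profile.

import numpy as np

v = np.random.rand(50000)
print(np.linalg.norm(v))        # Euclidean norm of a 1-D array...
print(np.sqrt(np.dot(v, v)))    # ...equals sqrt(v . v), computed via BLAS ddot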


Incidentally, we can speed up the lorz2 and lorz3 versions slightly via three measures:



  1. Fold multiplication and summation into one matrix multiplication

  2. Reorder some operations to execute them on smaller arrays (or scalars)

  3. Replace divisions with multiplications by the inverse


def lorz2a(freq_series, freq, fwhm, amp):
    numerator = fwhm[:,None] * (0.5 / np.pi)
    denominator = (freq_series - freq[:,None])**2 + 0.25 * fwhm[:,None]**2
    lor = numerator / denominator
    return (amp / np.linalg.norm(lor, axis=1)) @ lor


def lorz3a(freq_series, freq, fwhm, amp):
    numerator = fwhm * (0.5 / np.pi)
    denominator = (freq_series - freq)**2 + 0.25 * fwhm**2
    lor = numerator / denominator
    main_peak = amp / np.linalg.norm(lor) * lor
    return main_peak
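As a quick sanity check (my addition, using the arrays defined earlier), the two optimized variants should agree with each other:

ref = sum(lorz3a(series, f, w, a) for f, w, a in zip(freq, fwhm, amps))
assert np.allclose(lorz2a(series, freq, fwhm, amps), ref)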

This does not change the overall trends, however.


In conclusion


Numpy vectorization primarily helps reduce per-call overhead. Once the arrays are large enough, we don't get much benefit from it, since the remaining interpreter overhead is small compared to the computation itself. At the same time, larger arrays result in reduced memory efficiency. Typically there is a sweet spot somewhere around the L2 or L3 cache size. The lorz3 implementation hits this spot better than the others.


For a smaller series size and a larger size of the other arrays, we can expect lorz2 to perform better. For example, this data set makes my lorz2a faster than my lorz3a:


series = np.linspace(0,100,1000)
freq = np.random.uniform(5,50,2000)
fwhm = np.random.uniform(0.01,0.05,2000)
amps = np.random.uniform(5,500,2000)

Numpy's simple, eager evaluation scheme puts the onus of tuning for this on the user. Other libraries like NumExpr try to avoid this.

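For illustration, a hypothetical NumExpr variant of lorz2a (a sketch I haven't benchmarked; it assumes the numexpr package and relies on NumExpr broadcasting the (50, 1) operands; if that fails, pre-broadcast them with np.broadcast_to):

import numexpr as ne

def lorz2_ne(freq_series, freq, fwhm, amp):
    # NumExpr evaluates the fused expression in cache-sized blocks
    # instead of materializing several full-size temporaries.
    f = freq[:, None]
    w = fwhm[:, None]
    lor = ne.evaluate("w / (2*pi*((freq_series - f)**2 + w**2/4))",
                      local_dict={"freq_series": freq_series,
                                  "f": f, "w": w, "pi": np.pi})
    return (amp / np.linalg.norm(lor, axis=1)) @ lor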


More comments

Yes, I've often noted that a few iterations (e.g. 5) on a complex task can be faster than the equivalent 'whole array' code. I attribute it to the increased complexity of handling larger arrays, though others can explain it better in terms of memory layout, paging, and caching.

What a detailed answer! I highly appreciate it. Learnt quite a few things today. Thank you.

