
c# - Why does Parallel.For give such a small gain for this particular function?


I can't understand why my "concurrent" implementation of this http://www.codeproject.com/Tips/447938/High-performance-Csharp-byte-array-to-hex-string-t function gains only about 20% in performance.

For convenience, here is the code from that page:

static readonly int[] toHexTable = new int[] {
3145776, 3211312, 3276848, 3342384, 3407920, 3473456, 3538992, 3604528, 3670064, 3735600,
4259888, 4325424, 4390960, 4456496, 4522032, 4587568, 3145777, 3211313, 3276849, 3342385,
3407921, 3473457, 3538993, 3604529, 3670065, 3735601, 4259889, 4325425, 4390961, 4456497,
4522033, 4587569, 3145778, 3211314, 3276850, 3342386, 3407922, 3473458, 3538994, 3604530,
3670066, 3735602, 4259890, 4325426, 4390962, 4456498, 4522034, 4587570, 3145779, 3211315,
3276851, 3342387, 3407923, 3473459, 3538995, 3604531, 3670067, 3735603, 4259891, 4325427,
4390963, 4456499, 4522035, 4587571, 3145780, 3211316, 3276852, 3342388, 3407924, 3473460,
3538996, 3604532, 3670068, 3735604, 4259892, 4325428, 4390964, 4456500, 4522036, 4587572,
3145781, 3211317, 3276853, 3342389, 3407925, 3473461, 3538997, 3604533, 3670069, 3735605,
4259893, 4325429, 4390965, 4456501, 4522037, 4587573, 3145782, 3211318, 3276854, 3342390,
3407926, 3473462, 3538998, 3604534, 3670070, 3735606, 4259894, 4325430, 4390966, 4456502,
4522038, 4587574, 3145783, 3211319, 3276855, 3342391, 3407927, 3473463, 3538999, 3604535,
3670071, 3735607, 4259895, 4325431, 4390967, 4456503, 4522039, 4587575, 3145784, 3211320,
3276856, 3342392, 3407928, 3473464, 3539000, 3604536, 3670072, 3735608, 4259896, 4325432,
4390968, 4456504, 4522040, 4587576, 3145785, 3211321, 3276857, 3342393, 3407929, 3473465,
3539001, 3604537, 3670073, 3735609, 4259897, 4325433, 4390969, 4456505, 4522041, 4587577,
3145793, 3211329, 3276865, 3342401, 3407937, 3473473, 3539009, 3604545, 3670081, 3735617,
4259905, 4325441, 4390977, 4456513, 4522049, 4587585, 3145794, 3211330, 3276866, 3342402,
3407938, 3473474, 3539010, 3604546, 3670082, 3735618, 4259906, 4325442, 4390978, 4456514,
4522050, 4587586, 3145795, 3211331, 3276867, 3342403, 3407939, 3473475, 3539011, 3604547,
3670083, 3735619, 4259907, 4325443, 4390979, 4456515, 4522051, 4587587, 3145796, 3211332,
3276868, 3342404, 3407940, 3473476, 3539012, 3604548, 3670084, 3735620, 4259908, 4325444,
4390980, 4456516, 4522052, 4587588, 3145797, 3211333, 3276869, 3342405, 3407941, 3473477,
3539013, 3604549, 3670085, 3735621, 4259909, 4325445, 4390981, 4456517, 4522053, 4587589,
3145798, 3211334, 3276870, 3342406, 3407942, 3473478, 3539014, 3604550, 3670086, 3735622,
4259910, 4325446, 4390982, 4456518, 4522054, 4587590
};

public static unsafe string ToHex1(byte[] source)
{
    fixed (int* hexRef = toHexTable)
    fixed (byte* sourceRef = source)
    {
        byte* s = sourceRef;
        int resultLen = (source.Length << 1);   // two hex characters per input byte

        var result = new string(' ', resultLen);
        fixed (char* resultRef = result)
        {
            // Each table entry packs both hex characters of a byte into one int,
            // so every source byte is written with a single 32-bit store.
            int* pair = (int*)resultRef;

            // The pre-filled spaces (0x00200020 per int) are never zero; the loop
            // stops when it reads the zeroed memory at the string's null terminator.
            while (*pair != 0)
                *pair++ = hexRef[*s++];
            return result;
        }
    }
}
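
For reference, each entry in toHexTable packs the two UTF-16 hex characters of one byte into a single int: the character for the high nibble sits in the low 16 bits and the character for the low nibble in the high 16 bits, so one 32-bit store writes both characters in the right order on a little-endian machine. A table like that could be generated with something along these lines (my sketch, not part of the linked article):

static int[] BuildToHexTable()
{
    var table = new int[256];
    for (int b = 0; b < 256; b++)
    {
        string hex = b.ToString("X2");        // e.g. 0x0A -> "0A"
        table[b] = hex[0] | (hex[1] << 16);   // first char in the low half, second in the high half
    }
    return table;
}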

And here is my "improvement":

public static unsafe string ToHex1p(byte[] source)
{
    // One chunk per logical processor.
    var chunks = Environment.ProcessorCount;
    var n = (int)Math.Ceiling(source.Length / (double)chunks);

    int resultLen = (source.Length << 1);

    var result = new string(' ', resultLen);

    Parallel.For(0, chunks, k =>
    {
        // This chunk converts source bytes [n * k, l).
        var l = Math.Min(source.Length, (k + 1) * n);
        fixed (char* resultRef = result) fixed (byte* sourceRef = source)
        {
            int from = n * k;
            // End address of this chunk's output, held as an integer
            // (the pointer-to-int cast assumes a 32-bit process).
            int to = (int)resultRef + (l << 2);

            int* pair = (int*)resultRef + from;
            byte* s = sourceRef + from;
            while ((int)pair != to)
                *pair++ = toHexTable[*s++];
        }
    });

    return result;
}


EDIT 1: This is how I timed the functions:

var n = 0xff;   // iterations per test
var s = new System.Diagnostics.Stopwatch();
var d = Enumerable.Repeat<byte>(0xce, (int)Math.Pow(2, 23)).ToArray();   // 8 MiB of input

s.Start();
for (var i = 0; i < n; ++i)
{
    Binary.ToHex1(d);
}
Console.WriteLine(s.ElapsedMilliseconds / (double)n);

s.Restart();
for (var i = 0; i < n; ++i)
{
    Binary.ToHex1p(d);
}
Console.WriteLine(s.ElapsedMilliseconds / (double)n);

Best Answer

Having experimented a bit with your examples, I've concluded that the bulk of the time difference you're seeing is due to GC overhead, and that the initialization overhead in both cases is high enough that the performance difference remains relatively unimportant even once the GC overhead is taken out of the test.

When I switched the order of the tests, the parallel test ended up faster than the non-parallel one. That's the first sign the test isn't fair. :)

When I changed the tests to call GC.Collect() after each one, so that the GC stays quiet during the following test, the parallel version consistently came out ahead, but only barely: in every case the start-up time of each thread was more than half the total duration of the parallel test.
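
For what it's worth, a minimal sketch of that kind of harness (my reconstruction, assuming a helper named Measure; it is not the exact code I ran) might look like this:

static TimeSpan Measure(Action action, int iterations)
{
    // Settle the GC so collections triggered by the previous test are not
    // charged to this one.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
        action();
    sw.Stop();
    return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
}

// Usage, matching the data from EDIT 1:
// Console.WriteLine(Measure(() => Binary.ToHex1(d), 0xff));
// Console.WriteLine(Measure(() => Binary.ToHex1p(d), 0xff));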

As part of my testing, I modified the code so that it tracks the actual time spent inside each thread of the For() version. There I found that the time spent in that code was roughly what I would expect based on the non-parallel version (i.e., reasonably close to the single-threaded time divided by the number of threads).

(One could, of course, also obtain this information with a profiler.)
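
A sketch of that per-thread instrumentation (again my reconstruction, not the exact code) could look like the following, with the conversion body from ToHex1p elided:

int chunks = 2;   // see the Hyper-Threading note at the end of this answer
var overall = System.Diagnostics.Stopwatch.StartNew();
var starts = new TimeSpan[chunks];
var durations = new TimeSpan[chunks];

Parallel.For(0, chunks, k =>
{
    starts[k] = overall.Elapsed;   // start relative to the overall test start
    var sw = System.Diagnostics.Stopwatch.StartNew();
    // ... convert this chunk exactly as in ToHex1p ...
    durations[k] = sw.Elapsed;
});

for (int k = 0; k < chunks; k++)
    Console.WriteLine($"  Thread #{k}: start: {starts[k]}, duration: {durations[k]}");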

Here is the output of the runs I did with GC.Collect(). For the parallel tests it also shows each thread's start time (relative to the start of the overall test) and its duration.

Running the non-parallel version first, then the parallel version:

Single-thread version: 00:00:00.6726813
Parallel version: 00:00:00.6270247
  Thread #0: start: 00:00:00.3343985, duration: 00:00:00.2925963
  Thread #1: start: 00:00:00.3345640, duration: 00:00:00.2805527

Single-thread version: 00:00:00.7027335
Parallel version: 00:00:00.5610246
  Thread #0: start: 00:00:00.3305695, duration: 00:00:00.2304486
  Thread #1: start: 00:00:00.3305857, duration: 00:00:00.2300950

Single-thread version: 00:00:00.6609645
Parallel version: 00:00:00.6143675
  Thread #0: start: 00:00:00.3391491, duration: 00:00:00.2750529
  Thread #1: start: 00:00:00.3391560, duration: 00:00:00.2705631

Single-thread version: 00:00:00.6655265
Parallel version: 00:00:00.6246624
  Thread #0: start: 00:00:00.3227595, duration: 00:00:00.2924611
  Thread #1: start: 00:00:00.3227831, duration: 00:00:00.3018066

Single-thread version: 00:00:00.6815009
Parallel version: 00:00:00.5707794
  Thread #0: start: 00:00:00.3227074, duration: 00:00:00.2480668
  Thread #1: start: 00:00:00.3227330, duration: 00:00:00.2478351

Running the parallel version first, then the non-parallel version:

Parallel version: 00:00:00.5807343
  Thread #0: start: 00:00:00.3397320, duration: 00:00:00.2409767
  Thread #1: start: 00:00:00.3398103, duration: 00:00:00.2408334
Single-thread version: 00:00:00.6974992

Parallel version: 00:00:00.5801044
  Thread #0: start: 00:00:00.3305571, duration: 00:00:00.2495409
  Thread #1: start: 00:00:00.3305746, duration: 00:00:00.2492993
Single-thread version: 00:00:00.7442493

Parallel version: 00:00:00.5845514
  Thread #0: start: 00:00:00.3454512, duration: 00:00:00.2352147
  Thread #1: start: 00:00:00.3454756, duration: 00:00:00.2389522
Single-thread version: 00:00:00.6542540

Parallel version: 00:00:00.5909125
  Thread #0: start: 00:00:00.3356177, duration: 00:00:00.2550365
  Thread #1: start: 00:00:00.3356250, duration: 00:00:00.2552392
Single-thread version: 00:00:00.7609139

Parallel version: 00:00:00.5777678
  Thread #0: start: 00:00:00.3440084, duration: 00:00:00.2337504
  Thread #1: start: 00:00:00.3440323, duration: 00:00:00.2329294
Single-thread version: 00:00:00.6596119

Lessons learned:

  • Performance testing is tricky, especially in a managed environment. Things like garbage collection and just-in-time compilation make it hard to compare apples to apples (a cheap partial mitigation is sketched right after this list).
  • The actual computational cost of converting bytes to characters is completely insignificant compared to the other things the program spends time on, such as preparing and invoking the threads. This particular algorithm does not appear to be worth parallelizing: even if you did get a consistent speed-up, it would be quite marginal relative to all of that surrounding overhead.
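
On the first point, a cheap partial mitigation (my suggestion, not something from the original answer) is to run each method once before timing anything, so the JIT-compilation cost is paid up front, and to settle the GC before the timed runs:

// Warm up: force both methods to be JIT-compiled before any measurement.
Binary.ToHex1(d);
Binary.ToHex1p(d);

// Settle the GC before the timed runs.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();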

One final note: another source of error in this kind of test is Intel's Hyper-Threading, or rather, you'll get misleading results if you size the test by the CPU count that Hyper-Threading reports. For example, I tested this on an Intel i5-based laptop that reports 4 cores, but running four threads comes nowhere near a 4x speed-up over the non-parallel implementation, whereas running two threads gets close to 2x (for the right kind of problem). That is why I tested with only two threads even though my machine reports 4 CPUs.

Here, there is so much other misleading overhead in the test that I don't think Hyper-Threading made much of a difference, but it's something to watch out for.
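
If you want to cap the parallel version at two worker threads regardless of what Environment.ProcessorCount reports, Parallel.For accepts a ParallelOptions argument; here is a sketch of how ToHex1p could be adjusted (the answer itself does not show this):

int chunks = 2;
var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
Parallel.For(0, chunks, options, k =>
{
    // ... per-chunk conversion exactly as in ToHex1p ...
});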

Regarding "c# - Why does Parallel.For give such a small gain for this particular function?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27135312/
