
java - What heuristic does the TPL use to determine when to use multiple cores?

Reposted · Author: 塔克拉玛干 · Updated: 2023-11-03 03:51:38

We know that the TPL (and therefore PLINQ as well) will not use all cores if it decides a task is cheap, and will run it on a single core instead. But it does the same thing even for expensive tasks! For example, here is the code from an article on Java parallelism:

import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.annotations.*;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;
import java.math.BigInteger;

@Warmup(iterations=5)
@Measurement(iterations=10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
@Fork(2)
public class Factorial {
    private static final BigInteger ONE = BigInteger.valueOf(1);

    @Param({"10", "100", "1000", "10000", "50000"})
    private int n;

    public static BigInteger naive(int n) {
        BigInteger r = ONE;
        for (int i = 2; i <= n; ++i)
            r = r.multiply(BigInteger.valueOf(i));
        return r;
    }

    public static BigInteger streamed(int n) {
        if (n < 2) return ONE;
        return IntStream.rangeClosed(2, n).mapToObj(BigInteger::valueOf)
                .reduce(BigInteger::multiply).get();
    }

    public static BigInteger streamedParallel(int n) {
        if (n < 2) return ONE;
        return IntStream.rangeClosed(2, n).parallel().mapToObj(BigInteger::valueOf)
                .reduce(BigInteger::multiply).get();
    }

    public static BigInteger fourBlocks(int n) {
        if (n < 2) return ONE;
        BigInteger r1 = ONE, r2 = ONE, r3 = ONE, r4 = ONE;
        int i;
        for (i = n; i > 4; i -= 4) {
            r1 = r1.multiply(BigInteger.valueOf(i));
            r2 = r2.multiply(BigInteger.valueOf(i - 1));
            r3 = r3.multiply(BigInteger.valueOf(i - 2));
            r4 = r4.multiply(BigInteger.valueOf(i - 3));
        }
        int mult = i == 4 ? 24 : i == 3 ? 6 : i == 2 ? 2 : 1;
        return r1.multiply(r2).multiply(r3.multiply(r4)).multiply(BigInteger.valueOf(mult));
    }

    public static BigInteger streamedShift(int n) {
        if (n < 2) return ONE;
        int p = 0, c = 0;
        while ((n >> p) > 1) {
            p++;
            c += n >> p;
        }
        return IntStream.rangeClosed(2, n).map(i -> i >> Integer.numberOfTrailingZeros(i))
                .mapToObj(BigInteger::valueOf).reduce(BigInteger::multiply).get().shiftLeft(c);
    }

    public static BigInteger streamedParallelShift(int n) {
        if (n < 2) return ONE;
        int p = 0, c = 0;
        while ((n >> p) > 1) {
            p++;
            c += n >> p;
        }
        return IntStream.rangeClosed(2, n).parallel().map(i -> i >> Integer.numberOfTrailingZeros(i))
                .mapToObj(BigInteger::valueOf).reduce(BigInteger::multiply).get().shiftLeft(c);
    }

    @Benchmark
    public void testNaive(Blackhole bh) {
        bh.consume(naive(n));
    }

    @Benchmark
    public void testStreamed(Blackhole bh) {
        bh.consume(streamed(n));
    }

    @Benchmark
    public void testStreamedParallel(Blackhole bh) {
        bh.consume(streamedParallel(n));
    }

    @Benchmark
    public void testFourBlocks(Blackhole bh) {
        bh.consume(fourBlocks(n));
    }

    @Benchmark
    public void testStreamedShift(Blackhole bh) {
        bh.consume(streamedShift(n));
    }

    @Benchmark
    public void testStreamedParallelShift(Blackhole bh) {
        bh.consume(streamedParallelShift(n));
    }
}

And the results:

Benchmark                              (n)  Mode  Cnt       Score       Error  Units
Factorial.testFourBlocks                10  avgt   20       0.409 ±     0.027  us/op
Factorial.testFourBlocks               100  avgt   20       4.752 ±     0.147  us/op
Factorial.testFourBlocks              1000  avgt   20     113.801 ±     7.159  us/op
Factorial.testFourBlocks             10000  avgt   20   10626.187 ±    54.785  us/op
Factorial.testFourBlocks             50000  avgt   20  281522.808 ± 13619.674  us/op
Factorial.testNaive                     10  avgt   20       0.297 ±     0.002  us/op
Factorial.testNaive                    100  avgt   20       5.060 ±     0.036  us/op
Factorial.testNaive                   1000  avgt   20     277.902 ±     1.311  us/op
Factorial.testNaive                  10000  avgt   20   32471.921 ±  1092.640  us/op
Factorial.testNaive                  50000  avgt   20  970355.227 ± 64386.653  us/op
Factorial.testStreamed                  10  avgt   20       0.326 ±     0.002  us/op
Factorial.testStreamed                 100  avgt   20       5.393 ±     0.190  us/op
Factorial.testStreamed                1000  avgt   20     265.550 ±     1.772  us/op
Factorial.testStreamed               10000  avgt   20   29871.366 ±   234.457  us/op
Factorial.testStreamed               50000  avgt   20  894549.237 ±  5453.425  us/op
Factorial.testStreamedParallel          10  avgt   20       6.114 ±     0.500  us/op
Factorial.testStreamedParallel         100  avgt   20      10.719 ±     0.786  us/op
Factorial.testStreamedParallel        1000  avgt   20      72.225 ±     0.509  us/op
Factorial.testStreamedParallel       10000  avgt   20    2811.977 ±    14.599  us/op
Factorial.testStreamedParallel       50000  avgt   20   49501.716 ±   729.646  us/op
Factorial.testStreamedParallelShift     10  avgt   20       6.684 ±     0.549  us/op
Factorial.testStreamedParallelShift    100  avgt   20      11.176 ±     0.779  us/op
Factorial.testStreamedParallelShift   1000  avgt   20      71.056 ±     3.918  us/op
Factorial.testStreamedParallelShift  10000  avgt   20    2641.108 ±   142.571  us/op
Factorial.testStreamedParallelShift  50000  avgt   20   46480.544 ±   405.648  us/op
Factorial.testStreamedShift             10  avgt   20       0.402 ±     0.006  us/op
Factorial.testStreamedShift            100  avgt   20       5.086 ±     0.039  us/op
Factorial.testStreamedShift           1000  avgt   20     237.279 ±     1.566  us/op
Factorial.testStreamedShift          10000  avgt   20   27572.709 ±   135.489  us/op
Factorial.testStreamedShift          50000  avgt   20  874699.213 ± 53645.087  us/op

You can see that the multithreaded version runs up to 19 times faster than the single-threaded one (a Core i7-4702MQ was used). But in the C# version

static BigInteger Streamed(int n)
{
    return n < 2 ? 1 : Enumerable.Range(2, n - 1).Aggregate(BigInteger.One, (acc, elm) => acc * elm);
}

static BigInteger StreamedParallel(int n)
{
    return n < 2 ? 1 : Enumerable.Range(2, n - 1).AsParallel().Aggregate(BigInteger.One, (acc, elm) => acc * elm);
}

this code performs worst of all, which is not surprising: the TPL overhead is paid without any performance gain from multithreading.

So the question is: why is the Java standard multithreading library so smart (any operation taking 100 µs or more gets parallelized, per the guidance at http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html), while C# cannot parallelize an operation that takes 1500 ms on my machine?
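The guidance linked above boils down to comparing total work (element count × per-element cost) against a cutoff on the order of 100 µs. A minimal sketch of such a decision rule follows; the class name, method, and the exact threshold are illustrative assumptions, not the JDK's actual internal logic:

```java
public class ParallelHeuristic {
    // Rough cutoff from the guidance above: parallelize only when the total
    // sequential work exceeds roughly 100 microseconds (100,000 ns).
    // The exact value here is an assumption for illustration.
    static final long THRESHOLD_NS = 100_000;

    // N * Q rule: N elements times Q nanoseconds of work per element.
    static boolean worthParallelizing(long elements, long nsPerElement) {
        return elements * nsPerElement > THRESHOLD_NS;
    }

    public static void main(String[] args) {
        // 10 elements at ~100 ns each = 1 us of work: stay sequential.
        System.out.println(worthParallelizing(10, 100));     // false
        // 50,000 elements at ~500 ns each = 25 ms of work: parallelize.
        System.out.println(worthParallelizing(50_000, 500)); // true
    }
}
```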

I like C# and don't really like Java, which is why this hurts, and I'd like to understand the reason...

Best Answer

When the Aggregate method is used like this, PLinq performs the aggregation sequentially, and therefore on a single thread. Of course the multiplications could be executed in any order, but PLinq cannot guess that. If the operation were division, for example, changing the execution order would change the final result.
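The order-sensitivity point can be checked with plain arithmetic: multiplication is associative, so a parallel runtime may regroup it freely, while division is not, so regrouping changes the answer:

```java
public class OrderMatters {
    public static void main(String[] args) {
        // Multiplication is associative: regrouping does not change the result,
        // which is what lets partial products from different threads be merged.
        System.out.println((2 * 3) * 4 == 2 * (3 * 4)); // true

        // Division is not associative: regrouping changes the result,
        // so a runtime cannot safely split such a fold across threads.
        double leftToRight = (100.0 / 5.0) / 2.0; // 10.0
        double regrouped   = 100.0 / (5.0 / 2.0); // 40.0
        System.out.println(leftToRight == regrouped); // false
    }
}
```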

One way to tell PLinq that the query can be parallelized is to use another Aggregate overload that specifies how to merge results coming from multiple threads:

return n < 2 ? 1 : Enumerable.Range(2, n - 1).AsParallel().Aggregate(BigInteger.One, (acc, elm) => acc * elm, (i, j) => i * j, i => i);

With this version, for n = 100000, the sequential version takes about 9000 ms and the parallel version about 4400 ms. That is nearly twice as fast, which is consistent with my hardware (a dual-core processor).
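For comparison, Java streams expose the same identity/accumulator/combiner pattern through the three-argument reduce overload, which is what makes the parallel factorial in the benchmark above safe to split across threads. A minimal sketch (the class name and helper are illustrative):

```java
import java.math.BigInteger;
import java.util.stream.IntStream;

public class FactorialReduce {
    // Java analog of the PLinq Aggregate overload above: an identity element,
    // a per-element accumulator, and a combiner that merges partial results
    // produced on different threads. All three must agree for parallel use:
    // the accumulator must be associative and the identity truly neutral.
    static BigInteger factorial(int n) {
        return IntStream.rangeClosed(2, n)      // empty for n < 2, yielding ONE
                .parallel()
                .mapToObj(BigInteger::valueOf)
                .reduce(BigInteger.ONE,
                        BigInteger::multiply,   // accumulator
                        BigInteger::multiply);  // combiner for partial products
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```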

You can read this article for more information about how Aggregate works with PLinq: http://blogs.msdn.com/b/pfxteam/archive/2008/01/22/7211660.aspx

Regarding "java - What heuristic does the TPL use to determine when to use multiple cores?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/29669009/
