
java - JMH microbenchmarking a recursive quicksort


Hi, I'm trying to microbenchmark various sorting algorithms, and I've run into a strange problem with JMH and benchmarking quicksort. Perhaps there is something wrong with my implementation. I'd appreciate it if someone could take a look and tell me where the problem is. First of all, I'm using Ubuntu 14.04 with JDK 7 and JMH 0.9.1. Here is how I'm trying to run the benchmark:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 3, time = 1)
@State(Scope.Thread)
public class SortingBenchmark {

    private int length = 100000;

    // Distribution and Sorter come from the project linked at the end of the question
    private Distribution distribution = Distribution.RANDOM;

    private int[] array;

    int i = 1;

    @Setup(Level.Iteration)
    public void setUp() {
        array = distribution.create(length);
    }

    @Benchmark
    public int timeQuickSort() {
        int[] sorted = Sorter.quickSort(array);
        return sorted[i];
    }

    @Benchmark
    public int timeJDKSort() {
        Arrays.sort(array);
        return array[i];
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + SortingBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();

        new Runner(opt).run();
    }
}

There are other algorithms as well, but I've left them out because they all behave more or less fine. Now, for some reason quicksort is extremely slow. Many times slower! And worse: I had to give it more stack space so that it could run without a StackOverflowError. It looks like, for some reason, quicksort is simply making an enormous number of recursive calls. Interestingly, when I simply run the algorithm from my main class, it works fine (with the same random distribution and 100000 elements): no stack increase is needed, and a naive nanotime benchmark shows a time very close to the other algorithms. And under JMH, the JDK sort is very fast and much more consistent with how the other algorithms perform in the naive nanotime benchmark. Am I doing something wrong here, or am I missing something? Here is my quicksort algorithm:

public static int[] quickSort(int[] data) {
    Sorter.quickSort(data, 0, data.length - 1);
    return data;
}

private static void quickSort(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    if (sublistFirstIndex < sublistLastIndex) {
        // move smaller elements before pivot and larger after
        int pivotIndex = partition(data, sublistFirstIndex, sublistLastIndex);
        // apply recursively to sub lists
        Sorter.quickSort(data, sublistFirstIndex, pivotIndex - 1);
        Sorter.quickSort(data, pivotIndex + 1, sublistLastIndex);
    }
}

private static int partition(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    int pivotElement = data[sublistLastIndex];
    int pivotIndex = sublistFirstIndex - 1;
    for (int i = sublistFirstIndex; i < sublistLastIndex; i++) {
        if (data[i] <= pivotElement) {
            pivotIndex++;
            ArrayUtils.swap(data, pivotIndex, i);
        }
    }
    ArrayUtils.swap(data, pivotIndex + 1, sublistLastIndex);
    return pivotIndex + 1; // return index of pivot element
}

Now, I understand that because of my pivot choice my algorithm will be very slow (O(n^2)) if I run it on already-sorted data. But I'm running it on random data, and even when I tried running it on sorted data in my main method, it was much faster than the JMH run on random data. I'm pretty sure I'm missing something here. You can find the complete project with the other algorithms here: https://github.com/ignl/SortingAlgos/

Best Answer

OK, since there really should be an answer here (rather than making readers dig through the comments under the question), I'm putting it here, because I got burned by this myself.

An iteration in JMH is a batch of benchmark method invocations (how many depends on how long an iteration is configured to run). So with @Setup(Level.Iteration), the setup only runs at the start of each batch of invocations. Since the array is sorted after the very first invocation, quicksort is called on its worst case (an already-sorted array) on every subsequent invocation. That's why it takes so long, or blows the stack.
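You can reproduce the diagnosis outside JMH with a small standalone sketch like the one below (the class name and timing code are illustrative; Sorter is the class from the question). The second call sorts an already-sorted array, hits the O(n^2) worst case of the last-element-pivot quicksort, and recurses roughly as deep as the array is long.

import java.util.Random;

public class WorstCaseRepro {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[] data = new int[100000];
        for (int j = 0; j < data.length; j++) {
            data[j] = rnd.nextInt();
        }

        long t0 = System.nanoTime();
        Sorter.quickSort(data);   // random input: fast
        long t1 = System.nanoTime();
        Sorter.quickSort(data);   // already sorted: quadratic, may throw StackOverflowError
        long t2 = System.nanoTime();

        System.out.printf("first sort:  %d ms%n", (t1 - t0) / 1000000);
        System.out.printf("second sort: %d ms%n", (t2 - t1) / 1000000);
    }
}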

So one solution is to use @Setup(Level.Invocation). However, as the Javadoc states:

/**
* Invocation level: to be executed for each benchmark method execution.
*
* <p><b>WARNING: HERE BE DRAGONS! THIS IS A SHARP TOOL.
* MAKE SURE YOU UNDERSTAND THE REASONING AND THE IMPLICATIONS
* OF THE WARNINGS BELOW BEFORE EVEN CONSIDERING USING THIS LEVEL.</b></p>
*
* <p>This level is only usable for benchmarks taking more than a millisecond
* per single {@link Benchmark} method invocation. It is a good idea to validate
* the impact for your case on ad-hoc basis as well.</p>
*
* <p>WARNING #1: Since we have to subtract the setup/teardown costs from
* the benchmark time, on this level, we have to timestamp *each* benchmark
* invocation. If the benchmarked method is small, then we saturate the
* system with timestamp requests, which introduce artificial latency,
* throughput, and scalability bottlenecks.</p>
*
* <p>WARNING #2: Since we measure individual invocation timings with this
* level, we probably set ourselves up for (coordinated) omission. That means
* the hiccups in measurement can be hidden from timing measurement, and
* can introduce surprising results. For example, when we use timings to
* understand the benchmark throughput, the omitted timing measurement will
* result in lower aggregate time, and fictionally *larger* throughput.</p>
*
* <p>WARNING #3: In order to maintain the same sharing behavior as other
* Levels, we sometimes have to synchronize (arbitrage) the access to
* {@link State} objects. Other levels do this outside the measurement,
* but at this level, we have to synchronize on *critical path*, further
* offsetting the measurement.</p>
*
* <p>WARNING #4: Current implementation allows the helper method execution
* at this Level to overlap with the benchmark invocation itself in order
* to simplify arbitrage. That matters in multi-threaded benchmarks, when
* one worker thread executing {@link Benchmark} method may observe other
* worker thread already calling {@link TearDown} for the same object.</p>
*/
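If you do go that route, and a single sort of 100000 elements takes well over a millisecond on your machine, the only change to the question's setUp method is the setup level (a sketch, subject to the warnings quoted above):

@Setup(Level.Invocation)
public void setUp() {
    // a fresh, unsorted array before every single benchmark invocation
    array = distribution.create(length);
}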

Instead, do what Aleksey Shipilev suggested: absorb the cost of copying the array into each benchmark method. Since you are comparing relative performance, this should not affect your results.
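Applied to the benchmark above, that could look like the sketch below. Every benchmark method pays the same Arrays.copyOf cost, so the relative ordering of the algorithms is preserved, while the shared array set up once per iteration is never mutated and stays unsorted.

@Benchmark
public int timeQuickSort() {
    int[] copy = Arrays.copyOf(array, array.length); // copy cost paid identically in every variant
    int[] sorted = Sorter.quickSort(copy);
    return sorted[i];
}

@Benchmark
public int timeJDKSort() {
    int[] copy = Arrays.copyOf(array, array.length);
    Arrays.sort(copy);
    return copy[i];
}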

Regarding java - JMH microbenchmarking a recursive quicksort, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/24573020/
