c++ - Previous code seems to affect the timing of later function calls

Reposted · Author: 塔克拉玛干 · Updated: 2023-11-03 00:42:23

I am trying to benchmark a relatively small part of a larger set of algorithms implemented in C++. Simplified, each algorithm is implemented via two functions (let's call them foo() and bar()) that can be called repeatedly in arbitrary order, and these functions may modify some algorithm-internal data structures. Among other things, I want to compare the performance of the algorithms by measuring the total time spent in foo() and in bar(), separately.
Now I have two algorithms: algorithm A does some work in foo() but very little in bar(), while algorithm B does nothing at all in foo() (foo() is literally an empty function here) but a lot of work in bar(). The unexpected thing I observe is that in many scenarios, algorithm B's total time spent in foo() is larger than algorithm A's. After some debugging, I found that for algorithm B, the first call to foo() after a call to bar() takes a long time, while subsequent calls to foo() tend to be faster.
To isolate this effect, I came up with the following reduction of algorithm B, consisting of an empty function (corresponding to foo()) and two functions with which I try to simulate the work (corresponding to bar(); the "real" bar() basically also just allocates space and iterates over a data structure):
b.h:

#ifndef B_H
#define B_H

void foo_emptyFunction(unsigned long long u); // foo()
void bar_expensiveFunction1(); // bar() - version 1
void bar_expensiveFunction2(); // bar() - version 2

#endif
b.cpp:
#include "b.h"

#include <iostream>
#include <vector>
#include <math.h>


void foo_emptyFunction(unsigned long long)
{
    // nothing
}

void bar_expensiveFunction1() {
    std::vector<unsigned long> vec;
    for (auto i = 0UL; i < 1000000UL; i++) {
        vec.push_back(i);
    }
    std::cout << "Created and filled a vector with " << vec.size() << " elements." << std::endl;
}

void bar_expensiveFunction2() {
    std::vector<unsigned long> vec;
    for (auto i = 1UL; i <= 1000000UL; i++) {
        vec.push_back(i);
    }
    auto sum = 0ULL;
    auto sumSqrts = 0.0;
    for (auto i : vec) {
        sum += i;
        sumSqrts += sqrt(i);
    }
    std::cout << "Sum of elements from " << vec.front()
              << " to " << vec.back() << " is " << sum
              << ", the sum of their square roots is " << sumSqrts << "." << std::endl;
}
I then tried to measure the time needed for several calls to the empty function after the "expensive" calls:
main.cpp:
#include "b.h"

#include <chrono>
#include <thread>

#include <iostream>

#include <math.h>

typedef std::chrono::high_resolution_clock sclock;
typedef unsigned long long time_interval;
typedef std::chrono::duration<time_interval, std::chrono::nanoseconds::period> time_as;

void timeIt() {
    auto start = sclock::now();
    auto end = start;

    for (auto i = 0U; i < 10U; i++) {
        start = sclock::now();
        asm volatile("" ::: "memory");
        foo_emptyFunction(1000ULL);
        asm volatile("" ::: "memory");
        end = sclock::now();
        std::cout << "Call #" << i << " to empty function took " << std::chrono::duration_cast<time_as>(end - start).count() << "ns." << std::endl;
    }
}

int main()
{
    timeIt();

    bar_expensiveFunction1();

    timeIt();

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::cout << "Slept for 100ms." << std::endl;

    timeIt();

    bar_expensiveFunction2();

    timeIt();

    bar_expensiveFunction1();

    timeIt();

    return 0;
}
If I compile (g++ -o test main.cpp b.cpp or g++ -O3 -o test main.cpp b.cpp) and run the code, I get output like this:
./test
Call #0 to empty function took 79ns.
Call #1 to empty function took 57ns.
Call #2 to empty function took 55ns.
Call #3 to empty function took 31ns.
Call #4 to empty function took 35ns.
Call #5 to empty function took 26ns.
Call #6 to empty function took 26ns.
Call #7 to empty function took 36ns.
Call #8 to empty function took 24ns.
Call #9 to empty function took 26ns.
Created and filled a vector with 1000000 elements.
Call #0 to empty function took 84ns.
Call #1 to empty function took 27ns.
Call #2 to empty function took 28ns.
Call #3 to empty function took 27ns.
Call #4 to empty function took 29ns.
Call #5 to empty function took 27ns.
Call #6 to empty function took 29ns.
Call #7 to empty function took 33ns.
Call #8 to empty function took 28ns.
Call #9 to empty function took 23ns.
Slept for 100ms.
Call #0 to empty function took 238ns.
Call #1 to empty function took 106ns.
Call #2 to empty function took 102ns.
Call #3 to empty function took 118ns.
Call #4 to empty function took 199ns.
Call #5 to empty function took 92ns.
Call #6 to empty function took 216ns.
Call #7 to empty function took 118ns.
Call #8 to empty function took 113ns.
Call #9 to empty function took 107ns.
Sum of elements from 1 to 1000000 is 500000500000, the sum of their square roots is 6.66667e+08.
Call #0 to empty function took 126ns.
Call #1 to empty function took 35ns.
Call #2 to empty function took 31ns.
Call #3 to empty function took 30ns.
Call #4 to empty function took 38ns.
Call #5 to empty function took 54ns.
Call #6 to empty function took 29ns.
Call #7 to empty function took 35ns.
Call #8 to empty function took 30ns.
Call #9 to empty function took 29ns.
Created and filled a vector with 1000000 elements.
Call #0 to empty function took 112ns.
Call #1 to empty function took 23ns.
Call #2 to empty function took 23ns.
Call #3 to empty function took 23ns.
Call #4 to empty function took 23ns.
Call #5 to empty function took 22ns.
Call #6 to empty function took 23ns.
Call #7 to empty function took 23ns.
Call #8 to empty function took 24ns.
Call #9 to empty function took 23ns.
I suspect that the differences in running time, in particular the spike on the first call, might be due to some kind of caching effect, but I would really like to understand what is going on here.
EDIT: The effect I observe here is very similar to the one in the real code. There is almost always a huge spike on the first call, and timings are fairly stable from the third call onward. The effect is even more pronounced in the real code, I suspect because B::bar() does more work in reality (it traverses a graph rather than just a list of integers). Unfortunately, the real code is part of a fairly large project, so I cannot post it here. The code above is a considerable simplification of the original, but it seems to show the same effect. In the real code, both foo() and bar() are virtual (I know that this comes with a timing penalty) and live in different compilation units, so the compiler cannot optimize the function calls away. I have also checked the assembly of the real program. I am aware that I inevitably measure the time of calling now() as well — but I use the same benchmarking code for algorithm A (whose foo() implementation at least does something), and the total measured time for A::foo() is smaller...
The optimization level does not seem to have (much of) an influence on this effect; I get the same behavior with clang.
EDIT 2: I also ran the algorithm benchmarks on a dedicated machine (Linux, only system processes, CPU frequency governor set to performance).
Also, I know that for this kind of micro-benchmarking you would normally do things like cache warming and repeating the code section under test many times. Unfortunately, each call to foo() or bar() may modify the internal data structures, so I cannot just repeat them. I would appreciate any suggestions for improvement.
Thanks!

Best answer

I notice that the benchmark performs worse after the sleep. This is probably due to the CPU entering a lower frequency/power mode.

Pin the CPU frequency to its maximum before benchmarking, so that the CPU does not adjust it during the benchmark.

On Linux:

$ sudo cpupower --cpu all frequency-set --related --governor performance

On Windows, set the power plan to "High performance".

Regarding "c++ - Previous code seems to affect the timing of later function calls", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54367890/
