
c++ - Why does multithreaded code run slower on a faster machine?

Reposted. Author: 塔克拉玛干. Updated: 2023-11-03 01:12:51

Consider the following C++ code:

#include "threadpool.hpp"
#include <chrono>
#include <list>
#include <iostream>
#include <cmath>

int loop_size;

void process(int num) {
    double x = 0;
    double sum = 0;
    for (int i = 0; i < loop_size; ++i) {
        x += 0.0001;
        sum += sin(x) / cos(x) + cos(x) * cos(x);
    }
}

int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << argv[0] << " [thread_pool_size] [threads] [sleep_time]" << std::endl;
        exit(0);
    }
    thread_pool* pool = nullptr;
    int th_count = std::atoi(argv[1]);
    if (th_count != 0) {
        pool = new thread_pool(th_count);
    }
    loop_size = std::stoi(argv[3]);
    int max = std::stoi(argv[2]);
    auto then = std::chrono::steady_clock::now();
    std::list<std::thread> ths;
    if (th_count == 0) {
        for (int i = 0; i < max; ++i) {
            ths.emplace_back(&process, i);
        }
        for (std::thread& t : ths) {
            t.join();
        }
    } else {
        for (int i = 0; i < max; ++i) {
            pool->enqueue(std::bind(&process, i));
        }
        delete pool;
    }
    int diff = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - then).count();
    std::cerr << "Time: " << diff << '\n';
    return 0;
}

"threadpool.hpp" is a modified version of this github repo, available here.

I compiled the above code on my machine (a Core i7-6700) and on an 88-core server (2x Xeon E5-2696 v4), and got results I cannot explain.

This is how I run the code:

tp <threadpool size> <number of threads> <iterations>

The same code runs slower on the faster machine! My local machine has 8 cores and the remote server has 88 cores; these are the results. (The last two columns show the average completion time, in milliseconds, on each machine.)

+============+=========+============+=============+====================+
| Threadpool | Threads | Iterations | Corei7-6700 | 2x Xeon E5-2696 v4 |
+============+=========+============+=============+====================+
|        100 |  100000 |       1000 |        1300 |               6000 |
+------------+---------+------------+-------------+--------------------+
|       1000 |  100000 |       1000 |        1400 |               5000 |
+------------+---------+------------+-------------+--------------------+
|      10000 |  100000 |       1000 |        1470 |               3400 |
+------------+---------+------------+-------------+--------------------+

It seems that having more cores makes the code run slower. So I restricted the CPU affinity on the server to 8 cores (using taskset) and ran the code again:

taskset -c 0-7 tp <threadpool size> <number of threads> <iterations>

Here is the new data:

+============+=========+============+=============+====================+
| Threadpool | Threads | Iterations | Corei7-6700 | 2x Xeon E5-2696 v4 |
+============+=========+============+=============+====================+
|        100 |  100000 |       1000 |        1300 |                900 |
+------------+---------+------------+-------------+--------------------+
|       1000 |  100000 |       1000 |        1400 |               1000 |
+------------+---------+------------+-------------+--------------------+
|      10000 |  100000 |       1000 |        1470 |               1070 |
+------------+---------+------------+-------------+--------------------+

I tested the same code on a 32-core Xeon and an older 22-core Xeon machine, and the pattern was similar: with fewer cores, the multithreaded code runs faster. But why?

Important note: this is an attempt to solve my original question:

Why having more and faster cores makes my multithreaded software slower?

Notes:

  1. The OS and compiler are the same on all machines: Debian 9.0 amd64 running kernel 4.9.0-3, gcc 6.3.0 20170516
  2. No extra flags, default optimization: g++ ./threadpool.cpp -o ./tp -lpthread (note that g++'s default is -O0, i.e. no optimization)

Best Answer

You are enqueueing a large number of workers into the thread pool, each of which does only a tiny amount of work. As a consequence, the implementation of the thread pool (not the actual work) becomes the bottleneck, in particular the way its mutex handles contention. I tried replacing thread_pool with folly::CPUThreadPoolExecutor, which helped:

thread_pool version:
2180 ms | thread_pool_size=100 num_workers=100000 loop_size=1000 affinity=0-23
2270 ms | thread_pool_size=1000 num_workers=100000 loop_size=1000 affinity=0-23
2400 ms | thread_pool_size=10000 num_workers=100000 loop_size=1000 affinity=0-23
530 ms | thread_pool_size=100 num_workers=100000 loop_size=1000 affinity=0-7
1930 ms | thread_pool_size=1000 num_workers=100000 loop_size=1000 affinity=0-7
2300 ms | thread_pool_size=10000 num_workers=100000 loop_size=1000 affinity=0-7
folly::CPUThreadPoolExecutor version:
830 ms | thread_pool_size=100 num_workers=100000 loop_size=1000 affinity=0-23
780 ms | thread_pool_size=1000 num_workers=100000 loop_size=1000 affinity=0-23
800 ms | thread_pool_size=10000 num_workers=100000 loop_size=1000 affinity=0-23
880 ms | thread_pool_size=100 num_workers=100000 loop_size=1000 affinity=0-7
1130 ms | thread_pool_size=1000 num_workers=100000 loop_size=1000 affinity=0-7
1120 ms | thread_pool_size=10000 num_workers=100000 loop_size=1000 affinity=0-7

I suggest that you (1) do more work in each thread; (2) use about as many threads as there are CPUs; and (3) use a better thread pool. Let's set thread_pool_size to the number of CPUs and multiply loop_size by 10:

thread_pool version:
1880 ms | thread_pool_size=24 num_workers=100000 loop_size=10000 affinity=0-23
4100 ms | thread_pool_size=8 num_workers=100000 loop_size=10000 affinity=0-7
folly::CPUThreadPoolExecutor version:
1520 ms | thread_pool_size=24 num_workers=100000 loop_size=10000 affinity=0-23
2310 ms | thread_pool_size=8 num_workers=100000 loop_size=10000 affinity=0-7

Note that by increasing the amount of work per thread by 10x, we actually made the thread_pool version faster, while the folly::CPUThreadPoolExecutor version only took about 2x the time. Let's multiply loop_size by 10 once more:

thread_pool version:
28695 ms | thread_pool_size=24 num_workers=100000 loop_size=100000 affinity=0-23
81600 ms | thread_pool_size=8 num_workers=100000 loop_size=100000 affinity=0-7
folly::CPUThreadPoolExecutor version:
6830 ms | thread_pool_size=24 num_workers=100000 loop_size=100000 affinity=0-23
14400 ms | thread_pool_size=8 num_workers=100000 loop_size=100000 affinity=0-7

For folly::CPUThreadPoolExecutor, the results speak for themselves: doing more work in each thread gets you closer to the truly linear gains of parallelism. thread_pool, by contrast, seems not up to the task; it cannot properly handle mutex contention at this scale.

Here is the code I used for testing (compiled with gcc 5.5, full optimization):

#include <chrono>
#include <cmath>
#include <iostream>
#include <memory>
#include <vector>

#define USE_FOLLY 1

#if USE_FOLLY
#include <folly/executors/CPUThreadPoolExecutor.h>
#include <folly/futures/Future.h>
#else
#include "threadpool.hpp"
#endif

int loop_size;
thread_local double dummy = 0.0;

void process(int num) {
    double x = 0;
    double sum = 0;
    for (int i = 0; i < loop_size; ++i) {
        x += 0.0001;
        sum += sin(x) / cos(x) + cos(x) * cos(x);
    }
    dummy += sum;  // prevent the compiler from optimizing the loop away
}

int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::cerr << argv[0] << " [thread_pool_size] [threads] [sleep_time]"
                  << std::endl;
        exit(0);
    }
    int th_count = std::atoi(argv[1]);
#if USE_FOLLY
    auto executor = std::make_unique<folly::CPUThreadPoolExecutor>(th_count);
#else
    auto pool = std::make_unique<thread_pool>(th_count);
#endif
    loop_size = std::stoi(argv[3]);
    int max = std::stoi(argv[2]);

    auto then = std::chrono::steady_clock::now();
#if USE_FOLLY
    std::vector<folly::Future<folly::Unit>> futs;
    for (int i = 0; i < max; ++i) {
        futs.emplace_back(folly::via(executor.get()).then([i]() { process(i); }));
    }
    folly::collectAll(futs).get();
#else
    for (int i = 0; i < max; ++i) {
        pool->enqueue([i]() { process(i); });
    }
    pool = nullptr;
#endif

    int diff = std::chrono::duration_cast<std::chrono::milliseconds>(
                   std::chrono::steady_clock::now() - then)
                   .count();
    std::cerr << "Time: " << diff << '\n';
    return 0;
}

Regarding "c++ - Why does multithreaded code run slower on a faster machine?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51390084/
