c++ - Eigen tensor code is very slow


I am new to Eigen tensors, so I may be doing something wrong. I have code that computes the Z-scores of the difference between two float matrices. I found that it runs about 500 times slower than the same code in Python with numpy. What am I doing wrong?

C++ code:

int scale = atoi(argv[1]);
Eigen::array<int, 2> bbcast({scale, 1});
long startTime = get_nanos();
Eigen::Tensor<float, 2> a(2, 5);
a.setRandom();
Eigen::Tensor<float, 2> b(2, 5);
b.setRandom();
Eigen::Tensor<float, 2> scaled_a = a.broadcast(bbcast);
Eigen::Tensor<float, 2> scaled_b = b.broadcast(bbcast);

Eigen::array<int, 1> dims({0 /* dimension to reduce */});
Eigen::array<int, 2> good_dims{{1,(int)scaled_a.dimension(1)}};
auto means = (scaled_a - scaled_b).mean(dims).reshape(good_dims);
std::cout << means << std::endl;
printf("Calculated means, took %f seconds\n",(float)(get_nanos() - startTime) / 1000000000L);

Eigen::array<int, 2> bcast({(int)scaled_a.dimension(0), 1});
auto submean = (scaled_a - scaled_b) - means.broadcast(bcast);
auto stds = submean.mean(dims).reshape(good_dims).abs().square().mean(dims).reshape(good_dims).sqrt();
std::cout << stds << std::endl;
printf("Calculated std, took %f seconds\n",(float)(get_nanos() - startTime) / 1000000000L);

With 20000 x 5 float matrices, this runs for about 3 seconds on my Linux VM.
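
(get_nanos() is not shown in the question; a minimal sketch, assuming it simply returns a monotonic timestamp in nanoseconds, might look like this:)

#include <chrono>

// Hypothetical reconstruction: assumed to return a monotonic timestamp in nanoseconds
static long get_nanos() {
    using namespace std::chrono;
    return static_cast<long>(
        duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count());
}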

Python code:

import numpy as np
import time
start = time.time()
a = np.random.rand(2*10000,5)
b = np.random.rand(2*10000,5)
stds = np.std(a - b, axis = 0)
means = np.mean(a - b, axis = 0)
#diffs = np.sum(np.abs(net_out - correct_out)/stds,axis=1)
diffs = np.abs(a - b - means)/stds
print(diffs)
print("Took", time.time() - start )

This runs in 0.0068 seconds on the same VM.

Many thanks, Moshe

Best answer

For 2-dimensional tensors it is better to use Matrix or Array, which makes the code much simpler:

ArrayXXd a = ArrayXXd::Random(2*10000,5);
ArrayXXd b = ArrayXXd::Random(2*10000,5);
auto means = (a-b).colwise().mean().eval();
auto stds = (((a-b).rowwise()-means).square().colwise().sum() / (a.rows()-1)).sqrt().eval();
ArrayXXd diffs = abs((a-b).rowwise() - means).rowwise()/stds;

Note the .eval() calls on the lines that use auto, see why.
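
A minimal sketch of the pitfall that those .eval() calls avoid (variable names are illustrative): with plain auto, the variable holds an unevaluated expression template that still references its operands, so later changes to those operands change its value.

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::ArrayXf a = Eigen::ArrayXf::Constant(3, 1.f);
    Eigen::ArrayXf b = Eigen::ArrayXf::Constant(3, 2.f);

    auto lazy  = a + b;          // expression template, still references a and b
    auto fixed = (a + b).eval(); // forces evaluation into a concrete ArrayXf

    a.setZero();                 // modifying a afterwards...

    std::cout << lazy(0) << std::endl;  // ...changes what the lazy expression yields (prints 2)
    std::cout << fixed(0) << std::endl; // the evaluated copy is unaffected (prints 3)
}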

Compiled with gcc and -O3, the Array code above takes 0.000324919s on an ordinary laptop (not counting the random-number generation, which may be more expensive but is not representative).

Here is the Tensor version I came up with; again, note the eval() calls:

int n = a.dimension(0);
Eigen::array<int, 1> dims({0 /* dimension to reduce */});
Eigen::array<int, 2> good_dims{{1,(int)a.dimension(1)}};
Eigen::array<int,2> bc({n,1});

auto means = (a - b).mean(dims).eval();
auto submean = (a - b) - means.reshape(good_dims).broadcast(bc);
auto stds = (submean.square().eval().sum(dims) * 1.f/(float(n-1))).sqrt().eval();
diffs = submean.abs() / stds.reshape(good_dims).broadcast(bc);

But it seems to be slower, around 0.007s here. To view a Tensor as an Array, you can use Map:

Map<const ArrayXXf> a(tensor_a.data(), tensor_a.dimension(0), tensor_a.dimension(1));
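
A minimal sketch of how such a Map might be combined with the Array version above (the tensor names and sizes are illustrative, not from the original code):

#include <Eigen/Dense>
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
    // Illustrative tensors; the default Tensor layout (column-major) matches ArrayXXf
    Eigen::Tensor<float, 2> tensor_a(20000, 5), tensor_b(20000, 5);
    tensor_a.setRandom();
    tensor_b.setRandom();

    // View the tensors' storage as 2D float arrays, without copying
    Eigen::Map<const Eigen::ArrayXXf> a(tensor_a.data(), tensor_a.dimension(0), tensor_a.dimension(1));
    Eigen::Map<const Eigen::ArrayXXf> b(tensor_b.data(), tensor_b.dimension(0), tensor_b.dimension(1));

    // Same computation as the Array version above
    auto means = (a - b).colwise().mean().eval();
    auto stds  = (((a - b).rowwise() - means).square().colwise().sum() / float(a.rows() - 1)).sqrt().eval();
    Eigen::ArrayXXf diffs = ((a - b).rowwise() - means).abs().rowwise() / stds;

    std::cout << diffs.rows() << " x " << diffs.cols() << std::endl;
}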

Regarding c++ - Eigen tensor code is very slow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50508058/
