
machine-learning - Choosing a GeForce or Quadro GPU for machine learning with TensorFlow

Reposted. Author: 行者123. Updated: 2023-11-30 08:20:32

Is there a noticeable difference in TensorFlow performance between a Quadro GPU and a GeForce GPU?

For example, does TensorFlow use double-precision operations, or anything else that would hurt performance on GeForce cards?

I am about to buy a GPU for TensorFlow and would like to know whether a GeForce is suitable. Thanks in advance for your help.

Best Answer

I think the GeForce TITAN is great and is widely used in machine learning (ML). In ML, single precision is enough in most cases.
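As a rough illustration of why single precision pays off (a minimal NumPy sketch, not part of the original answer): float32 halves the memory footprint relative to float64, which on a GPU also roughly halves the memory bandwidth each operation needs.

```python
import numpy as np

# float64 is NumPy's default, just as "double" is R's default
weights64 = np.random.randn(1024, 1024)

# Single-precision copy: same values (to float32 accuracy), half the bytes
weights32 = weights64.astype(np.float32)

print(weights64.nbytes)  # 8388608 bytes (8 MiB)
print(weights32.nbytes)  # 4194304 bytes (4 MiB)
```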

For more details on the performance of the GTX series (currently GeForce 10), see the Wikipedia article here.

Other sources on the web support this claim. Here is a quote from doc-ok in 2013 (permalink):

For comparison, an “entry-level” $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance-equivalent to a GeForce GTX 680 I could find was a Quadro 6000 for a whopping $3660.

Specific to ML, including deep learning, there is a Kaggle forum discussion dedicated to this subject (December 2014, permalink), comparing the Quadro, GeForce, and Tesla series:

Quadro GPUs aren't for scientific computation, Tesla GPUs are. Quadro cards are designed for accelerating CAD, so they won't help you to train neural nets. They can probably be used for that purpose just fine, but it's a waste of money.

Tesla cards are for scientific computation, but they tend to be pretty expensive. The good news is that many of the features offered by Tesla cards over GeForce cards are not necessary to train neural networks.

For example, Tesla cards usually have ECC memory, which is nice to have but not a requirement. They also have much better support for double precision computations, but single precision is plenty for neural network training, and they perform about the same as GeForce cards for that.

One useful feature of Tesla cards is that they tend to have a lot more RAM than comparable GeForce cards. More RAM is always welcome if you're planning to train bigger models (or use RAM-intensive computations like FFT-based convolutions).

If you're choosing between Quadro and GeForce, definitely pick GeForce. If you're choosing between Tesla and GeForce, pick GeForce, unless you have a lot of money and could really use the extra RAM.

Note: pay attention to the platform you are using and its default precision. For example, here in the CUDA forums (August 2016), a developer with two Titan X cards (GeForce series) saw no performance gain from their R or Python scripts. The diagnosis was that R defaults to double precision, which runs worse on the new GPUs than on the CPU (a Xeon processor); Tesla GPUs are considered the best performers for double precision. In that case, converting all numbers to float32 improved performance on the TITAN X from 12.437 s (with nvBLAS) to 0.324 s (with gmatrix + float32; see the first benchmark). Quoting that forum discussion:

Double precision performance of Titan X is pretty low.
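The fix the forum thread describes boils down to casting everything to single precision before it reaches the GPU. A minimal sketch in Python (the helper name `to_float32` is hypothetical, not from the discussion):

```python
import numpy as np

def to_float32(arrays):
    """Cast arrays to single precision before handing them to GPU code.

    On GeForce cards (e.g. TITAN X), double-precision throughput is a
    small fraction of single-precision, so float64 inputs (the default
    in R and NumPy) can silently run far slower than expected.
    """
    return [np.asarray(a, dtype=np.float32) for a in arrays]

a = np.ones((4, 4))      # float64 by default, like R's "double"
b, = to_float32([a])
print(b.dtype)           # float32
```

The same principle applies in TensorFlow, whose tensors default to float32 precisely because that is enough for neural network training.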

For machine-learning - Choosing a GeForce or Quadro GPU for machine learning with TensorFlow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34715055/
