
performance - TensorFlow vs. NumPy performance comparison

Reposted · Author: 行者123 · Updated: 2023-12-02 00:47:26

I compute the mean and standard deviation in numpy. To improve performance, I tried doing the same in TensorFlow, but TensorFlow is at least 10x slower. I tried two approaches in TensorFlow (code below). The first approach uses tf.nn.moments(), which has a bug that sometimes makes it return a negative value for the variance. In the second approach I compute the variance from other TensorFlow functions.

I tried both CPU-only and GPU; numpy is always faster.

I used time.time() instead of time.clock() so that the measurement is wall-clock time when the GPU is in use.
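As an aside: on modern Python, time.perf_counter() is the recommended high-resolution wall-clock timer for benchmarking (time.clock() was deprecated in 3.3 and removed in 3.8). A minimal sketch of the Timer context manager used below, rewritten around it:

```python
import time

class Timer:
    """Context manager that records elapsed wall-clock time in self.interval."""
    def __enter__(self):
        self.start = time.perf_counter()  # monotonic, high resolution
        return self

    def __exit__(self, *args):
        self.interval = time.perf_counter() - self.start

# Usage: time a cheap operation
with Timer() as t:
    sum(range(1000))
print('elapsed', t.interval, 'seconds')
```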

Why is TensorFlow slower? I thought it might be due to transferring the data to the GPU, but TF is slower even for very small datasets (where transfer time should be negligible), and even when using only the CPU. Could this be due to the overhead of initializing TensorFlow?

import tensorflow as tf
import numpy
import time
import math

class Timer:
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.interval = self.end - self.start

inData = numpy.random.uniform(low=-1, high=1, size=(40000000,))

with Timer() as t:
    mean = numpy.mean(inData)
print('python mean', mean, 'time', t.interval)

with Timer() as t:
    stdev = numpy.std(inData)
print('python stdev', stdev, 'time', t.interval)

# Approach 1 (note: tf.nn.moments() has a bug that can return a negative variance)
with Timer() as t:
    with tf.Graph().as_default():
        meanTF, varianceTF = tf.nn.moments(tf.constant(inData), axes=[0])
        init_op = tf.global_variables_initializer()
        with tf.Session() as sess:
            sess.run(init_op)
            mean, variance = sess.run([meanTF, varianceTF])
print('variance', variance)
stdev = math.sqrt(variance)
print('tensorflow mean', mean, 'stdev', stdev, 'time', t.interval)

# Approach 2
with Timer() as t:
    with tf.Graph().as_default():
        inputVector = tf.constant(inData)
        meanTF = tf.reduce_mean(inputVector)
        length = tf.size(inputVector)
        # Use the in-graph meanTF tensor, not the numpy `mean` computed above
        varianceTF = tf.divide(tf.reduce_sum(tf.squared_difference(inputVector, meanTF)),
                               tf.to_double(length))
        init_op = tf.global_variables_initializer()
        with tf.Session() as sess:
            sess.run(init_op)
            mean, variance = sess.run([meanTF, varianceTF])
print('variance', variance)
stdev = math.sqrt(variance)
print('tensorflow mean', mean, 'stdev', stdev, 'time', t.interval)
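The negative variances from tf.nn.moments() are consistent with the one-pass formula var = E[x²] − E[x]², which suffers catastrophic cancellation in float32 when the mean is large relative to the spread. A NumPy-only sketch of the effect (the data and scales here are illustrative, not taken from the question):

```python
import numpy as np

rng = np.random.RandomState(0)
# float32 data with a large mean and small spread: worst case for cancellation
x = (1e4 + rng.randn(100000)).astype(np.float32)

# One-pass formula: E[x^2] - E[x]^2, computed entirely in float32
m = np.mean(x)
naive_var = np.mean(x * x) - m * m

# Two-pass formula: E[(x - mean)^2], numerically stable
stable_var = np.mean((x - m) ** 2)

print('one-pass :', naive_var)   # wildly wrong, possibly even negative
print('two-pass :', stable_var)  # close to the true variance of ~1.0
```

Both E[x²] and E[x]² land near 1e8, where adjacent float32 values are 8 apart, so their difference cannot resolve a true variance of 1.0; the two-pass form subtracts the mean before squaring and avoids the problem.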

Best Answer

Here is a slightly better benchmark. It was run on a Xeon v3 with a CPU-only TensorFlow build compiled with all optimization options + XLA from here, against the MKL-backed numpy that ships with the latest Anaconda.

XLA probably doesn't make much difference here, but it is left in for posterity.

Caveats:

  1. Exclude the first few runs from the timing; they can include initialization/profiling.

  2. Use a variable to avoid copying the input into the TensorFlow runtime on every run.

  3. Perturb the variable between calls to make sure nothing is cached.
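The warm-up-and-discard pattern from caveat 1, combined with reporting min and median rather than a single run, can be packaged into a small helper. A sketch using only NumPy and the standard library (the helper name `bench` is my own, not from the answer):

```python
import time
import numpy as np

def bench(fn, num_tries=10, discard=2):
    """Time fn() num_tries times; drop the first `discard` warm-up runs
    and return (min, median) in milliseconds."""
    times = []
    for _ in range(num_tries):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    times = times[discard:]  # the first runs may include one-time setup
    return 1e3 * min(times), 1e3 * float(np.median(times))

data = np.random.uniform(-1, 1, size=1_000_000).astype(np.float32)
best, typical = bench(lambda: np.mean(data))
print("numpy mean: best %.2f ms, median %.2f ms" % (best, typical))
```

Reporting the minimum shows the best the machine can do, while the median is robust to occasional scheduling hiccups.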

Results:

     numpy 23.5 ms, 25.7 ms
        tf 14.7 ms, 20.5 ms

Code:

import numpy as np
import tensorflow as tf
import time
from tensorflow.contrib.compiler import jit
jit_scope = jit.experimental_jit_scope

inData = np.random.uniform(low=-1, high=1, size=(40000000,)).astype(np.float32)
#inDataFeed = tf.placeholder(inData.dtype)

with jit_scope(compile_ops=True):
    inDataVar = tf.Variable(inData)
    meanTF = tf.reduce_mean(inDataVar)


sess = tf.Session()
sess.run(tf.global_variables_initializer())
num_tries = 10


times = []
for i in range(num_tries):
    t0 = time.perf_counter()
    mean = np.mean(inData)
    times.append(time.perf_counter() - t0)

print("%10s %.1f ms, %.1f ms" % ("numpy", 10**3*min(times),
                                 10**3*np.median(times)))

times = []
perturb = inDataVar.assign_add(tf.random_uniform(inData.shape))
for i in range(num_tries):
    sess.run(perturb)
    t0 = time.perf_counter()
    mean, = sess.run([meanTF])
    times.append(time.perf_counter() - t0)

times = times[2:]  # discard the first few runs because they could include profiling
print("%10s %.1f ms, %.1f ms" % ("tf", 10**3*min(times),
                                 10**3*np.median(times)))

Regarding performance - TensorFlow vs. NumPy performance comparison, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42702586/
