
python - Numerical stability of tf.reduce_mean


tf.reduce_mean shows worse numerical stability than np.mean, on both CPU and GPU.

Does tf.reduce_mean run into numerical problems whenever the sum exceeds the range of the floating-point type?

Is there a better way to compute the mean of a float16 array in TensorFlow?

Results (CPU, tf 1.13.1, Linux):

np.mean 64: 0.499978537075602
np.sum 64: 499978.53707560204
np.mean 16: 0.5
np.sum 16: inf
tf.reduce_mean 16: nan

Results (GPU, compute capability 5.2, tf 1.13.1, CUDA 10.1, Linux):

np.mean 64: 0.500100701606694
np.sum 64: 500100.7016066939
np.mean 16: 0.5
np.sum 16: inf
tf.reduce_mean 16: nan

Results (GPU, compute capability 7.0, tf 1.13.1, CUDA 9.0, Linux):

np.mean 64: 0.4996047117607758
np.sum 64: 499604.7117607758
np.mean 16: 0.4995
np.sum 16: inf
tf.reduce_mean 16: nan

Test script:

"""
Test numerical stability of reduce_mean
"""

import numpy as np
import tensorflow as tf


N = int(1e6)
dtype = np.float16

x = np.random.random(size=N)

print("np.mean 64:", np.mean(x))
print("np.sum 64:", np.sum(x))
x = x.astype(np.float16)
mean16 = np.mean(x)
print("np.mean 16:", np.mean(x))
print("np.sum 16:", np.sum(x))

with tf.Session() as sess:
x = tf.constant(x, dtype=np.float16)
print("tf.reduce_mean 16:",
sess.run(tf.reduce_mean(x)))

Best answer

From the numpy documentation:

By default, float16 results are computed using float32 intermediates for extra precision.
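
This float32-intermediate behaviour is easy to verify directly (a minimal sketch; the overflow in the second call mirrors the np.sum 16 result above):

import numpy as np

x = np.random.random(size=int(1e6)).astype(np.float16)

# Default: the float16 input is accumulated in float32, so the mean stays near 0.5.
print(np.mean(x))

# Forcing a float16 accumulator via the dtype parameter overflows, just like np.sum 16 above.
print(np.mean(x, dtype=np.float16))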

From the tensorflow documentation:

Please note that np.mean has a dtype parameter that could be used to specify the output type. By default this is dtype=float64. On the other hand, tf.reduce_mean has an aggressive type inference from input_tensor...

So there is probably no better way than sess.run(tf.reduce_mean(tf.cast(x, np.float32))).
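
A minimal sketch of that workaround, in the same TF 1.x style as the test script above (cast up to float32 for the reduction, then cast the scalar back down if float16 output is needed):

import numpy as np
import tensorflow as tf

x = np.random.random(size=int(1e6)).astype(np.float16)

with tf.Session() as sess:
    x16 = tf.constant(x, dtype=tf.float16)
    # Accumulate in float32 so the sum stays well within range,
    # then cast the scalar result back to float16 if that dtype is required.
    mean32 = tf.reduce_mean(tf.cast(x16, tf.float32))
    mean16 = tf.cast(mean32, tf.float16)
    print("tf.reduce_mean via float32:", sess.run(mean16))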

Regarding "python - Numerical stability of tf.reduce_mean", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56726448/
