
python - WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2


Before my TensorFlow neural network starts training, the following warning is printed:

WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning.

If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call tf.keras.backend.set_floatx('float64').

To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.


Now, according to the warning message, I could silence it by setting the backend to 'float64'. However, I would like to get to the bottom of this and set the correct dtypes manually.

Full code:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Concatenate
from tensorflow.keras import Model
from sklearn.datasets import load_iris
iris, target = load_iris(return_X_y=True)

X = iris[:, :3]
y = iris[:, 3]

ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(25).batch(8)

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d0 = Dense(16, activation='relu')
        self.d1 = Dense(32, activation='relu')
        self.d2 = Dense(1, activation='linear')

    def call(self, x):
        x = self.d0(x)
        x = self.d1(x)
        x = self.d2(x)
        return x

model = MyModel()

loss_object = tf.keras.losses.MeanSquaredError()

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)

loss = tf.keras.metrics.Mean(name='loss')
error = tf.keras.metrics.MeanSquaredError()

@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        run_loss = loss_object(targets, predictions)
    gradients = tape.gradient(run_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    loss(run_loss)
    error(predictions, targets)

for epoch in range(10):
    for data, labels in ds:
        train_step(data, labels)

    template = 'Epoch {:>2}, Loss: {:>7.4f}, MSE: {:>6.2f}'
    print(template.format(epoch+1,
                          loss.result(),
                          error.result()*100))
    # Reset the metrics for the next epoch
    loss.reset_states()
    error.reset_states()

Best answer

tl;dr: to avoid this, cast your input to float32

X = tf.cast(iris[:, :3], tf.float32) 
y = tf.cast(iris[:, 3], tf.float32)
Or with numpy:
X = np.array(iris[:, :3], dtype=np.float32)
y = np.array(iris[:, 3], dtype=np.float32)
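For context, here is a minimal sketch (not part of the original answer) of how the cast slots into the question's pipeline; once X and y are float32, the tf.data pipeline and the model see float32 everywhere and the warning disappears:

import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris

iris, target = load_iris(return_X_y=True)

# Cast once, right after loading, so everything downstream is float32
X = np.array(iris[:, :3], dtype=np.float32)
y = np.array(iris[:, 3], dtype=np.float32)

ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(25).batch(8)
print(ds.element_spec)  # both TensorSpecs report dtype=tf.float32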
Explanation
By default, TensorFlow uses floatx, which defaults to float32, the standard dtype for deep learning. You can verify this:
import tensorflow as tf
tf.keras.backend.floatx()
Out[3]: 'float32'
The input you are feeding it (the Iris dataset) is of dtype float64, so there is a mismatch between TensorFlow's default dtype for the weights and the input. TensorFlow does not like that, because casting (changing the dtype) is costly, and it will generally raise an error when you operate on tensors of different dtypes (e.g., comparing float32 logits against float64 labels).
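As a small illustration of that last point (my own sketch, not from the original answer), TensorFlow's default dtype rules do not silently promote float32 to float64, so an op that mixes the two fails:

import tensorflow as tf

a = tf.constant([1.0, 2.0], dtype=tf.float32)
b = tf.constant([1.0, 2.0], dtype=tf.float64)

try:
    tf.add(a, b)  # no implicit promotion between float32 and float64
except tf.errors.InvalidArgumentError as err:
    print(err)    # "... was expected to be a float tensor but is a double tensor"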
The "new behavior" it is talking about:

Layer my_model_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2


is that it automatically casts the input dtype to float32. In this situation, TensorFlow 1.X would probably have thrown an exception instead, although I can't say I ever used it.
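If you want the opposite fix, i.e. keep the data in float64 and set the dtypes manually as the question asks, the warning itself lists the options; a sketch of what they look like (assuming the Dense layers from the question's model):

import tensorflow as tf
from tensorflow.keras.layers import Dense

# Option 1: change the global default so every new layer is built in float64
tf.keras.backend.set_floatx('float64')

# Option 2: set the dtype only on specific layers
d0 = Dense(16, activation='relu', dtype='float64')

# Option 3 (aimed at authors of custom layers): disable autocasting, so the
# layer receives the input in its original dtype instead of casting it to float32
d2 = Dense(1, activation='linear', autocast=False)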

Regarding "python - WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59400128/
