python - TensorFlow 'nan' loss and '-inf' weights, even with a learning rate of 0


I am training a deep convolutional neural network on an AWS GPU machine. Dataset: Google SVHN. Training set size: 200,000+.

I get Loss = 'nan' and W = '-inf', even with a learning rate of 0:

Loss at step 0: 14.024256
Minibatch accuracy: 5.8%
Learning rate : 0.0
W : [ 0.1968164 0.19992708 0.19999388 0.19999997]
b : [ 0.1 0.1 0.1 0.1]

Loss at step 52: 14.553226
Minibatch accuracy: 5.9%
Learning rate : 0.0
W : [ 0.19496706 0.19928116 0.19977403 0.1999999 ]
b : [ 0.1 0.1 0.1 0.1]

# STEP 53 ---> LOSS : NAN, ALL WEIGHTS STILL OKAY
Loss at step 53: nan
Minibatch accuracy: 6.4%
Learning rate : 0.0
W : [ 0.19496706 0.19928116 0.19977403 0.1999999 ]
b : [ 0.1 0.1 0.1 0.1]

# STEP 54 ---> LOSS : NAN, WEIGHTS START GOING TO -INF
Loss at step 54: nan
Minibatch accuracy: 49.2%
Learning rate : 0.0
W : [ -inf -inf 0.19694112 -inf]
b : [-inf -inf 0.1 -inf]

# STEP 55 ---> LOSS : NAN, W & B -INF
Loss at step 55: nan
Minibatch accuracy: 46.9%
Learning rate : 0.0
W : [-inf -inf -inf -inf]
b : [-inf -inf -inf -inf]

I have tried the following techniques:

  1. Used several different optimizers (Adam, SGD, etc.)
  2. Tried different activation functions in the last layer (ReLU, sigmoid, tanh)
  3. Initialized the weights and biases in different ways
  4. Tried different learning rates and rate decay (from 0.001 to 0.0001)
  5. Suspected my dataset might contain errors, so I removed the first 10,000 entries. No luck.

None of these worked for me; I still get a 'nan' loss after 1,500 steps.
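A graph-wide numeric check can localize the first op that produces a bad value. This is only a sketch, assuming the TF 0.x/1.x graph-mode API used in the code below; tf.add_check_numerics_ops() is not part of the original code:

# Sketch: attach a numeric check to every floating-point tensor.
# tf.add_check_numerics_ops() must be called after the whole graph is
# built; running the returned op raises InvalidArgumentError naming
# the first op whose output contains NaN or Inf.
check_numerics_op = tf.add_check_numerics_ops()

# Inside the training loop, run it alongside the training step:
_, loss_value, _ = session.run([train_step, cross_entropy, check_numerics_op],
                               feed_dict=feed_dict)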

My code:

Weight initialization

# 6x6 conv kernel, 1 input channel, K output channels
W1 = tf.Variable(tf.truncated_normal([6, 6, 1, K], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, tf.float32, [K]))
# Similarly W2, B2, W3, B3, W4 and B4

# One 11-class output head per digit position
W5_1 = tf.Variable(tf.truncated_normal([N, 11], stddev=0.1))
B5_1 = tf.Variable(tf.constant(0.1, tf.float32, [11]))
# Similarly W5_2, B5_2, W5_3, B5_3, W5_4, B5_4, W5_5, B5_5

# Model
Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME') + B1)
# Similarly Y2 and Y3 with stride 2

shape = Y3.get_shape().as_list()
YY = tf.reshape(Y3, shape=[-1, shape[1] * shape[2] * shape[3]])
Y4 = tf.sigmoid(tf.matmul(YY, W4) + B4)
YY4 = tf.nn.dropout(Y4, pkeep)

Ylogits_1 = tf.matmul(YY4, W5_1) + B5_1
# Ylogits_2,3,4,5

Y_1 = tf.nn.softmax(Ylogits_1)
# Y_2,3,4,5

Loss

cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(Ylogits_1, Y_[:,1])) +\
# ....... (Ylogits_5, Y_[:,5]))

train_prediction = tf.pack([Y_1, Y_2, Y_3, Y_4, Y_5])
train_step = tf.train.AdamOptimizer(alpha).minimize(cross_entropy)

W_s = tf.pack([tf.reduce_max(tf.abs(W1)),tf.reduce_max(tf.abs(W2)),tf.reduce_max(tf.abs(W3)),tf.reduce_max(tf.abs(W4))])
b_s = tf.pack([tf.reduce_max(tf.abs(B1)),tf.reduce_max(tf.abs(B2)),tf.reduce_max(tf.abs(B3)),tf.reduce_max(tf.abs(B4))])

model_saver = tf.train.Saver()

TensorFlow session

for step in range(num_steps):
    # I have set the learning rate to 0
    learning_rate = 0
    batch_data = train_data[step*batch_size:(step + 1)*batch_size, :, :, :]
    batch_labels = label_data[step*batch_size:(step + 1)*batch_size, :]

    feed_dict = {X : batch_data, Y_ : batch_labels, pkeep : 0.80, alpha : learning_rate}
    _, l, train_pred, W, b = session.run([train_step, cross_entropy, train_prediction, W_s, b_s], feed_dict=feed_dict)

    if (step % 20 == 0):
        print('Loss at step %d: %f' % (step, l))
        print('Minibatch accuracy: %.1f%%' % acc(train_pred, batch_labels[:,1:6]))
        print('Learning rate : ', learning_rate)
        print('W : ', W)
        print('b : ', b)
        print(' ')

Since no learning should take place when the learning rate is 0, how can the loss and the weights still change, and end up at nan and -inf?

Any help is appreciated.

Best Answer

I have seen this happen when one of the labels is out of range. Can you check that your labels are all within the range (0 to num_labels - 1)?
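An out-of-range label can make sparse_softmax_cross_entropy_with_logits return NaN rather than raise an error (GPU kernels have historically skipped label validation), and once the loss is NaN the gradients are too. Since IEEE 754 defines 0 × NaN = NaN, even a zero learning rate cannot keep the damage out of the weight update, which is why W and b still change. A quick scan along these lines should catch bad labels (a minimal sketch; label_data and its column layout are taken from the question, and the 11-class range matches the logits above):

import numpy as np

NUM_CLASSES = 11  # each softmax head in the question has 11 outputs

# Columns 1..5 of label_data feed the five heads
# (per the question's batch_labels[:, 1:6] slicing).
digit_labels = label_data[:, 1:6].astype(np.int64)

# Flag any label outside the valid class-id range [0, NUM_CLASSES - 1].
bad = (digit_labels < 0) | (digit_labels >= NUM_CLASSES)
if bad.any():
    rows, cols = np.where(bad)
    print('out-of-range labels at rows', rows[:10], 'columns', cols[:10])
else:
    print('all labels are within [0, %d]' % (NUM_CLASSES - 1))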

Regarding python - TensorFlow 'nan' loss and '-inf' weights even with a learning rate of 0, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42059230/
