python - Adding multiple layers in TensorFlow causes the loss function to become NaN

I'm writing a neural network classifier in TensorFlow/Python for the notMNIST dataset. I've implemented L2 regularization and dropout on the hidden layers. It works fine as long as there is only one hidden layer, but when I add more layers (to improve accuracy), the loss increases rapidly at every step and becomes NaN by step 5. I tried temporarily disabling dropout and L2 regularization, but I get the same behavior as soon as there are 2+ layers. I even rewrote the code from scratch (with some refactoring to make it more flexible), with the same result. The number and size of the layers are controlled by hidden_layer_spec. What am I missing?

import math
import numpy as np
import tensorflow as tf

# train_dataset/train_labels, valid_dataset/valid_labels, test_dataset/test_labels,
# image_size, num_labels and the accuracy() helper are assumed to be defined
# earlier by the notMNIST data-loading code.

#works for np.array([1024]) with about 96.1% accuracy
hidden_layer_spec = np.array([1024, 300])
num_hidden_layers = hidden_layer_spec.shape[0]
batch_size = 256
beta = 0.0005

epochs = 100
stepsPerEpoch = float(train_dataset.shape[0]) / batch_size
num_steps = int(math.ceil(float(epochs) * stepsPerEpoch))

l2Graph = tf.Graph()
with l2Graph.as_default():
  #with tf.device('/cpu:0'):
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  weights = []
  biases = []
  for hi in range(0, num_hidden_layers + 1):
    width = image_size * image_size if hi == 0 else hidden_layer_spec[hi - 1]
    height = num_labels if hi == num_hidden_layers else hidden_layer_spec[hi]
    weights.append(tf.Variable(tf.truncated_normal([width, height]), name="w" + str(hi + 1)))
    biases.append(tf.Variable(tf.zeros([height]), name="b" + str(hi + 1)))
    print(str(width) + 'x' + str(height))

  def logits(input, addDropoutLayer=False):
    previous_layer = input
    for hi in range(0, hidden_layer_spec.shape[0]):
      previous_layer = tf.nn.relu(tf.matmul(previous_layer, weights[hi]) + biases[hi])
      if addDropoutLayer:
        previous_layer = tf.nn.dropout(previous_layer, 0.5)
    return tf.matmul(previous_layer, weights[num_hidden_layers]) + biases[num_hidden_layers]

  # Training computation.
  train_logits = logits(tf_train_dataset, True)

  # Sum the L2 penalty over every weight matrix.
  l2 = tf.nn.l2_loss(weights[0])
  for hi in range(1, len(weights)):
    l2 = l2 + tf.nn.l2_loss(weights[hi])
  loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(train_logits, tf_train_labels)) + beta * l2

  # Optimizer.
  global_step = tf.Variable(0)  # count the number of steps taken.
  learning_rate = tf.train.exponential_decay(0.5, global_step, int(stepsPerEpoch) * 2, 0.96, staircase=True)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(train_logits)
  valid_prediction = tf.nn.softmax(logits(tf_valid_dataset))
  test_prediction = tf.nn.softmax(logits(tf_test_dataset))
  saver = tf.train.Saver()

with tf.Session(graph=l2Graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Learning rate: %f" % learning_rate.eval())
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
  save_path = saver.save(session, "l2_degrade.ckpt")
  print("Model saved to " + save_path)

Best Answer

It turned out this was less a coding problem than a deep learning one. The extra layers made the gradients too unstable, and that caused the loss function to quickly degenerate to NaN. The best fix is to use Xavier initialization; otherwise the variance of the initial values tends to be too high, which leads to instability. Lowering the learning rate can also help.
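As a rough sketch (not part of the original answer), the weight-creation loop inside the `with l2Graph.as_default():` block above could be switched to Xavier-style scaling while keeping the same old TF API. The stddev = sqrt(1/fan_in) choice and the 0.1 starting learning rate are illustrative values, not taken from the question:

  # Sketch: Xavier-style weight initialization, as a drop-in replacement for the
  # weight-creation loop in the question's graph-construction code.
  weights = []
  biases = []
  for hi in range(0, num_hidden_layers + 1):
    width = image_size * image_size if hi == 0 else hidden_layer_spec[hi - 1]
    height = num_labels if hi == num_hidden_layers else hidden_layer_spec[hi]
    # Scale the initial weights by the fan-in so the activation variance stays
    # roughly constant across layers (the default stddev=1.0 is far too large).
    stddev = math.sqrt(1.0 / width)
    weights.append(tf.Variable(
        tf.truncated_normal([width, height], stddev=stddev), name="w" + str(hi + 1)))
    biases.append(tf.Variable(tf.zeros([height]), name="b" + str(hi + 1)))

  # Optionally also start the exponential-decay schedule from a smaller rate,
  # e.g. 0.1 instead of 0.5, to keep the early gradient steps stable.
  learning_rate = tf.train.exponential_decay(0.1, global_step, int(stepsPerEpoch) * 2,
                                             0.96, staircase=True)

TensorFlow 1.x also shipped a ready-made initializer, tf.contrib.layers.xavier_initializer(), which can be passed to tf.get_variable instead of hand-scaling the stddev.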

For python - Adding multiple layers in TensorFlow causes the loss function to become NaN, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36565430/
