
python - keep_prob value in dropout and getting worse results with dropout


According to this link, the value of keep_prob has to be in (0, 1]: Tensorflow manual

Otherwise I get a value error:

ValueError: If keep_prob is not in (0, 1] or if x is not a floating point tensor.

I use the following code for a simple neural network with one hidden layer:

n_nodes_input = len(train_x.columns) # number of input features
n_nodes_hl = 30 # number of units in hidden layer
n_classes = len(np.unique(Y_train_numeric))
lr = 0.25
x = tf.placeholder('float', [None, len(train_x.columns)])
y = tf.placeholder('float')
dropout_keep_prob = tf.placeholder(tf.float32)

def neural_network_model(data, dropout_keep_prob):
    # define weights and biases for all each layer
    hidden_layer = {'weights': tf.Variable(tf.truncated_normal([n_nodes_input, n_nodes_hl], stddev=0.3)),
                    'biases': tf.Variable(tf.constant(0.1, shape=[n_nodes_hl]))}
    output_layer = {'weights': tf.Variable(tf.truncated_normal([n_nodes_hl, n_classes], stddev=0.3)),
                    'biases': tf.Variable(tf.constant(0.1, shape=[n_classes]))}
    # feed forward and activations
    l1 = tf.add(tf.matmul(data, hidden_layer['weights']), hidden_layer['biases'])
    l1 = tf.nn.sigmoid(l1)
    l1 = tf.nn.dropout(l1, dropout_keep_prob)
    output = tf.matmul(l1, output_layer['weights']) + output_layer['biases']

    return output

def main():
    prediction = neural_network_model(x, dropout_keep_prob)
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=prediction))
    optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

    sess = tf.InteractiveSession()

    tf.global_variables_initializer().run()
    for epoch in range(1000):
        loss = 0
        _, c = sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y, dropout_keep_prob: 4.})
        loss += c

        if (epoch % 100 == 0 and epoch != 0):
            print('Epoch', epoch, 'completed out of', 1000, 'Training loss:', loss)
    correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name='op_accuracy')

    writer = tf.summary.FileWriter('graph', sess.graph)
    writer.close()

    print('Train set Accuracy:', sess.run(accuracy, feed_dict={x: train_x, y: train_y, dropout_keep_prob: 1.}))
    print('Test set Accuracy:', sess.run(accuracy, feed_dict={x: test_x, y: test_y, dropout_keep_prob: 1.}))
    sess.close()


if __name__ == '__main__':
    main()

If I use a number in the range (0, 1] for dropout_keep_prob in sess.run, the accuracy drops drastically. If I use a number bigger than 1, like 4, the accuracy goes beyond 0.9. When I press shift+tab in front of tf.nn.dropout(), this appears as part of the description:

With probability `keep_prob`, outputs the input element scaled up by
`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected
sum is unchanged.

Which seems to me that keep_prob has to be greater than 1, otherwise nothing would be dropped!

Bottom line, I am confused. Which part of dropout am I implementing wrong that my results get worse, and what is a good number for keep_prob?

Thanks

Best Answer

which seems to me that keep_prob has to be greater than 1 otherwise nothing would be dropped!

The description says:

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

This means:

  • keep_prob is used as a probability, so by definition it should always lie in [0, 1] (a number outside that range can never be a probability).
  • With probability keep_prob, an input element is multiplied by 1 / keep_prob. Because we just wrote 0 <= keep_prob <= 1, the division 1 / keep_prob is always greater than 1.0 (or exactly 1.0 if keep_prob == 1). So, with probability keep_prob, some elements become larger than they would be without dropout.
  • With probability 1 - keep_prob (the "otherwise" in the description), an element is set to 0. This is the dropout: an element is dropped when it is set to 0. If you set keep_prob to exactly 1.0, the probability of dropping any node becomes 0. So, if you want to drop some nodes, set keep_prob < 1; if you do not want to drop anything, set keep_prob = 1. (See the sketch after this list.)
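A minimal sketch of what this means in practice, using the same TF 1.x style API as the question (the shape and the keep_prob value of 0.5 are just illustrative assumptions): surviving elements are scaled up by 1 / keep_prob and the rest are zeroed.

import tensorflow as tf

v = tf.ones([1, 10])                       # ten elements, all equal to 1.0
dropped = tf.nn.dropout(v, keep_prob=0.5)  # each element kept with probability 0.5

with tf.Session() as sess:
    print(sess.run(dropped))
    # e.g. [[2. 0. 2. 2. 0. 0. 2. 2. 0. 2.]] (which elements survive is random)
    # kept elements become 1 / 0.5 = 2.0, dropped elements become 0,
    # so the expected sum stays roughly unchanged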

Important note: you only want to use dropout during training, not during testing.

If I use a number in range (0,1] for dropout_keep_prob in the sess.run, the accuracy drops drastically.

If you do that for the test set, or if you mean you are reporting accuracy on the training set, that does not surprise me. Dropout means losing information, so it will indeed lose accuracy. It is supposed to be a way of regularizing, though: you deliberately lose accuracy during the training phase, in the hope that this improves generalization and therefore accuracy during the testing phase (when you should no longer be using dropout).
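Concretely, since the question already feeds dropout_keep_prob through a placeholder, one possible adjustment (a sketch only; the 0.5 is an assumed example value, not a recommendation from this answer) is to feed a value in (0, 1) during training and exactly 1.0 when measuring accuracy:

# during training: keep_prob in (0, 1), e.g. 0.5, so some units are dropped
_, c = sess.run([optimizer, cost],
                feed_dict={x: train_x, y: train_y, dropout_keep_prob: 0.5})

# during evaluation: keep_prob = 1.0, so nothing is dropped
print('Test set Accuracy:',
      sess.run(accuracy, feed_dict={x: test_x, y: test_y, dropout_keep_prob: 1.0}))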

If I use a number bigger than 1, like 4, the accuracy goes beyond 0.9.

I'm surprised you were able to run this code at all. Based on the source code, I would not expect it to be able to run?
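One possible explanation (an assumption based on reading the TF 1.x implementation of tf.nn.dropout, not something stated in this answer): the (0, 1] check only fires when keep_prob is a plain Python number at graph-construction time, so a placeholder that is later fed 4.0 slips past it.

import tensorflow as tf

v = tf.ones([1, 4])

try:
    tf.nn.dropout(v, keep_prob=4.0)    # a literal 4.0 fails the (0, 1] check here
except ValueError as err:
    print('constant keep_prob rejected:', err)

kp = tf.placeholder(tf.float32)
out = tf.nn.dropout(v, kp)             # builds fine: the value is unknown at this point

with tf.Session() as sess:
    print(sess.run(out, feed_dict={kp: 4.0}))   # runs without raising an error

Under that assumption, the 4.0 fed through feed_dict is never validated, which would explain why the script ran at all.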

Regarding python - keep_prob value in dropout and getting worse results with dropout, the original question can be found on Stack Overflow: https://stackoverflow.com/questions/48507778/
