
machine-learning - Where should I apply dropout to a convolutional layer?


Because the word "layer" often means different things when applied to a convolutional layer (some treat everything up through pooling as a single layer, others treat the convolution, nonlinearity, and pooling as separate "layers"; see fig 9.7), it is not clear to me where dropout should be applied in a convolutional layer.

Does dropout go between the nonlinearity and the pooling?


For example, in TensorFlow it would look something like:

kernel_logits = tf.nn.conv2d(input_tensor, ...) + biases
activations = tf.nn.relu(kernel_logits)
kept_activations = tf.nn.dropout(activations, keep_prob)
output = pool_fn(kept_activations, ...)

Best Answer

You could probably try applying dropout in different places, but in terms of preventing overfitting I'm not sure you'll see much of a problem with applying it before pooling. What I've seen for CNNs is that tensorflow.nn.dropout gets applied after both the nonlinearity and the pooling:

# Create a convolution + maxpool layer for each filter size
pooled_outputs = []
for i, filter_size in enumerate(filters):
    with tf.name_scope("conv-maxpool-%s" % filter_size):
        # Convolution Layer
        filter_shape = [filter_size, embedding_size, 1, num_filters]
        W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
        conv = tf.nn.conv2d(
            self.embedded_chars_expanded,
            W,
            strides=[1, 1, 1, 1],
            padding="VALID",
            name="conv")
        # Apply nonlinearity
        h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
        # Maxpooling over the outputs
        pooled = tf.nn.max_pool(
            h,
            ksize=[1, sequence_length - filter_size + 1, 1, 1],
            strides=[1, 1, 1, 1],
            padding='VALID',
            name="pool")
        pooled_outputs.append(pooled)

# Combine all the pooled features
num_filters_total = num_filters * len(filters)
self.h_pool = tf.concat(3, pooled_outputs)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

# Add dropout
with tf.name_scope("dropout"):
    self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)
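
For contrast with the snippet in the question, a minimal sketch of the ordering described here (conv → ReLU → pool → dropout) using the same TF 1.x-style API could look like the following; the shapes, filter sizes, and placeholder names are assumptions for illustration only, not from the original post:

import tensorflow as tf

# Hypothetical input: a batch of 28x28 single-channel images (assumed for illustration).
input_tensor = tf.placeholder(tf.float32, [None, 28, 28, 1])
keep_prob = tf.placeholder(tf.float32)

# Hypothetical 5x5 convolution with 32 filters.
W = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
biases = tf.Variable(tf.constant(0.1, shape=[32]))

kernel_logits = tf.nn.conv2d(input_tensor, W, strides=[1, 1, 1, 1], padding="SAME") + biases
activations = tf.nn.relu(kernel_logits)
# Pool first, then drop -- the placement shown in the answer's example above.
pooled = tf.nn.max_pool(activations, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
output = tf.nn.dropout(pooled, keep_prob)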

Regarding machine-learning - where should I apply dropout to a convolutional layer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/37573674/
