
Tensorflow: cannot overfit training data when batch size > 1


I wrote a small RNN network with Tensorflow to return the total energy consumption given some parameters. There seems to be a problem in my code: it cannot overfit the training data when I use a batch size > 1 (even with only 4 samples!). In the code below, the loss value reaches 0 when I set BatchSize to 1. However, with BatchSize set to 2, the network fails to overfit and the loss heads toward 12.500000 and stays stuck there forever.

I suspect this is related to the LSTM state. I get the same problem if I don't update the state at each iteration. Or maybe it's the cost function? Any help is appreciated. Thanks.

import tensorflow as tf
import numpy as np
import os

from utils import loadData

Epochs = 10000
LearningRate = 0.0001
MaxGradNorm = 5

SeqLen = 1
NChannels = 28
NClasses = 1

NLayers = 2
NUnits = 256

BatchSize = 1
NumSamples = 4
#################################################################

trainingFile = "./training.dat"

X_values, Y_values = loadData(trainingFile, SeqLen, NumSamples)

X = tf.placeholder(tf.float32, [BatchSize, SeqLen, NChannels], name='inputs')

Y = tf.placeholder(tf.float32, [BatchSize, SeqLen, NClasses], name='labels')

keep_prob = tf.placeholder(tf.float32, name='keep')

initializer = tf.contrib.layers.xavier_initializer()

Xin = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))

lstm_layers = []

for i in range(NLayers):
    lstm_layer = tf.nn.rnn_cell.LSTMCell(num_units=NUnits, initializer=initializer, use_peepholes=True, state_is_tuple=True)
    dropout_layer = tf.contrib.rnn.DropoutWrapper(lstm_layer, output_keep_prob=keep_prob)

    # [LSTM ---> DROPOUT] ---> [LSTM ---> DROPOUT] ---> etc...
    lstm_layers.append(dropout_layer)

rnn = tf.nn.rnn_cell.MultiRNNCell(lstm_layers, state_is_tuple=True)

initial_state = rnn.zero_state(BatchSize, tf.float32)

outputs, final_state = tf.nn.static_rnn(rnn, Xin, dtype=tf.float32, initial_state=initial_state)

outputs = tf.transpose(outputs, [1,0,2])
outputs = tf.reshape(outputs, [-1, NUnits])

weight = tf.Variable(tf.truncated_normal([NUnits, NClasses]))
bias = tf.Variable(tf.constant(0.1, shape=[NClasses]))
prediction = tf.matmul(outputs, weight) + bias
prediction = tf.reshape(prediction, [BatchSize, SeqLen, NClasses])

cost = tf.reduce_sum(tf.pow(tf.subtract(prediction, Y), 2)) / (2 * BatchSize)

tvars = tf.trainable_variables()

grad, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), MaxGradNorm)

optimizer = tf.train.AdamOptimizer(learning_rate = LearningRate)

train_step = optimizer.apply_gradients(zip(grad, tvars))

sess = tf.Session()

sess.run(tf.global_variables_initializer())

iteration = 1

for e in range(0, Epochs):

    train_loss = []

    state = sess.run(initial_state)

    for i in range(0, len(X_values), BatchSize):
        x = X_values[i:i + BatchSize]
        y = Y_values[i:i + BatchSize]

        y = np.expand_dims(y, 2)

        feed = {X: x, Y: y, keep_prob: 1.0, initial_state: state}

        _, loss, state, pred = sess.run([train_step, cost, final_state, prediction], feed_dict=feed)

        train_loss.append(loss)

        iteration += 1

    print("Epoch: {}/{}".format(e, Epochs), "Iteration: {:d}".format(iteration), "Train average rmse: {:6f}".format(np.mean(train_loss)))

[Screenshot: training loss log with batch size = 1]

[Screenshot: training loss log with batch size = 2]

Best answer

Normalizing the input data solved the problem.
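
For reference, a minimal sketch of what that preprocessing step could look like, assuming a per-channel z-score normalization of the inputs. The answer does not say how the data was scaled, so the helper normalize_features and its eps parameter below are purely illustrative; X_values is the array returned by loadData, with shape (NumSamples, SeqLen, NChannels):

import numpy as np

def normalize_features(X_values, eps=1e-8):
    # Hypothetical helper: z-score each input channel over all samples
    # and time steps, so every feature has roughly zero mean and unit variance.
    flat = X_values.reshape(-1, X_values.shape[-1])
    mean = flat.mean(axis=0)
    std = flat.std(axis=0)
    return (X_values - mean) / (std + eps)

# Applied once after loading the data, before the training loop:
# X_values = normalize_features(X_values)

Whatever the exact scaling used, the key point of the accepted answer is that the raw inputs were not normalized before being fed to the network.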

Regarding "Tensorflow: cannot overfit training data when batch size > 1", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48055846/
