
python - Keras LSTM - validation loss increasing from epoch #1


I'm currently working on my first "real" deep-learning project, which is (surprise) predicting stock movements. I know the odds are 1000:1 against building anything useful, but I'm enjoying it and want to see it through; in the few weeks I've been attempting this I've learned more than in the previous six months of completing MOOCs.

I'm building an LSTM with Keras to predict the next step ahead, and I've tried framing the task both as classification (up/down/stable) and, currently, as regression. Both hit the same wall: my validation loss never improves from epoch #1.
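For concreteness, here is a minimal sketch of how both framings can be derived from the same differenced target (the 'close+1' column name comes from the code below; the 0.01 dead-band for "stable" is purely illustrative):

import numpy as np

diff = df['close+1'].values              # regression target: next-step change

threshold = 0.01                         # illustrative dead-band for "stable"
labels = np.select([diff > threshold, diff < -threshold],
                   [2, 0],               # 2 = up, 0 = down
                   default=1)            # 1 = stable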

I can make the model overfit, driving the training loss (MSE) close to zero (or training accuracy to 100% in the classification case), but the validation loss never decreases at any stage. To my untrained eye this looks like overfitting, so I added varying amounts of dropout; all that did was stifle the model's learning and training accuracy, with no improvement at all in validation accuracy.
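For reference, this is roughly how the dropout was wired in (a minimal sketch using the Keras Sequential API from the code below; the rates and layer sizes here are illustrative, not the tuned values):

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

model = Sequential()
model.add(LSTM(128, return_sequences=True,
               dropout=0.2,              # drops inputs to the layer
               recurrent_dropout=0.2,    # drops recurrent connections
               input_shape=(15, 200)))   # (timesteps, features), illustrative
model.add(Dropout(0.2))                  # drops activations between layers
model.add(LSTM(64))
model.add(Dense(1, activation='linear'))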

I've tried changing a large number of hyperparameters: learning rate, optimizer, batch size, lookback window, #layers, #units, dropout, #samples, and so on. I've also tried subsets of the data and subsets of the features, but I just can't get it to work, so I would greatly appreciate any help.
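The sweeps were manual loops along these lines (a hedged sketch; build_model, x_tr, y_tr, x_va and y_va are hypothetical stand-ins, not defined in the code below):

from itertools import product

# Hypothetical sweep; build_model would assemble the LSTM for one config
for lr, units, window in product([1e-3, 1e-4], [64, 256], [15, 60]):
    model = build_model(lr=lr, units=units, lookback=window)
    hist = model.fit(x_tr, y_tr, validation_data=(x_va, y_va),
                     epochs=20, batch_size=256, verbose=0)
    print(lr, units, window, min(hist.history['val_loss']))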

Example graph with no dropout

The code is below (I know it's not pretty):

# Import saved full dataframe, ~200 features
import numpy as np
import feather
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ReduceLROnPlateau

df = feather.read_dataframe('df_feathered')
df.set_index('time', inplace=True)

# Difference the dataset to make stationary
df = df.diff(periods=1, axis=0)

# MAKE LARGE SAMPLE FOR TESTING
df_train = df.loc['2017-3-1':'2017-6-30']
df_val = df.loc['2017-7-1':'2017-8-31']
df_test = df.loc['2017-9-1':'2017-9-30']

# Make x_train, x_val sets by dropping target variable
x_train = df_train.drop('close+1', axis=1)
x_val = df_val.drop('close+1', axis=1)

# Fit the scaler on the training data, then apply the same transform
# to the validation set (named x_test below to match later usage)
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_val)

# scaler = MinMaxScaler(feature_range=(0,1))
# x_train = scaler.fit_transform(df_train1)
# x_test = scaler.transform(df_val1)

# Create y_train, y_test: simply the target variable for regression
y_train = df_train['close+1']
y_test = df_val['close+1']

# Define Lookback window for LSTM input
sliding_window = 15

# Convert x_train, x_test, y_train, y_test into 3d arrays
# (samples, timesteps, features) for LSTM input
dataXtrain = []
for i in range(len(x_train) - sliding_window - 1):
    a = x_train[i:(i + sliding_window), 0:(x_train.shape[1])]
    dataXtrain.append(a)

dataXtest = []
for i in range(len(x_test) - sliding_window - 1):
    a = x_test[i:(i + sliding_window), 0:(x_test.shape[1])]
    dataXtest.append(a)

# Use positional indexing (.iloc) on the datetime-indexed Series
dataYtrain = []
for i in range(len(y_train) - sliding_window - 1):
    dataYtrain.append(y_train.iloc[i + sliding_window])

dataYtest = []
for i in range(len(y_test) - sliding_window - 1):
    dataYtest.append(y_test.iloc[i + sliding_window])

# Trim the arrays so their lengths are divisible by a variety of batch
# sizes for training; starting at 1000 also skips replaced NaN values
dataXtrain = np.array(dataXtrain[1000:172008])
dataYtrain = np.array(dataYtrain[1000:172008])
dataXtest = np.array(dataXtest[1000:83944])
dataYtest = np.array(dataYtest[1000:83944])

# Checking input shapes
print('dataXtrain size is: {}'.format((dataXtrain).shape))
print('dataXtest size is: {}'.format((dataXtest).shape))
print('dataYtrain size is: {}'.format((dataYtrain).shape))
print('dataYtest size is: {}'.format((dataYtest).shape))

### ACTUAL LSTM MODEL

batch_size = 256
timesteps = dataXtrain.shape[1]
features = dataXtrain.shape[2]

# Model set-up, stacked 4 layer stateful LSTM
model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True,
               batch_input_shape=(batch_size, timesteps, features)))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(256, stateful=True, return_sequences=True))
model.add(LSTM(128, stateful=True))
model.add(Dense(1, activation='linear'))

model.summary()

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=5, min_lr=0.000001, verbose=1)

def coeff_determination(y_true, y_pred):
    from keras import backend as K
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())

model.compile(loss='mse',
              optimizer='nadam',
              metrics=[coeff_determination, 'mse', 'mae', 'mape'])

history = model.fit(dataXtrain, dataYtrain,
                    validation_data=(dataXtest, dataYtest),
                    epochs=100, batch_size=batch_size,
                    shuffle=False, verbose=1, callbacks=[reduce_lr])

score = model.evaluate(dataXtest, dataYtest,batch_size=batch_size, verbose=1)
print(score)

predictions = model.predict(dataXtest, batch_size=batch_size)
print(predictions)

import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(history.history['mean_squared_error'])
#plt.plot(history.history['val_mean_squared_error'])
plt.plot(history.history['coeff_determination'])
plt.plot(history.history['val_coeff_determination'])
#plt.plot(history.history['mean_absolute_error'])
#plt.plot(history.history['mean_absolute_percentage_error'])
#plt.plot(history.history['val_mean_absolute_percentage_error'])
#plt.title("MSE")
plt.ylabel("R2")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.plot(history.history["loss"][5:])
plt.plot(history.history["val_loss"][5:])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="best")
plt.show()

plt.figure(figsize=(20,8))
plt.plot(dataYtest)
plt.plot(predictions)
plt.title("Prediction")
plt.ylabel("Price")
plt.xlabel("Time")
plt.legend(["Truth", "Prediction"], loc="best")
plt.show()

Best Answer

Perhaps you should keep in mind that you are predicting stock returns, and those may well be impossible to predict at all. So the increasing val_loss is not overfitting in the first place. Instead of adding more dropout, maybe you should consider adding more layers to increase the model's capacity.
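One quick way to test that claim is to compare the model against a trivial baseline (a minimal sketch reusing the variable names from the question; since the target is a differenced price, always predicting zero, i.e. "no change", is the natural baseline):

import numpy as np

baseline_mse = np.mean(dataYtest ** 2)   # MSE of always predicting "no change"
model_mse = np.mean((dataYtest - predictions.ravel()) ** 2)
print('baseline MSE:', baseline_mse, ' model MSE:', model_mse)

# If the model cannot beat the zero-change baseline, the rising val_loss
# reflects an unpredictable target rather than fixable overfitting.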

Regarding "python - Keras LSTM - validation loss increasing from epoch #1", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/48542473/
