
python - Shaping data for an LSTM, and feeding dense-layer output into the LSTM


I'm trying to figure out the proper syntax for the model I want to fit. It's a time-series prediction problem, and I'd like to use a few dense layers to improve the representation of the time series before it gets fed to the LSTM.

Here's the dummy series I'm working with:

import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
import keras as K
import tensorflow as tf

# Dummy series: x plus its first two lags, and a nonlinear function of them as the target.
d = pd.DataFrame(data={"x": np.linspace(0, 100, 1000)})
d['l1_x'] = d.x.shift(1)
d['l2_x'] = d.x.shift(2)
d.fillna(0, inplace=True)
d["y"] = np.sin(.1*d.x*np.sin(d.l1_x))*np.sin(d.l2_x)
plt.plot(d.x, d.y)

[Plot of the dummy series y against x]

First, I'll fit an LSTM with no dense layers in front of it. This requires me to reshape the data:

X = d[["x", "l1_x", "l2_x"]].values.reshape(len(d), 3, 1)
y = d.y.values

Is this correct?

From the tutorials it looks like, for a single time series, the first dimension should be 1, followed by the number of timesteps (1000), and finally the number of covariates (3). But when I do that, the model won't compile.
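For reference, Keras's recurrent layers expect input shaped (samples, timesteps, features). A minimal sketch (mine, not part of the original question) contrasting the two layouts:

import numpy as np

X_a = np.zeros((1000, 3, 1))  # 1000 samples, 3 timesteps, 1 feature (the reshape above)
X_b = np.zeros((1, 1000, 3))  # 1 sample, 1000 timesteps, 3 features (tutorial style)

# X_b only fits if y is also given per sample, e.g. shape (1, 1000, 1) with
# return_sequences=True; paired with y of shape (1000,), the sample counts
# disagree (1 vs. 1000) and Keras raises an error.
print(X_a.shape, X_b.shape)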

Here I compile and train the model:

model = K.Sequential()
model.add(K.layers.LSTM(10, input_shape=(X.shape[1], X.shape[2]), batch_size=1, stateful=True))
model.add(K.layers.Dense(1))
callbacks = [K.callbacks.EarlyStopping(monitor='loss', min_delta=0, patience=5, verbose=1, mode='auto', baseline=None, restore_best_weights=True)]
model.compile(loss='mean_squared_error', optimizer='rmsprop')

# shuffle=False preserves the temporal order, which stateful training requires.
model.fit(X, y, epochs=50, batch_size=1, verbose=1, shuffle=False, callbacks=callbacks)
model.reset_states()

yhat = model.predict(X, batch_size=1)
plt.clf()
plt.plot(d.x, d.y)
plt.plot(d.x, yhat)

Why can't I get the model to overfit? Is it because I've reshaped my data incorrectly? It doesn't really get any more over-fit when I use more nodes in the LSTM.

[Plot of the model's fitted values yhat against the actual series]

(I also don't understand what it means to be "stateful". Neural networks are just nonlinear models. Which parameters are the "states" referring to, and why would you reset them?)
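For context: the "state" here is not a trainable parameter. It is the LSTM's hidden and cell activations (h, c); with stateful=True, Keras carries them over from one batch to the next instead of zeroing them, so consecutive batches are treated as a continuation of the same series. A minimal sketch of the mechanics (assuming the same Keras API used above):

import keras as K

model = K.Sequential()
model.add(K.layers.LSTM(10, batch_input_shape=(1, 3, 1), stateful=True))  # stateful needs a fixed batch size
model.add(K.layers.Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')

# Zero (h, c) between epochs, or before predicting on a fresh series;
# the trainable weights are untouched.
model.reset_states()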

How do I insert dense layers between the input and the LSTM? Ultimately I'd like to add a bunch of dense layers, to basically do a basis expansion on x before it gets to the LSTM. But the LSTM wants a 3D array while a dense layer outputs a matrix. What do I do here? This doesn't work:

model = K.Sequential()
model.add(K.layers.Dense(10, activation="relu", input_dim=3))
model.add(K.layers.LSTM(3, input_shape=(10, X.shape[2]), batch_size=1, stateful=True))
model.add(K.layers.Dense(1))

ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2

Best Answer

For your first question, I'm doing the same thing and I don't get any errors; please share your error.

Note: I'll give you examples using the functional API, which gives you a little more freedom (personal opinion).

from keras.layers import Dense, Flatten, LSTM, Activation
from keras.layers import Dropout, RepeatVector, TimeDistributed
from keras import Input, Model

seq_length = 15
input_dims = 10
output_dims = 8
n_hidden = 10
model1_inputs = Input(shape=(seq_length, input_dims))

net1 = LSTM(n_hidden, return_sequences=True)(model1_inputs)
net1 = LSTM(n_hidden, return_sequences=False)(net1)
net1 = Dense(output_dims, activation='relu')(net1)
model1_outputs = net1

model1 = Model(inputs=model1_inputs, outputs=model1_outputs, name='model1')

## Fit the model
model1.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_11 (InputLayer)        (None, 15, 10)            0
_________________________________________________________________
lstm_8 (LSTM)                (None, 15, 10)            840
_________________________________________________________________
lstm_9 (LSTM)                (None, 10)                840
_________________________________________________________________
dense_9 (Dense)              (None, 8)                 88
_________________________________________________________________
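The fit itself is left as a comment in the snippet; a minimal sketch with random placeholder data (the shapes match the summary above, but the data and hyperparameters are illustrative only):

import numpy as np

X_toy = np.random.rand(32, seq_length, input_dims)  # (32, 15, 10)
y_toy = np.random.rand(32, output_dims)             # (32, 8)

model1.compile(loss='mean_squared_error', optimizer='rmsprop')
model1.fit(X_toy, y_toy, epochs=2, batch_size=8, verbose=1)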

For your second question, there are two approaches:

  1. If you are sending your data without a time dimension, i.e. with shape (batch, input_dims), then you can use RepeatVector, which simply repeats the same vector for n_steps timesteps; these are nothing but the unrolled steps of the LSTM.


seq_length = 15
input_dims = 16
output_dims = 8
n_hidden = 20
lstm_dims = 10
model1_inputs = Input(shape=(input_dims,))

net1 = Dense(n_hidden)(model1_inputs)
net1 = Dense(n_hidden)(net1)

# RepeatVector turns the (batch, 20) dense output into (batch, 3, 20).
net1 = RepeatVector(3)(net1)
net1 = LSTM(lstm_dims, return_sequences=True)(net1)
net1 = LSTM(lstm_dims, return_sequences=False)(net1)
net1 = Dense(output_dims, activation='relu')(net1)
model1_outputs = net1

model1 = Model(inputs=model1_inputs, outputs=model1_outputs, name='model1')

## Fit the model
model1.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_13 (InputLayer)        (None, 16)                0
_________________________________________________________________
dense_13 (Dense)             (None, 20)                340
_________________________________________________________________
dense_14 (Dense)             (None, 20)                420
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 3, 20)             0
_________________________________________________________________
lstm_14 (LSTM)               (None, 3, 10)             1240
_________________________________________________________________
lstm_15 (LSTM)               (None, 10)                840
_________________________________________________________________
dense_15 (Dense)             (None, 8)                 88
=================================================================
  2. If you are sending a sequence with shape (seq_len, input_dims), then you can use TimeDistributed, which applies the same dense-layer weights across the whole sequence:

seq_length = 15
input_dims = 10
output_dims = 8
n_hidden = 10
model1_inputs = Input(shape=(seq_length, input_dims))

# TimeDistributed applies the same Dense(10) at each of the 15 timesteps.
net1 = TimeDistributed(Dense(n_hidden))(model1_inputs)
net1 = LSTM(output_dims, return_sequences=True)(net1)
net1 = LSTM(output_dims, return_sequences=False)(net1)
net1 = Dense(output_dims, activation='relu')(net1)
model1_outputs = net1

model1 = Model(inputs=model1_inputs, outputs=model1_outputs, name='model1')

## Fit the model
model1.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_17 (InputLayer)        (None, 15, 10)            0
_________________________________________________________________
time_distributed_3 (TimeDist (None, 15, 10)            110
_________________________________________________________________
lstm_18 (LSTM)               (None, 15, 8)             608
_________________________________________________________________
lstm_19 (LSTM)               (None, 8)                 544
_________________________________________________________________
dense_19 (Dense)             (None, 8)                 72
=================================================================

Note: I stacked two LSTM layers. In the first I used return_sequences=True, which returns the output at every timestep; those outputs are consumed by the second layer, which returns an output only at the last timestep.
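Applied back to the question's data, where X has shape (1000, 3, 1) and y has length 1000, the TimeDistributed approach might look like the sketch below; the layer sizes are illustrative choices, not taken from the answer:

from keras import Input, Model
from keras.layers import Dense, LSTM, TimeDistributed

inputs = Input(shape=(3, 1))                                 # 3 timesteps, 1 feature
net = TimeDistributed(Dense(10, activation='relu'))(inputs)  # per-timestep basis expansion
net = LSTM(10, return_sequences=False)(net)                  # -> (None, 10)
outputs = Dense(1)(net)                                      # -> (None, 1)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='mean_squared_error', optimizer='rmsprop')
# model.fit(X, y, epochs=50, batch_size=32)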

Source question on Stack Overflow: https://stackoverflow.com/questions/54138205/
