
python - Keras LSTM input/output dimensions


I am building an LSTM predictor with Keras. My input array is historical price data. I slice the data into window_size blocks in order to predict a prediction_length block ahead. My data is a list of 4246 floats. I split it into 4055 arrays, each of length 168, in order to predict 24 units ahead.

This gives me an x_train set with dimensions (4055, 168). I then scale my data and try to fit it, but I run into a dimension error.
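As a quick sanity check of that split, here is a small sketch using the numbers quoted above (4246 prices, a 168-step window, a 24-step prediction horizon); the variable names are just for illustration:

# One window per starting index that still leaves room for a full
# input window plus a full prediction horizon.
series_len = 4246      # length of the price list (from the question)
window_size = 7 * 24   # 168 input steps
H = 24                 # prediction horizon
num_pred_blocks = series_len - window_size - H + 1
print(num_pred_blocks)  # 4055, matching the (4055, 168) x_train described above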

df = pd.DataFrame(data)
print(f"Len of df: {len(df)}")
min_max_scaler = MinMaxScaler()
H = 24

window_size = 7*H
num_pred_blocks = len(df)-window_size-H+1

x_train = []
y_train = []
for i in range(num_pred_blocks):
    x_train_block = df['C'][i:(i + window_size)]
    x_train.append(x_train_block)
    y_train_block = df['C'][(i + window_size):(i + window_size + H)]
    y_train.append(y_train_block)

LEN = int(len(x_train)*window_size)
x_train = min_max_scaler.fit_transform(x_train)
batch_size = 1

def build_model():
    model = Sequential()
    model.add(LSTM(input_shape=(window_size, batch_size),
                   return_sequences=True,
                   units=num_pred_blocks))
    model.add(TimeDistributed(Dense(H)))
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="rmsprop")
    return model

num_epochs = epochs
model = build_model()
model.fit(x_train, y_train, batch_size=batch_size, epochs=50)
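For reference, a minimal set of imports this snippet appears to rely on, inferred from the names used (the Keras classes may equally come from tensorflow.keras):

import numpy as np   # used in the edited version further below
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed, Activation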

The error that comes back is this:

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 4055 arrays: [array([[0.00630006],

Am I not segmenting the data correctly? Not loading it correctly? Should the number of units be different from the number of prediction blocks? I appreciate any help. Thanks.

EDIT

The suggestion to convert them to Numpy arrays was right, but MinMaxScaler() already returns a numpy array. I reshaped the arrays to the correct dimensions, but now my machine is running into a CUDA memory error. I consider the problem solved. Thanks.

df = pd.DataFrame(data)
min_max_scaler = MinMaxScaler()
H = prediction_length

window_size = 7*H
num_pred_blocks = len(df)-window_size-H+1

x_train = []
y_train = []
for i in range(num_pred_blocks):
    x_train_block = df['C'][i:(i + window_size)].values
    x_train.append(x_train_block)
    y_train_block = df['C'][(i + window_size):(i + window_size + H)].values
    y_train.append(y_train_block)

x_train = min_max_scaler.fit_transform(x_train)
y_train = min_max_scaler.fit_transform(y_train)
x_train = np.reshape(x_train, (len(x_train), 1, window_size))
y_train = np.reshape(y_train, (len(y_train), 1, H))
batch_size = 1

def build_model():
    model = Sequential()
    model.add(LSTM(batch_input_shape=(batch_size, 1, window_size),
                   return_sequences=True,
                   units=100))
    model.add(TimeDistributed(Dense(H)))
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="rmsprop")
    return model

num_epochs = epochs
model = build_model()
model.fit(x_train, y_train, batch_size=batch_size, epochs=50)
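One quick way to confirm that the reshaped arrays and the model agree before training, a sketch assuming the arrays and model built above:

print(x_train.shape)   # expected (4055, 1, 168) after the reshape above
print(y_train.shape)   # expected (4055, 1, 24)
model.summary()        # the TimeDistributed(Dense(H)) output should be (1, 1, 24)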

Best Answer

I think you are not passing the batch size to the model correctly.

input_shape=(window_size, batch_size) is meant to be the data dimension. That part is right, but you should use input_shape=(window_size, 1).

If you want to use batches, you have to add another dimension, like this: LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2])) (quoted from Keras).

In your case:

def build_model():
    model = Sequential()
    model.add(LSTM(batch_input_shape=(batch_size, 1, window_size),
                   return_sequences=True,
                   units=num_pred_blocks))
    model.add(TimeDistributed(Dense(H)))
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="rmsprop")
    return model

You also need to change the dimensions of your data; they should be (batch_dim, data_dim_1, data_dim_2). I use numpy, so numpy.reshape() will do the job.

First, your data should be arranged row by row, so each row has shape (1, 168); then, after adding the batch dimension, it becomes (batch_n, 1, 168).
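A minimal sketch of that reshape, assuming the 4055 windows of length 168 and 24-step targets from the question:

import numpy as np

x_train = np.asarray(x_train)                      # list of windows -> shape (4055, 168)
x_train = x_train.reshape(-1, 1, 168)              # add the middle axis -> (batch_n, 1, 168)
y_train = np.asarray(y_train).reshape(-1, 1, 24)   # targets -> (batch_n, 1, 24)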

Hope this helps.

Regarding python - Keras LSTM input/output dimensions, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59020754/
