
Auto Encoder didn't improve my neural network loss




So I'm trying to improve my neural network results by using an autoencoder for preprocessing. The data looks more or less like this:


Column A   Column B   Column C   Column D   Target
     -70        -76        -76        -80        1
     -93       -100        -94       -100        4
     -88        -83        -89        -85        2
    -100        -95        -91        -99        4
    -100        -86        -82        -80        3


-100 is actually the default value for null, used whenever the device can't get the data.



I used TensorFlow for the neural network model, and also used hyperparameter tuning to search for the best results. Before I used the autoencoder, the loss was around 1 to 2. But after using the autoencoder, the loss became 3. My target is to reduce the loss to less than 1. I use a simple autoencoder with layers input_size -> input_size/2 -> input_size/2 -> input_size, trained without using the Target column.



What I do is train the autoencoder on all of the raw data, then save the autoencoder model. After that, I run the raw data through that model (model A), and the data reconstructed by model A is used to train the neural network (with hyperparameter tuning) to predict the Target.
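
For reference, here is a minimal sketch of that pipeline, assuming the feature columns (without Target) are in an array x_raw; the variable names and training settings here are illustrative, and the layer sizes follow the input_size -> input_size/2 -> input_size/2 -> input_size scheme described above:

from tensorflow import keras
from tensorflow.keras import layers

# x_raw: feature columns only (Target excluded), shape (n_samples, input_size)
input_size = x_raw.shape[1]

# Simple autoencoder: input_size -> input_size/2 -> input_size/2 -> input_size
autoencoder = keras.Sequential([
    layers.InputLayer(input_shape=(input_size,)),
    layers.Dense(input_size // 2, activation="relu"),
    layers.Dense(input_size // 2, activation="relu"),
    layers.Dense(input_size, activation="linear"),  # reconstruct the input
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train the autoencoder to reconstruct the raw features, then save it
autoencoder.fit(x_raw, x_raw, epochs=50, batch_size=32)
autoencoder.save("autoencoder.keras")

# "Model A": run the raw data through the autoencoder; the reconstructed
# output becomes the training data for the downstream network
x_processed = autoencoder.predict(x_raw)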



This is the raw data graphic:

[image: raw data]


and this is after the data is processed by the autoencoder:

[image: processed data]


Is there something wrong with my approach? Or is there any other way to improve my model's loss? I can only use neural networks because it's my assignment.



I have tried using a simple autoencoder, and the plot of the processed data looks good. But when I use the processed data to train the neural network to predict the answer, the loss is worse than before using the autoencoder. I'm looking to improve the loss so it becomes less than 1.



This is the code for the hyperparameter tuning that I used:



import keras_tuner
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import InputLayer

def build_model(hp):
    model = keras.Sequential()
    #model.add(layers.Flatten())
    model.add(InputLayer(input_shape=(x_train.shape[1], )))
    # Tune the number of layers.
    for i in range(hp.Int("num_layers", 1, 5)):
        model.add(
            layers.Dense(
                # Tune the number of units in each layer separately.
                units=hp.Int(f"units_{i}", min_value=2, max_value=10, step=1),
                activation=hp.Choice("activation", ["relu"]),
            )
        )
    # Single output unit for the regression target
    model.add(layers.Dense(1, activation="relu"))
    learning_rate = hp.Float("lr", min_value=1e-4, max_value=3e-1, sampling="log")
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
        loss="mean_squared_error",
        metrics=["mae"],
    )
    return model

build_model(keras_tuner.HyperParameters())



tuner = keras_tuner.BayesianOptimization(
    hypermodel=build_model,
    objective="val_loss",
    max_trials=40,
    overwrite=True,
    directory="my_dir",
    project_name="helloworld",
)

tuner.search(
    x_train, y_train,
    epochs=25,
    validation_data=(x_test, y_test),
    callbacks=[keras.callbacks.TensorBoard("/tmp/tb_logs")],
)
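
(As a usage note, not part of the original post: once the search finishes, the winning configuration and model can be retrieved with standard KerasTuner calls.)

best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)  # e.g. num_layers, units_0, lr

best_model = tuner.get_best_models(num_models=1)[0]
val_loss, val_mae = best_model.evaluate(x_test, y_test)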

Answer:

From your description of your implementation, it doesn't seem to me that there are any major problems in your training pipeline; however, in my opinion there are a few aspects that can be improved:




  1. The architecture of the model: you should definitely add more layers to your architecture. You could also implement a convolutional autoencoder, that is, one that introduces 1D convolutional layers (a sketch follows this list). I am quite convinced these architectural changes will bring improvements.

  2. Training hyperparameters: you didn't report much information about the training parameters, but I think they can influence training. Reduce the learning rate and introduce an early-stopping condition with a patience of 10-15 epochs (see the second sketch below).
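
A minimal sketch of the 1D convolutional autoencoder from suggestion 1 (the filter counts and kernel sizes here are illustrative assumptions, not tested values):

from tensorflow import keras
from tensorflow.keras import layers

input_size = x_train.shape[1]

conv_autoencoder = keras.Sequential([
    # Treat each feature vector as a length-input_size sequence with 1 channel
    layers.InputLayer(input_shape=(input_size, 1)),
    # Encoder
    layers.Conv1D(16, kernel_size=3, activation="relu", padding="same"),
    layers.Conv1D(8, kernel_size=3, activation="relu", padding="same"),
    # Decoder
    layers.Conv1D(16, kernel_size=3, activation="relu", padding="same"),
    layers.Conv1D(1, kernel_size=3, activation="linear", padding="same"),
])
conv_autoencoder.compile(optimizer="adam", loss="mse")

# Conv1D expects 3D input: reshape (n_samples, input_size) -> (n_samples, input_size, 1)
x_3d = x_train.reshape(-1, input_size, 1)
conv_autoencoder.fit(x_3d, x_3d, epochs=50, batch_size=32)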
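
And the early-stopping condition from suggestion 2, using the standard Keras callback (the patience value follows the 10-15 epochs suggested above; the higher epoch count is an assumption, since early stopping will cut training short anyway):

from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,                # stop after 10 epochs without improvement
    restore_best_weights=True,  # roll back to the weights of the best epoch
)

tuner.search(
    x_train, y_train,
    epochs=100,
    validation_data=(x_test, y_test),
    callbacks=[early_stop],
)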


Comments:

Ah yes, I forgot to add the code for the model using hyperparameter tuning. I have added the source code for it now. I only used a maximum of 5 layers because all of the best results used 3 or fewer layers, so I thought that was enough? Maybe I'll try your suggestion first.


Okay, thank you for adding the source code... let me know if it works!

