I'm trying to improve my neural network's results by using an autoencoder for preprocessing. The data looks more or less like this:
| Column A | Column B | Column C | Column D | Target |
|----------|----------|----------|----------|--------|
| -70      | -76      | -76      | -80      | 1      |
| -93      | -100     | -94      | -100     | 4      |
| -88      | -83      | -89      | -85      | 2      |
| -100     | -95      | -91      | -99      | 4      |
| -100     | -86      | -82      | -80      | 3      |
-100 is actually the default value for null, used when the device can't get a reading.
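Since -100 is a sentinel for "no reading" rather than a real measurement, one option (my assumption, not part of the original pipeline) is to mark those cells as missing and impute them before any scaling or autoencoding. A minimal NumPy sketch using rows from the table above; the column-mean imputation here is just one illustrative choice:

```python
import numpy as np

# Sample rows from the table above (Columns A-D only, Target excluded).
raw = np.array([
    [-70, -76, -76, -80],
    [-93, -100, -94, -100],
    [-88, -83, -89, -85],
], dtype=float)

# Treat the -100 sentinel as "missing" instead of a genuine signal value.
mask = raw == -100
data = raw.copy()
data[mask] = np.nan

# One simple (assumed) imputation: replace each missing cell with its column mean.
col_means = np.nanmean(data, axis=0)
data[mask] = np.take(col_means, np.where(mask)[1])

print(data)
```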
I used TensorFlow for the neural network model, along with hyperparameter tuning to search for the best results. Before I used the autoencoder, the loss was around 1 to 2, but after adding the autoencoder the loss became 3. My target is to get the loss below 1. The autoencoder is a simple one, input_size -> input_size/2 -> input_size/2 -> input_size, trained without the Target column.
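For reference, the input_size -> input_size/2 -> input_size/2 -> input_size shape described above can be sketched in Keras roughly like this; with the four feature columns from the table, that gives a 4-2-2-4 network. The activations, optimizer, and random training data are my assumptions beyond what the question states:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_size = 4  # Columns A-D from the table above

# input_size -> input_size/2 -> input_size/2 -> input_size
autoencoder = keras.Sequential([
    keras.Input(shape=(input_size,)),
    layers.Dense(input_size // 2, activation="relu"),  # encoder
    layers.Dense(input_size // 2, activation="relu"),  # bottleneck
    layers.Dense(input_size),                          # decoder / reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")

# Trained to reproduce its own (Target-free) input.
x = np.random.uniform(-100, -70, size=(32, input_size)).astype("float32")
autoencoder.fit(x, x, epochs=1, verbose=0)
print(autoencoder.predict(x, verbose=0).shape)
```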
What I do is train the autoencoder on all of the raw data and save it (model A). After that, I run the raw data through model A, and the data processed by model A is used to train the neural network (with hyperparameter tuning) to predict the Target.
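The three steps described above (train and save the autoencoder, run the raw data through it, train the supervised network on the processed data) can be sketched as follows. The toy data, layer sizes, and file name are all my assumptions for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-ins for the real dataset: 4 feature columns, integer Target 1-4.
x_raw = np.random.uniform(-100, -70, size=(64, 4)).astype("float32")
y = np.random.randint(1, 5, size=(64,)).astype("float32")

# Step 1: train the autoencoder on the raw features only (no Target column), then save it.
autoencoder = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(2, activation="relu"),
    layers.Dense(2, activation="relu"),
    layers.Dense(4),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_raw, x_raw, epochs=1, verbose=0)
autoencoder.save("autoencoder.keras")  # "model A"

# Step 2: run the raw data through the saved autoencoder.
model_a = keras.models.load_model("autoencoder.keras")
x_processed = model_a.predict(x_raw, verbose=0)

# Step 3: train the supervised network on the reconstructed features.
predictor = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1),
])
predictor.compile(optimizer="adam", loss="mse")
predictor.fit(x_processed, y, epochs=1, verbose=0)
print(x_processed.shape)
```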
This is the plot of the raw data:

(figure: raw data)
and this is after the data has been processed by the autoencoder:

(figure: processed data)
Is there something wrong with my approach? Or is there another way to improve my model's loss? I can only use neural networks, because this is for an assignment.
I have tried a simple autoencoder, and the plot of the processed data looks good. But when I use the processed data to train the neural network to predict the answer, the loss is worse than before using the autoencoder. I'm looking to get the loss below 1.
This is the code for hyperparameter tuning that I used:
import keras
import keras_tuner
from keras import layers
from keras.layers import InputLayer

def build_model(hp):
    model = keras.Sequential()
    model.add(InputLayer(input_shape=(x_train.shape[1],)))
    # Tune the number of hidden layers.
    for i in range(hp.Int("num_layers", 1, 5)):
        model.add(
            layers.Dense(
                # Tune the number of units in each layer separately.
                units=hp.Int(f"units_{i}", min_value=2, max_value=10, step=1),
                activation=hp.Choice("activation", ["relu"]),
            )
        )
    model.add(layers.Dense(1, activation="relu"))
    learning_rate = hp.Float("lr", min_value=1e-4, max_value=3e-1, sampling="log")
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
        loss="mean_squared_error",
        metrics=["mae"],
    )
    return model

build_model(keras_tuner.HyperParameters())

tuner = keras_tuner.BayesianOptimization(
    hypermodel=build_model,
    objective="val_loss",
    max_trials=40,
    overwrite=True,
    directory="my_dir",
    project_name="helloworld",
)
tuner.search(
    x_train,
    y_train,
    epochs=25,
    validation_data=(x_test, y_test),
    callbacks=[keras.callbacks.TensorBoard("/tmp/tb_logs")],
)