python - val_loss is halving, but val_acc stays the same

I am training a neural network and get the following output. Both loss and val_loss are decreasing, which makes me happy. However, val_acc stays constant. What could be the reason for this? My data is quite imbalanced, but I weight it with sklearn's compute_class_weight function.

Train on 109056 samples, validate on 27136 samples
Epoch 1/200
- 1174s - loss: 1.0353 - acc: 0.5843 - val_loss: 1.0749 - val_acc: 0.7871

Epoch 00001: val_acc improved from -inf to 0.78711, saving model to
nn_best_weights.h5
Epoch 2/200
- 1174s - loss: 1.0122 - acc: 0.6001 - val_loss: 1.0642 - val_acc: 0.9084

Epoch 00002: val_acc improved from 0.78711 to 0.90842, saving model to
nn_best_weights.h5
Epoch 3/200
- 1176s - loss: 0.9974 - acc: 0.5885 - val_loss: 1.0445 - val_acc: 0.9257

Epoch 00003: val_acc improved from 0.90842 to 0.92571, saving model to
nn_best_weights.h5
Epoch 4/200
- 1177s - loss: 0.9834 - acc: 0.5760 - val_loss: 1.0071 - val_acc: 0.9260

Epoch 00004: val_acc improved from 0.92571 to 0.92597, saving model to
nn_best_weights.h5
Epoch 5/200
- 1182s - loss: 0.9688 - acc: 0.5639 - val_loss: 1.0175 - val_acc: 0.9260

Epoch 00005: val_acc did not improve from 0.92597
Epoch 6/200
- 1177s - loss: 0.9449 - acc: 0.5602 - val_loss: 0.9976 - val_acc: 0.9246

Epoch 00006: val_acc did not improve from 0.92597
Epoch 7/200
- 1186s - loss: 0.9070 - acc: 0.5598 - val_loss: 0.9667 - val_acc: 0.9258

Epoch 00007: val_acc did not improve from 0.92597
Epoch 8/200
- 1178s - loss: 0.8541 - acc: 0.5663 - val_loss: 0.9254 - val_acc: 0.9221

Epoch 00008: val_acc did not improve from 0.92597
Epoch 9/200
- 1171s - loss: 0.7859 - acc: 0.5853 - val_loss: 0.8686 - val_acc: 0.9237

Epoch 00009: val_acc did not improve from 0.92597
Epoch 10/200
- 1172s - loss: 0.7161 - acc: 0.6139 - val_loss: 0.8119 - val_acc: 0.9260

Epoch 00010: val_acc did not improve from 0.92597
Epoch 11/200
- 1168s - loss: 0.6500 - acc: 0.6416 - val_loss: 0.7531 - val_acc: 0.9259

Epoch 00011: val_acc did not improve from 0.92597
Epoch 12/200
- 1164s - loss: 0.5967 - acc: 0.6676 - val_loss: 0.7904 - val_acc: 0.9260

Epoch 00012: val_acc did not improve from 0.92597
Epoch 13/200
- 1175s - loss: 0.5608 - acc: 0.6848 - val_loss: 0.7589 - val_acc: 0.9259

Epoch 00013: val_acc did not improve from 0.92597
Epoch 14/200
- 1221s - loss: 0.5377 - acc: 0.6980 - val_loss: 0.7811 - val_acc: 0.9260

Epoch 00014: val_acc did not improve from 0.92597
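
For reference, the weighting mentioned above can be done roughly as follows. This is a minimal sketch assuming sklearn's 'balanced' mode; the labels here are illustrative, not the actual data.

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Illustrative imbalanced 3-class labels; not the asker's dataset.
y_train = np.array([0] * 80 + [1] * 15 + [2] * 5)

classes = np.unique(y_train)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
class_weight = {int(c): w for c, w in zip(classes, weights)}  # {0: ~0.42, 1: ~2.22, 2: ~6.67}

# Passed to training as: model.fit(X_train, y_train, class_weight=class_weight, ...)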

My model is below. I know the kernel size is quite large, but that is intentional, because of how the data is structured.

from keras.models import Model, Sequential
from keras.layers import (BatchNormalization, Conv2D, Dense, Flatten, Input,
                          LeakyReLU, LSTM, TimeDistributed)

# CNN feature extractor, applied to every timestep via TimeDistributed.
cnn = Sequential()
cnn.add(Conv2D(16, kernel_size=(2, 100), padding='same',
               data_format="channels_first", input_shape=(1, 10, 100)))
cnn.add(LeakyReLU(alpha=0.01))
cnn.add(BatchNormalization())
cnn.add(Conv2D(16, (1, 1)))
cnn.add(LeakyReLU(alpha=0.01))
cnn.add(Conv2D(16, (1, 8)))
cnn.add(Flatten())

# LSTM over the per-timestep CNN features.
rnn = LSTM(100, return_sequences=False, dropout=0.2)

# Softmax classifier over the 3 classes.
dense = Sequential()
dense.add(Dense(3, activation='softmax'))

# Fixed batch size of 512; shape is (batch, timesteps, channels, rows, cols).
main_input = Input(batch_shape=(512, 1, 1, 10, 100))
model = TimeDistributed(cnn)(main_input)
model = rnn(model)
model = dense(model)
replica = Model(inputs=main_input, outputs=model)
replica.compile(loss='categorical_crossentropy', optimizer='adam',
                metrics=['accuracy'])

Best Answer

It is hard to answer your question without knowing your model.

Possible reasons are:

  • There may be nothing wrong with your model. This may simply be the highest accuracy you can get.
  • Your data may be imbalanced or not shuffled. val_acc being higher than acc suggests something may be wrong with your training/validation/test split. Early in training, accuracy is usually higher on the training set than on the validation set; then val_acc catches up, or it doesn't ;) It could also mean there is not much variance in your dataset, in which case this behavior can occur.
  • Your learning rate may be too large. Try reducing it (a sketch follows this list).
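
For the last point, a minimal sketch of reducing the learning rate: start Adam below its Keras default of 1e-3, and optionally shrink it further whenever val_loss plateaus. The 1e-4 value, factor, and patience are illustrative assumptions, not taken from the question.

from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau

# Start below Keras' default Adam learning rate of 1e-3
# (newer Keras versions spell the argument learning_rate instead of lr).
optimizer = Adam(lr=1e-4)
replica.compile(loss='categorical_crossentropy', optimizer=optimizer,
                metrics=['accuracy'])

# Additionally halve the learning rate whenever val_loss stalls for 3 epochs.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3,
                              min_lr=1e-6, verbose=1)
# replica.fit(..., callbacks=[reduce_lr])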

Keep in mind that the quantity the model actually minimizes is the loss, so during optimization you should track the loss and monitor its improvement.
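
Concretely, the log above checkpoints on val_acc; here is a minimal sketch of monitoring val_loss instead, reusing the nn_best_weights.h5 filename from the question (the patience value is an assumption):

from keras.callbacks import ModelCheckpoint, EarlyStopping

callbacks = [
    # Save the model whenever val_loss (rather than val_acc) improves.
    ModelCheckpoint('nn_best_weights.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # Stop training once val_loss has not improved for 10 epochs.
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
]
# replica.fit(..., callbacks=callbacks)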

Check this link for more information on how to evaluate your model.

Regarding "python - val_loss is halving, but val_acc stays the same", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55065924/
