python - Why is my model overfitting on the second epoch?

I am a beginner in deep learning, and I am trying to train a deep learning model with Mobilenet_v2 and Inception to classify different American Sign Language gestures.

Below is the code where I create an ImageDataGenerator to build the training and validation sets.

# Reformat images and create batches
import tensorflow as tf

IMAGE_RES = 224
BATCH_SIZE = 32

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0.4
)

train_generator = datagen.flow_from_directory(
    base_dir,
    target_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    subset='training'
)

val_generator = datagen.flow_from_directory(
    base_dir,
    target_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    subset='validation'
)

Below is the code for training the model:

# Do transfer learning with TensorFlow Hub
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers

URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(IMAGE_RES, IMAGE_RES, 3))

# Freeze the pre-trained feature extractor
feature_extractor.trainable = False

# Attach a classification head
model = tf.keras.Sequential([
    feature_extractor,
    layers.Dense(5, activation='softmax')
])

model.summary()

# Compile and train the model
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])

EPOCHS = 5

history = model.fit(train_generator,
                    steps_per_epoch=len(train_generator),
                    epochs=EPOCHS,
                    validation_data=val_generator,
                    validation_steps=len(val_generator))

Epoch 1/5
94/94 [==============================] - 19s 199ms/step - loss: 0.7333 - accuracy: 0.7730 - val_loss: 0.6276 - val_accuracy: 0.7705
Epoch 2/5
94/94 [==============================] - 18s 190ms/step - loss: 0.1574 - accuracy: 0.9893 - val_loss: 0.5118 - val_accuracy: 0.8145
Epoch 3/5
94/94 [==============================] - 18s 191ms/step - loss: 0.0783 - accuracy: 0.9980 - val_loss: 0.4850 - val_accuracy: 0.8235
Epoch 4/5
94/94 [==============================] - 18s 196ms/step - loss: 0.0492 - accuracy: 0.9997 - val_loss: 0.4541 - val_accuracy: 0.8395
Epoch 5/5
94/94 [==============================] - 18s 193ms/step - loss: 0.0349 - accuracy: 0.9997 - val_loss: 0.4590 - val_accuracy: 0.8365

I have tried using data augmentation, but the model still overfits, so I am wondering whether I am doing something wrong in my code.
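For reference, adding augmentation with the same ImageDataGenerator API usually looks something like the sketch below; the specific transforms and their values are illustrative assumptions, not the settings actually used here:

import tensorflow as tf

IMAGE_RES = 224
BATCH_SIZE = 32

# Augment only the training images; keep the validation generator at plain
# rescaling so validation accuracy stays comparable across experiments.
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=15,        # assumed values; tune for your data
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    validation_split=0.4
)
val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0.4
)

train_generator = train_datagen.flow_from_directory(
    base_dir,
    target_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    subset='training'
)
val_generator = val_datagen.flow_from_directory(
    base_dir,
    target_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    subset='validation'
)

Horizontal flipping is deliberately left out, since mirroring an image can change the meaning of a sign.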

Best Answer

Your dataset is very small. Try splitting it with a random seed and check whether the problem persists (one way to do this is sketched below).
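An illustrative sketch of such a re-split: it uses tf.keras.utils.image_dataset_from_directory (available in recent TensorFlow versions) instead of the original ImageDataGenerator, because its seed argument shuffles the file list before applying validation_split; the seed value itself is an assumption:

import tensorflow as tf

IMAGE_RES = 224
BATCH_SIZE = 32

# Re-split the same directory with an explicit random seed; changing `seed`
# changes which images land in the training vs. validation subset.
train_ds = tf.keras.utils.image_dataset_from_directory(
    base_dir,
    validation_split=0.4,
    subset='training',
    seed=42,                               # assumed value; vary it to test the split
    image_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    label_mode='categorical')

val_ds = tf.keras.utils.image_dataset_from_directory(
    base_dir,
    validation_split=0.4,
    subset='validation',
    seed=42,                               # must match the training seed
    image_size=(IMAGE_RES, IMAGE_RES),
    batch_size=BATCH_SIZE,
    label_mode='categorical')

Note that, unlike ImageDataGenerator(rescale=1./255), these datasets are not rescaled; if you switch to them, add a tf.keras.layers.Rescaling(1./255) layer at the front of the model.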

If it does, add regularization and reduce the complexity of the network.

Also try a different optimizer and a smaller learning rate (try an LR scheduler).
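A minimal sketch of what those suggestions could look like in this setup; the dropout rate, learning rate, and schedule are illustrative assumptions, not values given in the answer:

import tensorflow as tf
from tensorflow.keras import layers

# Add dropout before the classification head as regularization, and train with
# a smaller Adam learning rate plus a simple learning-rate scheduler.
model = tf.keras.Sequential([
    feature_extractor,                     # frozen TF-Hub feature extractor from above
    layers.Dropout(0.3),                   # assumed dropout rate
    layers.Dense(5, activation='softmax')
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # smaller than the default 1e-3
    loss='categorical_crossentropy',
    metrics=['accuracy'])

# Halve the learning rate every two epochs.
def schedule(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 2 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule)

history = model.fit(train_generator,
                    epochs=EPOCHS,
                    validation_data=val_generator,
                    callbacks=[lr_callback])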

Regarding "python - Why is my model overfitting on the second epoch?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/63406460/
