
python - Training accuracy of fit_generator is 0

Reposted · Author: 行者123 · Updated: 2023-11-30 09:26:45

I am trying to build a model from scratch with TensorFlow, Keras, and ImageDataGenerator, but it is not working as expected. I only use the generator to load images, so no data augmentation is applied. There are two folders with training and test data, each containing 36 subfolders full of images. I get the following output:

Using TensorFlow backend.
Found 13268 images belonging to 36 classes.
Found 3345 images belonging to 36 classes.
Epoch 1/2
1/3 [=========>....................] - ETA: 0s - loss: 15.2706 - acc: 0.0000e+00
3/3 [==============================] - 1s 180ms/step - loss: 14.7610 - acc: 0.0667 - val_loss: 15.6144 - val_acc: 0.0312
Epoch 2/2
1/3 [=========>....................] - ETA: 0s - loss: 14.5063 - acc: 0.1000
3/3 [==============================] - 0s 32ms/step - loss: 15.5808 - acc: 0.0333 - val_loss: 15.6144 - val_acc: 0.0312

Although it looks fine, it is obviously not training at all. I have tried different numbers of epochs and steps and larger datasets, and almost nothing changes. Even with more than 60k images, each epoch trains in about half a second! Strangely, when I tried saving the images to their respective folders, it only saved about 500-600 of them and then apparently stopped.

from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Conv2D, Dropout
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import keras
import os

if __name__ == '__main__':
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    image_size = 28
    img_rows = 28
    img_cols = 28
    num_classes = 36

    data_generator = ImageDataGenerator()

    train_generator = data_generator.flow_from_directory(
        directory="/final train 1 of 5/",
        save_to_dir="/image generator output/train/",
        target_size=(image_size, image_size),
        color_mode="grayscale",
        batch_size=10,
        class_mode='categorical')

    validation_generator = data_generator.flow_from_directory(
        directory="/final test 1 of 5/",
        save_to_dir="/image generator output/test/",
        target_size=(image_size, image_size),
        color_mode="grayscale",
        class_mode='categorical')

    model = Sequential()
    model.add(Conv2D(20, kernel_size=(3, 3),
                     activation='relu',
                     input_shape=(img_rows, img_cols, 1)))
    model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer='adam',  # adam/sgd
                  metrics=['accuracy'])

    model.fit_generator(train_generator,
                        steps_per_epoch=3,
                        epochs=2,
                        validation_data=validation_generator,
                        validation_steps=1)

It seems like something is silently failing and crippling the training process.

Best Answer

The problem is that you are misunderstanding the steps_per_epoch argument of fit_generator. Let's look at the documentation:

steps_per_epoch: Integer. Total number of steps (batches of samples) to yield from generator before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of samples of your dataset divided by the batch size. Optional for Sequence: if unspecified, will use the len(generator) as a number of steps.

So basically, it determines how many batches are generated in each epoch. By definition, an epoch means going over the entire training data, so we must set this argument to the total number of samples divided by the batch size. In your example that would be steps_per_epoch = 13268 // 10. Of course, as the documentation mentions, you can also leave it unspecified and it will be inferred automatically.
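Concretely, with the numbers reported by the question's generators (13268 training images, batch size 10) this is just integer division. The following is a minimal sketch of that arithmetic, not code from the question:

```python
# Sample count and batch size taken from the question's log output.
train_samples = 13268   # "Found 13268 images belonging to 36 classes."
batch_size = 10         # batch_size passed to flow_from_directory

# One epoch = one full pass over the data, i.e. this many batches.
steps_per_epoch = train_samples // batch_size
print(steps_per_epoch)  # 1326
```

Compare that with the steps_per_epoch=3 in the question: each "epoch" saw only 30 of the 13268 images, which explains both the half-second epochs and the flat accuracy.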

Also, the same thing applies to the validation_steps argument.
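A hedged sketch of how both step counts could be derived directly from the generators themselves, so they stay consistent if a batch size changes. It assumes the DirectoryIterator objects from the question, which expose `n` (sample count) and `batch_size` attributes; the helper name `full_pass_steps` is made up for illustration:

```python
def full_pass_steps(generator):
    """Batches needed to cover every sample once (floor division, as in the docs)."""
    return generator.n // generator.batch_size

# With the question's generators this would give 13268 // 10 = 1326 training
# steps and 3345 // 32 = 104 validation steps (32 is flow_from_directory's
# default batch size, since the validation generator does not set one).
# model.fit_generator(train_generator,
#                     steps_per_epoch=full_pass_steps(train_generator),
#                     epochs=2,
#                     validation_data=validation_generator,
#                     validation_steps=full_pass_steps(validation_generator))
```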

Regarding python - Training accuracy of fit_generator is 0, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53654594/
