
python - Keras model gives very low training and validation accuracy for multi-label image classification

Reposted. Author: 行者123. Updated: 2023-11-28 22:16:51

I am passing images from 50 classes to the model below, but no matter how I tune the parameters, the accuracy I get is almost identical. The training and validation data are correct.

Each class has 34 training images and 6 validation images.

import keras
from keras.layers import Activation, Dense, Dropout, Conv2D, Flatten, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import Adam, SGD

model = Sequential()
input_shape=(256, 256, 3)
adam = Adam(lr=0.000001, decay=0.001)
#sgd = SGD(lr=0.1, decay=1e-2, momentum=0.9)
chanDim=-1
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))

# model.add(Conv2D(64, (3, 3)))
# model.add(Activation('relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
# model.add(Dense(300))
# model.add(Dropout(rate=0.5))
# model.add(Activation('relu'))
model.add(Dense(512))
model.add(Dropout(rate=0.5))
model.add(Activation('relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(50))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])

import PIL
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator
train_data_dir = 'C:/Users/abhir/Desktop/Difference4/train'
validation_data_dir = 'C:/Users/abhir/Desktop/Difference4/validate'

epochs = 10
# adding more augmentation parameters to the training generator did not help much either
train_datagen = ImageDataGenerator(rescale=1./255)
validate_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(256, 256), batch_size=12, class_mode='categorical', seed=7)
validate_generator = validate_datagen.flow_from_directory(validation_data_dir, target_size=(256, 256), batch_size=6, class_mode='categorical', seed=7)

# increasing steps_per_epoch and the batch size does not help much either
model.fit_generator(train_generator, steps_per_epoch=100,epochs=5, validation_data=validate_generator, validation_steps=50)
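One thing worth noting (my own observation, not something raised in the question or answer): with 50 × 34 = 1700 training images and batch_size=12, steps_per_epoch=100 does not cover the whole training set each epoch. A quick sanity check of the step counts:

```python
import math

n_train, train_batch = 50 * 34, 12   # 1700 training images, batch_size=12
n_val, val_batch = 50 * 6, 6         # 300 validation images, batch_size=6

# One full pass over the training data needs ceil(1700 / 12) = 142 steps,
# not the 100 used above; the validation_steps value of 50 is exactly right.
print(math.ceil(n_train / train_batch))  # 142
print(math.ceil(n_val / val_batch))      # 50
```

This does not explain the low accuracy by itself, but it means each "epoch" above sees only about 1200 of the 1700 images.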

The result is as follows: 100/100 [==============================] - 337s 3s/step - loss: 5.7115 - acc: 0.0308 - val_loss: 3.9834 - val_acc: 0.0367

Best Answer

You are training a neural network with thousands of trainable parameters on 34 images per class, for 10 epochs (340 images per class in total). A useful rule of thumb is that you should have more training examples than trainable parameters. The number of trainable parameters can be printed with model.summary().
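As a rough hand-computed sketch (assuming 'valid' padding, which is the Keras Conv2D default, and the 256×256×3 input from the question), the trainable-parameter count of the model above can be estimated without running Keras at all:

```python
# Rough trainable-parameter count for the model in the question,
# done with plain arithmetic instead of model.summary().

def conv_params(k, c_in, c_out):
    # k*k kernel over c_in channels, c_out filters, plus biases
    return k * k * c_in * c_out + c_out

def bn_trainable(c):
    # BatchNormalization trains gamma and beta per channel
    return 2 * c

size = 256        # spatial size of the input
params = 0
c_in = 3
for c_out in (32, 32, 64):
    params += conv_params(3, c_in, c_out)
    size = (size - 2) // 2        # 3x3 'valid' conv, then 2x2 max-pool
    params += bn_trainable(c_out)
    c_in = c_out

flat = size * size * 64           # 30 * 30 * 64 = 57600 after Flatten
params += flat * 512 + 512        # Dense(512)
params += 512 * 50 + 50           # Dense(50)

print(params)  # 29546258 — roughly 29.5 million trainable parameters
```

So the model actually has on the order of tens of millions of trainable parameters against only 1700 training images, which makes the rule-of-thumb gap even starker than "thousands" suggests.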

So you could try something like 1000 epochs and watch your network overfit the training data, but ultimately there is simply not enough data. Look at the loss curves, and check the TensorBoard histograms to see whether your model is learning anything at all.

Regarding python - Keras model gives very low training and validation accuracy for multi-label image classification, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51879667/
