
tensorflow - In transfer learning with InceptionV3, my loss is "nan" and accuracy is "0.0000e+00"


I am working on transfer learning. My use case is to classify images into two classes, and I use InceptionV3 for the classification. While training my model, I get nan loss and 0.0000e+00 accuracy in every epoch. I use 20 epochs because my data set is small: I have 1000 images for training and 100 images for testing, with 5 records per batch.

from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)

x = Dense(512, activation='relu')(x)
x = Dense(32, activation='relu')(x)
# and a logistic layer -- we have 2 classes
predictions = Dense(1, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)


for layer in base_model.layers:
    layer.trainable = False

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    'C:/Users/Desktop/Transfer/train/',
    target_size=(64, 64),
    batch_size=5,
    class_mode='binary')

test_set = test_datagen.flow_from_directory(
    'C:/Users/Desktop/Transfer/test/',
    target_size=(64, 64),
    batch_size=5,
    class_mode='binary')

model.fit_generator(
    training_set,
    steps_per_epoch=1000,
    epochs=20,
    validation_data=test_set,
    validation_steps=100)

Best Answer

It sounds like your gradients are exploding. There could be a few reasons for this:

  • Check that your inputs are generated correctly, for example by using the save_to_dir parameter of flow_from_directory
  • Since your batch size is 5, change steps_per_epoch from 1000 to 1000/5 = 200
  • Use a sigmoid activation instead of softmax (a softmax over a single unit always outputs 1.0, which breaks binary_crossentropy)
  • Set a lower learning rate for Adam; to do that, create the optimizer separately, e.g. adam = Adam(0.0001), and pass it to model.compile(..., optimizer=adam), as in the sketch below
  • Try VGG16 instead of InceptionV3

Let us know when you have tried all of the above.
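For reference, here is a minimal sketch that puts the suggested fixes together, reusing the variable names and directory layout from the question. The save_to_dir path is a hypothetical example for inspecting the augmented batches, and validation_steps=20 simply applies the same batch-size arithmetic (100 test images / batch size 5) to the test set:

from keras.optimizers import Adam

# single-unit sigmoid head: a softmax over one unit is always 1.0
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)

# build the optimizer separately so the learning rate can be lowered
adam = Adam(lr=0.0001)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])

# save_to_dir writes the augmented batches to disk so they can be inspected
# (the directory below is a hypothetical example and must exist)
training_set = train_datagen.flow_from_directory(
    'C:/Users/Desktop/Transfer/train/',
    target_size=(64, 64),
    batch_size=5,
    class_mode='binary',
    save_to_dir='C:/Users/Desktop/Transfer/augmented/')

# 1000 training images / batch size 5 = 200 steps per epoch;
# 100 test images / batch size 5 = 20 validation steps
model.fit_generator(
    training_set,
    steps_per_epoch=200,
    epochs=20,
    validation_data=test_set,
    validation_steps=20)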

Regarding "tensorflow - In transfer learning with InceptionV3, my loss is "nan" and accuracy is "0.0000e+00"", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54707735/
