
python - Keras and VGG training: why do I "lose" training and validation examples following model.predict_generator


I am training VGG on some of my own images. I have the following code:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras import applications

img_width, img_height = 512, 512
top_model_weights_path = 'UIP-versus-inconsistent.h5'
train_dir = 'MasterHRCT/Limited-Cuts-UIP-Inconsistent/train'
validation_dir = 'MasterHRCT/Limited-Cuts-UIP-Inconsistent/validation'
nb_train_samples = 1500
nb_validation_samples = 500
epochs = 50
batch_size = 16

def save_bottleneck_features():

    datagen = ImageDataGenerator(rescale=1. / 255)

    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        train_dir,
        target_size=(img_width, img_height),
        shuffle=False,
        class_mode=None,
        batch_size=batch_size
    )

    bottleneck_features_train = model.predict_generator(generator=generator, steps=nb_train_samples // batch_size)

    np.save(file="UIP-versus-inconsistent_train.npy", arr=bottleneck_features_train)

    generator = datagen.flow_from_directory(
        validation_dir,
        target_size=(img_width, img_height),
        shuffle=False,
        class_mode=None,
        batch_size=batch_size,
    )

    bottleneck_features_validation = model.predict_generator(generator, nb_validation_samples // batch_size)

    np.save(file="UIP-versus-inconsistent_validate.npy", arr=bottleneck_features_validation)


When I run this against my directories I get the expected output:

Found 1500 images belonging to 2 classes.
Found 500 images belonging to 2 classes.

Then I run

 train_data = np.load(file="UIP-versus-inconsistent_train.npy")
train_labels = np.array([0] * 750 + [1] * 750)
validation_data = np.load(file="UIP-versus-inconsistent_validate.npy")
validation_labels = np.array([0] * 250 + [1] * 250)

and then check the data:

 print("Train data shape", train_data.shape)
print("Train_labels shape", train_labels.shape)
print("Validation_data shape", validation_labels.shape)
print("Validation_labels", validation_labels.shape)

and I get

Train data shape (1488, 16, 16, 512)
Train_labels shape (1488,)
Validation_data shape (496,)
Validation_labels (496,)

And this varies: instead of 1500 training examples and 500 validation examples, I "lose" some. Sometimes when I run save_bottleneck_features() the numbers come back correct, but sometimes they do not; this tends to happen when the run takes a long time. Is there a reproducible explanation for this? Could some images be corrupted?

Best Answer

It is quite simple:

1488 = (1500 // batch_size) * batch_size
496 = (500 // batch_size) * batch_size

Your "loss" comes from the truncation of integer division: predict_generator only consumes the number of steps you pass it, so the samples left over after nb_samples // batch_size full batches are never pushed through the network.
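
For illustration (the arithmetic below and the math.ceil workaround are my own sketch, not part of the original answer), assuming the same batch_size and sample counts as in the question:

import math

batch_size = 16
nb_train_samples, nb_validation_samples = 1500, 500

# Floor division silently drops the samples that do not fill a complete batch:
print((nb_train_samples // batch_size) * batch_size)       # 93 * 16 = 1488
print((nb_validation_samples // batch_size) * batch_size)  # 31 * 16 = 496

# One common workaround (hypothetical here): round the step count up so the
# final, smaller batch yielded by flow_from_directory is still consumed.
train_steps = math.ceil(nb_train_samples / batch_size)            # 94
validation_steps = math.ceil(nb_validation_samples / batch_size)  # 32
# bottleneck_features_train = model.predict_generator(generator, steps=train_steps)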

Regarding python - Keras and VGG training: why do I "lose" training and validation examples following model.predict_generator, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45164085/
