
python - Error when training a deep learning model

Reposted · Author: 行者123 · Updated: 2023-12-04 15:22:28

So I designed a CNN and compiled it with the following parameters:

import csv
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop

training_file_loc = "8-SignLanguageMNIST/sign_mnist_train.csv"
testing_file_loc = "8-SignLanguageMNIST/sign_mnist_test.csv"

def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter = ",")
        next(file, None)

        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28, 28)

            images.append(img)
            labels.append(label)

    images = np.array(images).astype("float64")
    labels = np.array(labels).astype("float64")

    return images, labels

training_images, training_labels = getData(training_file_loc)
testing_images, testing_labels = getData(testing_file_loc)

print(training_images.shape, training_labels.shape)
print(testing_images.shape, testing_labels.shape)

training_images = np.expand_dims(training_images, axis = 3)
testing_images = np.expand_dims(testing_images, axis = 3)

training_datagen = ImageDataGenerator(
    rescale = 1/255,
    rotation_range = 45,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    fill_mode = "nearest"
)

training_generator = training_datagen.flow(
    training_images,
    training_labels,
    batch_size = 64,
)


validation_datagen = ImageDataGenerator(
    rescale = 1/255,
    rotation_range = 45,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    fill_mode = "nearest"
)

validation_generator = validation_datagen.flow(
    testing_images,
    testing_labels,
    batch_size = 64,
)

model = tf.keras.Sequential([
    keras.layers.Conv2D(16, (3, 3), input_shape = (28, 28, 1), activation = "relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Conv2D(32, (3, 3), activation = "relu"),
    keras.layers.MaxPooling2D(2, 2),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation = "relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(512, activation = "relu"),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(26, activation = "softmax")
])

model.compile(
    loss = "categorical_crossentropy",
    optimizer = RMSprop(lr = 0.001),
    metrics = ["accuracy"]
)

However, when I run model.fit(), I get the following error:

ValueError: Shapes (None, 1) and (None, 24) are incompatible

After changing the loss function to sparse_categorical_crossentropy, the program ran fine.

I don't understand why this happens.

Can someone explain this, and the difference between these loss functions?

Best Answer

The problem is that categorical_crossentropy expects one-hot encoded labels: for each sample, it expects a tensor of length num_classes in which the element at the label's index is set to 1 and every other element is 0.
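For illustration, here is a minimal numpy sketch of what one-hot encoding produces. The one_hot helper is a hypothetical stand-in; in Keras you would use tf.keras.utils.to_categorical:

```python
import numpy as np

# Hypothetical stand-in for tf.keras.utils.to_categorical:
# expand integer labels into one-hot rows of length num_classes.
def one_hot(labels, num_classes):
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

print(one_hot([0, 2, 1], num_classes=3))
# → [[1. 0. 0.]
#    [0. 0. 1.]
#    [0. 1. 0.]]
```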

sparse_categorical_crossentropy, on the other hand, works directly with integer labels (the intended use case being a large number of classes, where one-hot encoded labels would waste a lot of memory on zeros). I believe, but cannot confirm, that categorical_crossentropy runs faster than its sparse counterpart.
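That the two losses compute the same quantity, just from differently formatted labels, can be sketched in plain numpy (toy probabilities, no Keras involved):

```python
import numpy as np

# Toy softmax outputs for 2 samples over 4 classes (each row sums to 1).
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1]])
int_labels = np.array([0, 1])           # sparse form: class indices
one_hot_labels = np.eye(4)[int_labels]  # categorical form: one-hot rows

# categorical_crossentropy: dot the one-hot row with log-probabilities
cce = -np.sum(one_hot_labels * np.log(probs), axis=1)
# sparse_categorical_crossentropy: index the true class directly
scce = -np.log(probs[np.arange(len(int_labels)), int_labels])

print(np.allclose(cce, scce))  # → True
```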

In your case, with 26 classes, I would suggest using the non-sparse version and converting your labels to one-hot encoding, like so:

def getData(filename):
    images = []
    labels = []
    with open(filename) as csv_file:
        file = csv.reader(csv_file, delimiter = ",")
        next(file, None)

        for row in file:
            label = row[0]
            data = row[1:]
            img = np.array(data).reshape(28, 28)

            images.append(img)
            labels.append(label)

    images = np.array(images).astype("float64")
    labels = np.array(labels).astype("float64")

    return images, tf.keras.utils.to_categorical(labels, num_classes=26)  # you can omit num_classes to have it computed from the data

Side note: unless you have a reason to use float64 for the images, I would switch to float32 (it can roughly halve the memory needed for the dataset, and the model casts them to float32 as its first operation anyway).
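A quick numpy check of the memory claim (the array shape here is an arbitrary example, not the real dataset size):

```python
import numpy as np

# A dummy batch of 1000 grayscale 28x28 images in each precision.
imgs64 = np.zeros((1000, 28, 28), dtype="float64")
imgs32 = imgs64.astype("float32")

# float32 uses 4 bytes per element instead of 8, so exactly half the memory.
print(imgs64.nbytes, imgs32.nbytes)
```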

Regarding python - Error when training a deep learning model, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63011026/
