
python - Why is this Keras network not "learning"?


I am trying to build a convolutional neural network to classify cats and dogs (a very basic problem, since I am doing this to learn). One approach I am trying is to use two output neurons, one per class (instead of a single neuron with 0 --> cat and 1 --> dog), but for some reason the network is not learning. Can someone help me?
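For context, the two formulations differ only in the output head and its paired loss. A minimal sketch of both, where the input size (256) is just a placeholder, not a value from the question:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Two-neuron head: pairs with one-hot labels ([1, 0] / [0, 1])
# and categorical_crossentropy.
two_neuron = Sequential([Dense(2, activation='softmax', input_shape=(256,))])
two_neuron.compile(loss='categorical_crossentropy', optimizer='adam')

# Single-neuron head: pairs with flat 0/1 labels and
# binary_crossentropy. For two classes the formulations are
# mathematically equivalent.
one_neuron = Sequential([Dense(1, activation='sigmoid', input_shape=(256,))])
one_neuron.compile(loss='binary_crossentropy', optimizer='adam')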

Here is the model:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop,Adam
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

optimizer = Adam(lr=1e-4)
objective = 'categorical_crossentropy'


def classifier():

    model = Sequential()

    model.add(Conv2D(64, 3, padding='same', input_shape=train.shape[1:], activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(2))
    model.add(Activation('softmax'))

    print("Compiling model...")
    model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
    return model

print("Creating model:")
model = classifier()

Here is the main loop:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

epochs = 5000
batch_size = 16

class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.val_losses = []

    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))

early_stopping = EarlyStopping(monitor='val_loss', patience=4, verbose=1, mode='min')


def run():

    history = LossHistory()
    print("running model...")
    model.fit(train, labels, batch_size=batch_size, epochs=epochs,
              validation_split=0.10, verbose=2, shuffle=True,
              callbacks=[history, early_stopping])

    print("making predictions on test set...")
    predictions = model.predict(test, verbose=0)
    return predictions, history

predictions, history = run()

loss = history.losses
val_loss = history.val_losses

Here is an example of the input labels:

array([[1, 0],
       [0, 1],
       [1, 0],
       ...,
       [0, 1],
       [0, 1],
       [0, 1]])
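This one-hot layout can be produced from flat 0/1 labels with to_categorical. A minimal sketch, assuming the labels start as a 0/1 vector (the example values are illustrative, not the question's data):

import numpy as np
from keras.utils import np_utils

# Hypothetical flat labels (0 = cat, 1 = dog); illustration only.
flat_labels = np.array([0, 1, 0, 1, 1])

# Maps 0 -> [1, 0] and 1 -> [0, 1], matching the array shown above.
labels = np_utils.to_categorical(flat_labels, 2)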

PS: Don't worry about the input format, because the same input works fine with the binary classifier.

Best Answer

The rate parameter of your dropout layers is too large. Dropout layers are a regularization technique for deep neural networks, used to combat overfitting. The rate parameter specifies what fraction of the previous layer's activations is dropped during training; a rate of 0.5 means dropping 50% of the previous layer's activations. While a rate that large is sometimes viable, it can also prevent the network from learning. So choose the rate parameter of your dropout layers with care.
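One way to act on this suggestion, as a minimal sketch (the 0.2 rate and the 4096-unit input size are illustrative choices, not values from the answer):

from keras.models import Sequential
from keras.layers import Dense, Dropout

# The question's dense block with a smaller dropout rate: only 20%
# of activations are zeroed during training instead of 50%.
head = Sequential()
head.add(Dense(256, activation='relu', input_shape=(4096,)))
head.add(Dropout(0.2))
head.add(Dense(256, activation='relu'))
head.add(Dropout(0.2))
head.add(Dense(2, activation='softmax'))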

Regarding python - Why is this Keras network not "learning"?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51187664/
