I built a Keras CNN based on the VGG16 model to classify flowers; the dataset is here. I built two models with the same architecture and the same total number of parameters, but constructed in different ways: one uses Model (the functional API) and the other uses Sequential. The Sequential version gives me good results (84% val_acc), but the Model version gives me poor results (50% val_acc). I hope someone can point out what the difference is. Thanks!
Sequential
import tensorflow as tf
import keras
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Flatten, Dropout, GlobalAveragePooling2D
from keras import backend as K
from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.callbacks import TensorBoard
import numpy as np
import time
## image path
train_data_dir = 'dataset/training_set'
validation_data_dir = 'dataset/test_set'
## other
img_width, img_height = 299, 299
nb_train_samples = 100
nb_validation_samples = 800
top_epochs = 50
fit_epochs = 50
batch_size = 24
nb_classes = 5
nb_epoch = 10
# start measurement
start = time.time()
# import vgg16 model
input_tensor = Input(shape=(img_width, img_height, 3))
vgg16 = keras.applications.VGG16(weights='imagenet', include_top=False, input_tensor=input_tensor)
# creating an FC layer
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg16.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(nb_classes, activation='softmax'))
top_model.summary()
# join VGG16 and the FC top
vgg_model = Model(inputs=vgg16.input, outputs=top_model(vgg16.output))
print(vgg_model.layers[:15])
# freeze all layers up to the last convolution block
for layer in vgg_model.layers[:15]:
    layer.trainable = False
vgg_model.summary()
# compile the model
vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=optimizers.SGD(lr=1e-3, momentum=0.9),
                  metrics=['accuracy']
                  )
# Setting learning data
train_datagen = ImageDataGenerator(rescale=1.0 / 255, zoom_range=0.2, horizontal_flip=True)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    color_mode='rgb',
    class_mode='categorical',
    batch_size=batch_size,
    shuffle=True
)
validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    color_mode='rgb',
    class_mode='categorical',
    batch_size=batch_size,
    shuffle=True
)
history = vgg_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples,
    epochs=nb_epoch,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples
)
Sequential - network
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) (None, 299, 299, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 299, 299, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 299, 299, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 149, 149, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 149, 149, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 149, 149, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 74, 74, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 74, 74, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 74, 74, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 74, 74, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 37, 37, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 37, 37, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 37, 37, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 37, 37, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 18, 18, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 18, 18, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 18, 18, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 18, 18, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 9, 9, 512) 0
_________________________________________________________________
sequential_6 (Sequential) (None, 5) 10618373
=================================================================
Total params: 25,333,061
Trainable params: 17,697,797
Non-trainable params: 7,635,264
Sequential - results
Epoch 1/10
100/100 [==============================] - 50s 498ms/step - loss: 1.2821 - acc: 0.4912 - val_loss: 0.7209 - val_acc: 0.7327
Epoch 2/10
100/100 [==============================] - 48s 477ms/step - loss: 0.5827 - acc: 0.7787 - val_loss: 0.5326 - val_acc: 0.7816
Epoch 3/10
100/100 [==============================] - 47s 466ms/step - loss: 0.5355 - acc: 0.8101 - val_loss: 0.4951 - val_acc: 0.8150
Epoch 4/10
100/100 [==============================] - 46s 458ms/step - loss: 0.4020 - acc: 0.8612 - val_loss: 0.4458 - val_acc: 0.8413
Epoch 5/10
100/100 [==============================] - 49s 485ms/step - loss: 0.3465 - acc: 0.8767 - val_loss: 0.3904 - val_acc: 0.8496
Epoch 6/10
100/100 [==============================] - 46s 460ms/step - loss: 0.3330 - acc: 0.8747 - val_loss: 0.3961 - val_acc: 0.8568
Epoch 7/10
100/100 [==============================] - 45s 448ms/step - loss: 0.3188 - acc: 0.8896 - val_loss: 0.4462 - val_acc: 0.8389
Epoch 8/10
100/100 [==============================] - 47s 472ms/step - loss: 0.2302 - acc: 0.9208 - val_loss: 0.4048 - val_acc: 0.8568
Epoch 9/10
100/100 [==============================] - 45s 453ms/step - loss: 0.2172 - acc: 0.9192 - val_loss: 0.4101 - val_acc: 0.8795
Epoch 10/10
100/100 [==============================] - 45s 453ms/step - loss: 0.1867 - acc: 0.9321 - val_loss: 0.3337 - val_acc: 0.8878
Model
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Input, Flatten, Dense, Dropout
from keras.models import Model
from keras import optimizers
train_data_dir = 'dataset/training_set'
validation_data_dir = 'dataset/test_set'
## other
img_width, img_height = 299, 299
nb_train_samples = 100
nb_validation_samples = 800
top_epochs = 50
fit_epochs = 50
batch_size = 24
nb_classes = 5
nb_epoch = 10
# build the CNN
model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
input = Input(shape=(299,299, 3),name = 'image_input')
output_vgg16_conv = model_vgg16_conv(input)
for layer in model_vgg16_conv.layers[:15]:
    layer.trainable = False
model_vgg16_conv.summary()
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(256, activation='softmax')(x)
x = Dropout(0.5)(x)
x = Dense(5, activation='softmax', name='predictions')(x)
vgg_model = Model(inputs=input, outputs=x)
vgg_model.summary()
#Image preprocessing and image augmentation with keras
vgg_model.compile(loss='categorical_crossentropy',
                  optimizer=optimizers.SGD(lr=1e-3, momentum=0.9),
                  metrics=['accuracy']
                  )
# Setting learning data
train_datagen = ImageDataGenerator(rescale=1.0 / 255, zoom_range=0.2, horizontal_flip=True)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    color_mode='rgb',
    class_mode='categorical',
    batch_size=batch_size,
    shuffle=True
)
validation_generator = validation_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    color_mode='rgb',
    class_mode='categorical',
    batch_size=batch_size,
    shuffle=True
)
history = vgg_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples,
    epochs=nb_epoch,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples
)
Model - network
Layer (type) Output Shape Param #
=================================================================
image_input (InputLayer) (None, 299, 299, 3) 0
_________________________________________________________________
vgg16 (Model) multiple 14714688
_________________________________________________________________
flatten (Flatten) (None, 41472) 0
_________________________________________________________________
dense_16 (Dense) (None, 256) 10617088
_________________________________________________________________
dropout_10 (Dropout) (None, 256) 0
_________________________________________________________________
predictions (Dense) (None, 5) 1285
=================================================================
Total params: 25,333,061
Trainable params: 17,697,797
Non-trainable params: 7,635,264
Model - results
Epoch 1/10
100/100 [==============================] - 48s 484ms/step - loss: 1.6028 - acc: 0.2379 - val_loss: 1.5978 - val_acc: 0.1814
Epoch 2/10
100/100 [==============================] - 47s 470ms/step - loss: 1.5758 - acc: 0.3098 - val_loss: 1.5577 - val_acc: 0.3258
Epoch 3/10
100/100 [==============================] - 45s 455ms/step - loss: 1.5352 - acc: 0.3386 - val_loss: 1.5273 - val_acc: 0.3496
Epoch 4/10
100/100 [==============================] - 45s 453ms/step - loss: 1.4991 - acc: 0.3425 - val_loss: 1.4890 - val_acc: 0.3914
Epoch 5/10
100/100 [==============================] - 47s 472ms/step - loss: 1.4600 - acc: 0.3826 - val_loss: 1.4406 - val_acc: 0.4523
Epoch 6/10
100/100 [==============================] - 46s 456ms/step - loss: 1.4252 - acc: 0.4021 - val_loss: 1.4337 - val_acc: 0.4165
Epoch 7/10
100/100 [==============================] - 45s 453ms/step - loss: 1.3944 - acc: 0.4037 - val_loss: 1.3720 - val_acc: 0.4964
Epoch 8/10
100/100 [==============================] - 48s 479ms/step - loss: 1.3787 - acc: 0.4193 - val_loss: 1.3615 - val_acc: 0.4988
Epoch 9/10
100/100 [==============================] - 46s 464ms/step - loss: 1.3590 - acc: 0.4067 - val_loss: 1.3272 - val_acc: 0.4952
Epoch 10/10
100/100 [==============================] - 45s 449ms/step - loss: 1.3419 - acc: 0.4244 - val_loss: 1.3038 - val_acc: 0.5060
Accepted answer
The softmax in your intermediate Dense layer is the problem. Softmax acts like a collection of sigmoid-style units, one classifier per class, with the outputs normalized against each other. Sigmoid is great for recognizing binary outputs such as 1 or 0, so softmax is very useful in the output layer, but it is much less suitable for intermediate layers.
The deeper explanation is that backpropagation through relu units preserves the intermediate features, whereas softmax does not do nearly as well there, although it does better at the output layer.
This is the difference:
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg16.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(nb_classes, activation='softmax'))
top_model.summary()
versus
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(256, activation='softmax')(x)
x = Dropout(0.5)(x)
x = Dense(5, activation='softmax', name='predictions')(x)
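To see the claim concretely, here is a minimal NumPy sketch (standalone illustration, not the poster's code): applying softmax to a hidden layer normalizes every sample's 256 features into a probability vector that sums to 1, throwing away the overall magnitude of the features, while relu preserves those magnitudes for the next layer.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# pre-activations of a hypothetical 256-unit hidden layer for 4 samples
z = rng.normal(scale=3.0, size=(4, 256))

relu_out = np.maximum(z, 0.0)
soft_out = softmax(z)

# softmax collapses each sample to a probability vector: every row sums to 1,
# so the scale of the hidden features is discarded before the next layer
print(soft_out.sum(axis=-1))  # each row sums to 1 (up to float rounding)
# relu keeps the magnitudes, so differences between samples survive
print(relu_out.sum(axis=-1))  # magnitudes vary from sample to sample
```

In the functional model the fix is accordingly the one-line change from `x = Dense(256, activation='softmax')(x)` to `x = Dense(256, activation='relu')(x)`, which makes the top match the Sequential version.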
It's great to see you retraining ImageNet weights via transfer learning! :)
Let us know whether this solves the problem, or leave a comment if anything else is needed!
Regarding "python - Keras VGG16: the same model built two different ways gives different results", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52575271/