I'm building a CycleGAN with TensorFlow and Keras, following here and here.
The network structure is fairly involved: many models are nested inside one another.
I cannot save and then reload the trained model.
After training finished, I saved the model with
generator_AtoB.save("models/generator_AtoB.h5")
and, as a second attempt, with
pickle.dump(generator_AtoB, saveFile)
Inspecting the result with
h5dump | less
I can see that the .h5 file does contain data.
Reloading then fails, either via
generator_AtoB = load_model("models/generator_AtoB.h5")
or via
pickle.load(saveFile)
Traceback (most recent call last):
File "test_model.py", line 14, in <module>
generator_AtoB = pickle.load(saveFile)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/network.py", line 1266, in __setstate__
model = saving.unpickle_model(state)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 435, in unpickle_model
return _deserialize_model(f)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 274, in _deserialize_model
reshape=False)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 682, in preprocess_weights_for_loading
weights = convert_nested_model(weights)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 658, in convert_nested_model
original_backend=original_backend))
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 682, in preprocess_weights_for_loading
weights = convert_nested_model(weights)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 670, in convert_nested_model
original_backend=original_backend))
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 682, in preprocess_weights_for_loading
weights = convert_nested_model(weights)
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 658, in convert_nested_model
original_backend=original_backend))
File "/home/MYUSERNAME/.virtualenvs/tensorflow_py3/lib/python3.5/site-packages/keras/engine/saving.py", line 800, in preprocess_weights_for_loading
elif layer_weights_shape != weights[0].shape:
IndexError: list index out of range
The error is the same whether I use keras.load_model or pickle.load.
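Judging from the traceback, the failure happens in preprocess_weights_for_loading as it recurses through convert_nested_model, so the nesting itself seems to be the trigger. Here is a minimal sketch of the same nesting pattern, reduced from the script below (layer sizes and names are illustrative only, not from my actual code):

from keras.layers import Dense, Input
from keras.models import Model, Sequential

inner = Sequential()                    # a Sequential block...
inner.add(Dense(8, input_shape=(4,)))

x = Input(shape=(4,))
inner_model = Model(x, inner(x))        # ...wrapped in a functional Model (like resnet_block)

outer = Sequential()                    # a Sequential containing that Model
outer.add(inner_model)

y = Input(shape=(4,))
composed = Model(y, outer(y))           # wrapped in a Model once more (like generator)
composed.compile(optimizer="adam", loss="mse")
composed.save("nested.h5")              # saving succeeds; it is load_model("nested.h5")
                                        # that raises the IndexError for me

The full training script is below: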
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# https://hardikbansal.github.io/CycleGANBlog/
import sys
import time
import numpy as np
import keras
from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, Input, multiply, add as kadd
from keras.layers import Conv2D, BatchNormalization, Conv2DTranspose
from keras.layers import LeakyReLU, ReLU
from keras.layers import Activation
from keras.preprocessing.image import ImageDataGenerator
from PIL import Image
ngf = 32  # Number of filters in first layer of generator
ndf = 64  # Number of filters in first layer of discriminator
BATCH_SIZE = 1   # batch_size
pool_size = 50   # pool_size
IMG_WIDTH = 256  # Input image will be of width 256
IMG_HEIGHT = 256 # Input image will be of height 256
IMG_DEPTH = 3    # RGB format
DISCRIMINATOR_ITERATIONS = 1
SAVE_IMAGES_INTERVAL = 25
ITERATIONS = 5000
FAKE_POOL_SIZE = 25
INPUT_SHAPE = (IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH)
def resnet_block(num_features):
    # Two conv/BN/ReLU stages in a Sequential, added to the block input
    # through a skip connection -- a Sequential nested inside a Model.
    block = Sequential()
    block.add(Conv2D(num_features, kernel_size=3, strides=1, padding="SAME"))
    block.add(BatchNormalization())
    block.add(ReLU())
    block.add(Conv2D(num_features, kernel_size=3, strides=1, padding="SAME"))
    block.add(BatchNormalization())
    block.add(ReLU())

    resblock_input = Input(shape=(64, 64, 256))
    conv_model = block(resblock_input)
    _sum = kadd([resblock_input, conv_model])

    composed = Model(inputs=[resblock_input], outputs=_sum)
    return composed
def discriminator(f=4, name=None):
    # PatchGAN-style discriminator: a Sequential stack of strided convolutions,
    # again wrapped in a functional Model.
    d = Sequential()
    d.add(Conv2D(ndf, kernel_size=f, strides=2, padding="SAME", name="discr_conv2d_1"))
    d.add(BatchNormalization())
    d.add(LeakyReLU(0.2))
    d.add(Conv2D(ndf * 2, kernel_size=f, strides=2, padding="SAME", name="discr_conv2d_2"))
    d.add(BatchNormalization())
    d.add(LeakyReLU(0.2))
    d.add(Conv2D(ndf * 4, kernel_size=f, strides=2, padding="SAME", name="discr_conv2d_3"))
    d.add(BatchNormalization())
    d.add(LeakyReLU(0.2))
    d.add(Conv2D(ndf * 8, kernel_size=f, strides=2, padding="SAME", name="discr_conv2d_4"))
    d.add(BatchNormalization())
    d.add(LeakyReLU(0.2))
    d.add(Conv2D(1, kernel_size=f, strides=1, padding="SAME", name="discr_conv2d_out"))
    # d.add(Activation("sigmoid"))

    model_input = Input(shape=INPUT_SHAPE)
    decision = d(model_input)
    composed = Model(model_input, decision, name=name)
    # print(d.output_shape)
    # d.summary()
    return composed
def generator(name=None):
    g = Sequential()

    # ENCODER
    g.add(Conv2D(ngf, kernel_size=7,
                 strides=1,
                 # activation='relu',
                 padding='SAME',
                 input_shape=INPUT_SHAPE,
                 name="encoder_0"))
    g.add(Conv2D(64*2, kernel_size=3,
                 strides=2,
                 padding='SAME',
                 name="encoder_1"))
    # output shape = (128, 128, 128)
    g.add(Conv2D(64*4, kernel_size=3,
                 padding="SAME",
                 strides=2))
    # output shape = (64, 64, 256)
    # END ENCODER

    # TRANSFORM
    g.add(resnet_block(64*4))
    g.add(resnet_block(64*4))
    g.add(resnet_block(64*4))
    g.add(resnet_block(64*4))
    g.add(resnet_block(64*4))
    # END TRANSFORM
    # generator.shape = (64, 64, 256)

    # DECODER
    g.add(Conv2DTranspose(ngf*2, kernel_size=3, strides=2, padding="SAME"))
    g.add(Conv2DTranspose(ngf*2, kernel_size=3, strides=2, padding="SAME"))
    g.add(Conv2D(3, kernel_size=7, strides=1, padding="SAME"))
    # END DECODER

    model_input = Input(shape=INPUT_SHAPE)
    generated_image = g(model_input)
    composed = Model(model_input, generated_image, name=name)
    return composed
def fromMinusOneToOne(x):
    return x/127.5 - 1

def toRGB(x):
    return (1+x) * 127.5
def createImageGenerator(subset="train", data_type="A", batch_size=1, pp=None):
    # we create two instances with the same arguments
    data_gen_args = dict(
        preprocessing_function=pp,
        zoom_range=0.1)
    image_datagen = ImageDataGenerator(**data_gen_args)

    # Provide the same seed and keyword arguments to the fit and flow methods
    seed = 1
    image_directory = subset + data_type
    print('data/vangogh2photo/' + image_directory)
    image_generator = image_datagen.flow_from_directory(
        'data/vangogh2photo/' + image_directory,
        class_mode=None,
        batch_size=batch_size,
        seed=seed)

    return image_generator
if __name__ == '__main__':

    generator_AtoB = generator(name="gen_A")
    generator_BtoA = generator(name="gen_B")

    discriminator_A = discriminator(name="disc_A")
    discriminator_B = discriminator(name="disc_B")

    # input_A = Input(batch_shape=(batch_size, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="input_A")
    input_A = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="input_A")
    generated_B = generator_AtoB(input_A)
    discriminator_generated_B = discriminator_B(generated_B)
    cyc_A = generator_BtoA(generated_B)

    input_B = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="input_B")
    generated_A = generator_BtoA(input_B)
    discriminator_generated_A = discriminator_A(generated_A)
    cyc_B = generator_AtoB(generated_A)

    ### GENERATOR TRAINING
    optim = keras.optimizers.Adam(lr=0.0002, beta_1=0.5, beta_2=0.999, epsilon=1e-08)

    # cyclic error is increased, because it's more important
    cyclic_weight_multipier = 10

    generator_trainer = Model([input_A, input_B],
                              [discriminator_generated_B, discriminator_generated_A,
                               cyc_A, cyc_B])
    losses = ["MSE", "MSE", "MAE", "MAE"]
    losses_weights = [1, 1, cyclic_weight_multipier, cyclic_weight_multipier]
    generator_trainer.compile(optimizer=optim, loss=losses, loss_weights=losses_weights)

    ### DISCRIMINATOR TRAINING
    disc_optim = keras.optimizers.Adam(lr=0.0002, beta_1=0.5, beta_2=0.999, epsilon=1e-08)

    real_A = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="in_real_A")
    real_B = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="in_real_B")
    generated_A = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="in_gen_A")
    generated_B = Input(batch_shape=(None, IMG_WIDTH, IMG_HEIGHT, IMG_DEPTH), name="in_gen_B")

    discriminator_real_A = discriminator_A(real_A)
    discriminator_generated_A = discriminator_A(generated_A)
    discriminator_real_B = discriminator_B(real_B)
    discriminator_generated_B = discriminator_B(generated_B)

    disc_trainer = Model([real_A, generated_A, real_B, generated_B],
                         [discriminator_real_A,
                          discriminator_generated_A,
                          discriminator_real_B,
                          discriminator_generated_B])
    disc_trainer.compile(optimizer=disc_optim, loss='MSE')

    #########
    ##
    ## TRAINING
    ##
    #########

    fake_A_pool = []
    fake_B_pool = []

    ones = np.ones((BATCH_SIZE,) + generator_trainer.output_shape[0][1:])
    zeros = np.zeros((BATCH_SIZE,) + generator_trainer.output_shape[0][1:])

    train_A_image_generator = createImageGenerator("train", "A")
    train_B_image_generator = createImageGenerator("train", "B")

    it = 1
    while it < ITERATIONS:
        start = time.time()
        print("\nIteration %d " % it)
        sys.stdout.flush()

        # THIS ONLY WORKS IF BATCH SIZE == 1
        real_A = train_A_image_generator.next()
        real_B = train_B_image_generator.next()

        fake_A_pool.extend(generator_BtoA.predict(real_B))
        fake_B_pool.extend(generator_AtoB.predict(real_A))

        # resize pool
        fake_A_pool = fake_A_pool[-FAKE_POOL_SIZE:]
        fake_B_pool = fake_B_pool[-FAKE_POOL_SIZE:]

        fake_A = [fake_A_pool[ind] for ind in np.random.choice(len(fake_A_pool), size=(BATCH_SIZE,), replace=False)]
        fake_B = [fake_B_pool[ind] for ind in np.random.choice(len(fake_B_pool), size=(BATCH_SIZE,), replace=False)]
        fake_A = np.array(fake_A)
        fake_B = np.array(fake_B)

        for x in range(0, DISCRIMINATOR_ITERATIONS):
            _, D_loss_real_A, D_loss_fake_A, D_loss_real_B, D_loss_fake_B = \
                disc_trainer.train_on_batch(
                    [real_A, fake_A, real_B, fake_B],
                    [zeros, ones * 0.9, zeros, ones * 0.9])

        print("=====")
        print("Discriminator loss:")
        print("Real A: %s, Fake A: %s || Real B: %s, Fake B: %s " % (D_loss_real_A, D_loss_fake_A, D_loss_real_B, D_loss_fake_B))

        _, G_loss_fake_B, G_loss_fake_A, G_loss_rec_A, G_loss_rec_B = \
            generator_trainer.train_on_batch(
                [real_A, real_B],
                [zeros, zeros, real_A, real_B])

        print("=====")
        print("Generator loss:")
        print("Fake B: %s, Cyclic A: %s || Fake A: %s, Cyclic B: %s " % (G_loss_fake_B, G_loss_rec_A, G_loss_fake_A, G_loss_rec_B))

        end = time.time()
        print("Iteration time: %s s" % (end - start))
        sys.stdout.flush()

        if not (it % SAVE_IMAGES_INTERVAL):
            imgA = real_A
            # print(imgA.shape)
            imga2b = generator_AtoB.predict(imgA)
            # print(imga2b.shape)
            imga2b2a = generator_BtoA.predict(imga2b)
            # print(imga2b2a.shape)
            imgB = real_B
            imgb2a = generator_BtoA.predict(imgB)
            imgb2a2b = generator_AtoB.predict(imgb2a)
            c = np.concatenate([imgA, imga2b, imga2b2a, imgB, imgb2a, imgb2a2b], axis=2).astype(np.uint8)
            # print(c.shape)
            x = Image.fromarray(c[0])
            x.save("data/generated/iteration_%s.jpg" % str(it).zfill(4))

        it += 1

    generator_AtoB.save("models/generator_AtoB.h5")
    generator_BtoA.save("models/generator_BtoA.h5")
Best Answer
I had the same problem. Ankish suggested getting around it with the tf.keras API. I don't know why, but
tf.keras.models.load_model("./saved_models/our_model.h5", compile=False)
loads the model fine, while
keras.models.load_model("./saved_models/our_model.h5")
fails with the IndexError above.
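For completeness, a minimal reload sketch of that workaround applied to the file saved by the question's script (the placeholder batch and the re-compile settings are my assumptions, not part of the original answer):

import numpy as np
import tensorflow as tf

# Reload the generator that was saved with generator_AtoB.save(...).
# compile=False skips restoring optimizer/loss state, which is enough for inference.
generator_AtoB = tf.keras.models.load_model("models/generator_AtoB.h5", compile=False)

# Placeholder batch just to show the expected input shape: (N, 256, 256, 3) RGB.
batch_A = np.zeros((1, 256, 256, 3), dtype=np.float32)
fake_B = generator_AtoB.predict(batch_A)

# To continue training after reloading, compile again by hand (assumed settings,
# mirroring the Adam configuration in the training script above).
generator_AtoB.compile(optimizer=tf.keras.optimizers.Adam(0.0002, beta_1=0.5),
                       loss="MSE")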
Regarding "python - Cannot save/load a model with keras.load_model - IndexError: list index out of range", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54344206/