python - Element not part of the graph when using VGG for data generation and loss computation

Reposted. Author: 行者123. Updated: 2023-11-30 09:15:51

I have a VGG19 encoder that takes an input image y of shape (256, 256, 3) and returns a tensor of shape (32, 32, 512) from VGG's block4_conv1 layer. I need to convert it to a numpy array, apply some transformations, and reconstruct the image with my decoder.

In short, I am trying to train the decoder network like this:

x = vgg_encoder(y)  # generate features from image y
x = do_extra_transformation(x) # for example, reshape and apply K means to shift features towards their cluster centres
y_pred = decoder(x) # try to reconstruct the image y from features
loss = calculate_loss(y, y_pred) # calculate reconstruction loss using VGG loss

However, when I run the code I get the error: ValueError: Tensor Tensor("block4_conv1/Relu:0", shape=(?, 32, 32, 512), dtype=float32) is not an element of this graph.

I assume the error comes from TensorFlow disconnecting the graph after I call predict on VGG to generate the features. I don't understand why that is a problem, since it is technically only used for data generation and is not part of the training computation graph!


The full code, which you can run with python example.py, is below:

import tensorflow as tf
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Input, UpSampling2D, Conv2D
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
from tensorflow.keras.optimizers import Adam

class CustomModel:

    def __init__(self, im_h, im_w, im_c):
        self.im_shape = (im_h, im_w, im_c)
        self.vgg_features_shape = (None, None, 512)
        self.vgg_loss_model = self.build_vgg_loss()
        self.kernel_size = (3, 3)
        self.decoder = self.build_decoder()

    def build_vgg_loss(self):
        vgg = VGG19(weights="imagenet", include_top=False, input_shape=self.im_shape)
        vgg.outputs = vgg.get_layer('block4_conv1').output
        model = Model(inputs=vgg.inputs, outputs=vgg.outputs)
        model.trainable = False

        return model

    def build_decoder(self):
        """
        Mirrors the VGG network, with max-pooling layers replaced by UpSampling layers
        """
        i = Input((None, None, 512))
        x = Conv2D(filters=512, kernel_size=self.kernel_size, padding='same')(i)

        x = UpSampling2D()(x)
        for _ in range(4):
            x = Conv2D(filters=256, kernel_size=self.kernel_size, padding='same')(x)

        x = UpSampling2D()(x)
        for _ in range(2):
            x = Conv2D(filters=128, kernel_size=self.kernel_size, padding='same')(x)

        x = UpSampling2D()(x)
        for _ in range(2):
            x = Conv2D(filters=64, kernel_size=self.kernel_size, padding='same')(x)

        x = Conv2D(filters=3, kernel_size=self.kernel_size, padding='same')(x)

        model = Model(inputs=i, outputs=x)

        return model

    def get_loss(self, y_pred, y):

        vgg_model = self.vgg_loss_model

        def content_loss(y_pred, y):
            dif = vgg_model(y) - vgg_model(y_pred)
            sq = K.square(dif)
            s = K.sum(sq, axis=-1)
            sqrt = K.sqrt(s)
            loss = K.sum(sqrt)
            return loss

        return content_loss(y_pred, y)


class DataLoader:

    def __init__(self, vgg):
        self.vgg = vgg

    def gen(self):
        while True:
            y = np.random.randn(256, 256, 3)
            x = self.vgg.predict(np.expand_dims(y, 0)).reshape((32, 32, 512))  # if this is turned into a np.array, everything works as expected
            yield x, np.random.randn(256, 256, 3)


model = CustomModel(256, 256, 3)

# dl = DataLoader(datapath='./trainer/data/', mst=mst)

output_types = (
    tf.float32,
    tf.float32
)
output_shapes = (
    tf.TensorShape([None, None, None]),
    tf.TensorShape([None, None, None])
)

ds = tf.data.Dataset.from_generator(DataLoader(model.vgg_loss_model).gen,
                                    output_types=output_types,
                                    output_shapes=output_shapes)

ds = ds.repeat().batch(1)
iterator = ds.make_one_shot_iterator()
x, y = iterator.get_next()
y_pred = model.decoder(x)


loss = model.get_loss(y_pred, y)
opt = tf.train.AdamOptimizer(0.01)
train_opt = opt.minimize(loss)

init_op = tf.global_variables_initializer()

with tf.Session() as sess:

    sess.run(init_op)
    opt = tf.train.GradientSescentOptimizer(0.01) if False else tf.train.GradientDescentOptimizer(0.01)
    for i in range(5):
        sess.run(train_opt)

Best Answer

Don't forget that for the task you describe, the input is an image and the output is that same image. Therefore the model you build must contain all the parts, i.e. encoder + decoder. You can of course choose not to train either of them (as you have already chosen not to train the encoder). Here are the changes you need to apply:

The following is wrong, because y and y_pred are the true and predicted outputs of the decoder, so it makes no sense to apply vgg_model (i.e. the encoder) on them:

dif = vgg_model(y) - vgg_model(y_pred)

You only want to compare the reconstructed image with the original image, so just change it to:

dif = y - y_pred

(Further, you no longer need vgg_model = self.vgg_loss_model in get_loss; in fact, get_loss could be defined as a static method of the CustomModel class, without the inner content_loss function.)
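Since the corrected loss is just a sum, over pixels, of the Euclidean norm of the channel-wise difference, the math is easy to sanity-check outside of TensorFlow. A minimal numpy sketch (the helper name content_loss_np is ours, not from the original code):

```python
import numpy as np

def content_loss_np(y, y_pred):
    # mirrors the K.square -> K.sum(axis=-1) -> K.sqrt -> K.sum chain
    dif = y - y_pred
    return np.sum(np.sqrt(np.sum(np.square(dif), axis=-1)))

# one pixel whose channel-wise difference is (3, 4): its norm is 5
y = np.array([[[3.0, 4.0]]])
print(content_loss_np(y, np.zeros_like(y)))  # 5.0
```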

def gen(self):
    while True:
        y = np.random.randn(256, 256, 3)
        x = self.vgg.predict(np.expand_dims(y, 0)).reshape((32, 32, 512))
        yield x, np.random.randn(256, 256, 3)

As we mentioned, the input and output of the model are the same image (further, by using self.vgg.predict you effectively remove the encoder from the model's computation graph). Just change it to:

def gen(self):
    while True:
        x = np.random.randn(256, 256, 3)
        yield x, x  # same input and output
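A quick standalone check of the corrected generator (the vgg argument is dropped here since gen no longer uses it):

```python
import numpy as np

class DataLoader:
    """Simplified DataLoader: the encoder is no longer called during data generation."""
    def gen(self):
        while True:
            x = np.random.randn(256, 256, 3)
            yield x, x  # autoencoder target: reconstruct the input itself

x, y = next(DataLoader().gen())
print(x.shape, x is y)  # (256, 256, 3) True
```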

Finally, this line:

y_pred = model.decoder(x)

should first apply the encoder, and then apply the decoder on the encoder's output, to reconstruct the image. So do exactly what you described:

y_pred = model.decoder(model.vgg_loss_model(x))

One final note: in cases like this, I think it really helps to draw the big picture of the whole computation graph on a piece of paper before starting the implementation; it leads to a better understanding of the problem and saves a lot of time and effort.

Regarding "python - Element not part of the graph when using VGG for data generation and loss computation", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56080498/
