
python - DC-GAN: Discriminator loss going up while generator loss goes down


I can't tell whether this is caused by a technical mistake or by the hyperparameters, but my DC-GAN's discriminator loss starts low and then climbs steadily, slowing down around 8, while my generator loss drops sharply. I killed the run at around 60,000 epochs. Interestingly, the discriminator's accuracy seems to float around 20-50%. Does anyone have suggestions for fixing this? Any help is appreciated.

Important information

  • Data format: 472 color PNG files, 320x224.
  • Optimizer: Adam(0.0002, 0.5)
  • Loss: binary cross-entropy (see the compile sketch after this list)
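
For reference, a minimal sketch of how these settings are typically wired up in Keras; the model variable names (`discriminator`, `combined`) are assumptions, not taken from the post:

    from tensorflow.keras.optimizers import Adam

    # Settings stated in the question: Adam(0.0002, 0.5) and binary cross-entropy.
    optimizer = Adam(0.0002, 0.5)

    # `discriminator` and `combined` are hypothetical names for the model
    # returned by build_discriminator() and the stacked generator+discriminator.
    discriminator.compile(loss='binary_crossentropy', optimizer=optimizer,
                          metrics=['accuracy'])
    combined.compile(loss='binary_crossentropy', optimizer=optimizer)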

Image generated after 50,000+ epochs (it should be a sneaker on a white background):

[image: generated sample]

Discriminator model:

    def build_discriminator(self):
        img_shape = (self.img_size[0], self.img_size[1], self.channels)

        model = Sequential()

        model.add(Conv2D(32, kernel_size=self.kernel_size, strides=2, input_shape=img_shape, padding="same"))  # 192x256 -> 96x128
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))

        model.add(Conv2D(64, kernel_size=self.kernel_size, strides=2, padding="same"))  # 96x128 -> 48x64
        model.add(ZeroPadding2D(padding=((0, 1), (0, 1))))  # 48x64 -> 49x65
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Conv2D(128, kernel_size=self.kernel_size, strides=2, padding="same"))  # 49x65 -> 25x33
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Conv2D(256, kernel_size=self.kernel_size, strides=1, padding="same"))  # strides=1: size unchanged
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))

        model.add(Conv2D(512, kernel_size=self.kernel_size, strides=1, padding="same"))  # strides=1: size unchanged
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))

        model.add(Flatten())
        model.add(Dense(1, activation='sigmoid'))

        model.summary()

        img = Input(shape=img_shape)
        validity = model(img)

        return Model(img, validity)

Generator model:

    def build_generator(self):
        noise_shape = (100,)

        model = Sequential()
        model.add(Dense(self.starting_filters
                        * (self.img_size[0] // (2 ** self.upsample_layers))
                        * (self.img_size[1] // (2 ** self.upsample_layers)),
                        activation="relu", input_shape=noise_shape))
        model.add(Reshape((self.img_size[0] // (2 ** self.upsample_layers),
                           self.img_size[1] // (2 ** self.upsample_layers),
                           self.starting_filters)))
        model.add(BatchNormalization(momentum=0.8))

        model.add(UpSampling2D())  # 6x8 -> 12x16
        model.add(Conv2D(1024, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(UpSampling2D())  # 12x16 -> 24x32
        model.add(Conv2D(512, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(UpSampling2D())  # 24x32 -> 48x64
        model.add(Conv2D(256, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(UpSampling2D())  # 48x64 -> 96x128
        model.add(Conv2D(128, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(UpSampling2D())  # 96x128 -> 192x256
        model.add(Conv2D(64, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Conv2D(32, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(momentum=0.8))

        model.add(Conv2D(self.channels, kernel_size=self.kernel_size, padding="same"))
        model.add(Activation("tanh"))

        model.summary()

        noise = Input(shape=noise_shape)
        img = model(noise)

        return Model(noise, img)
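
The post does not show the training step. For completeness, a minimal sketch of the usual Keras DC-GAN loop that would produce the losses and accuracy described above; `X_train`, `batch_size`, and `epochs` are assumptions, and the images are presumed scaled to [-1, 1] to match the generator's tanh output:

    import numpy as np

    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

    for epoch in range(epochs):
        # Train the discriminator on one real batch and one generated batch.
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        real_imgs = X_train[idx]
        noise = np.random.normal(0, 1, (batch_size, 100))
        gen_imgs = generator.predict(noise)

        d_loss_real = discriminator.train_on_batch(real_imgs, valid)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        d_loss, d_acc = 0.5 * np.add(d_loss_real, d_loss_fake)

        # Train the generator (through the frozen discriminator) to have
        # its samples classified as real.
        g_loss = combined.train_on_batch(noise, valid)

        # The symptom described above: d_loss climbing toward ~8 while
        # g_loss falls and d_acc floats around 20-50%.
        print("%d [D loss: %.3f, acc: %.1f%%] [G loss: %.3f]" % (epoch, d_loss, 100 * d_acc, g_loss))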

Best Answer

I think it's quite understandable that you're running into this problem. Your networks are not balanced against each other: in terms of neuron count, the generator is far more powerful than the discriminator. I would try to make the generator and discriminator symmetric to each other in number of layers, configuration, and size, so you can be sure neither one overpowers the other.
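
To make that advice concrete: the generator above upsamples five times, while the discriminator only downsamples three times (its last two convolutions use strides=1, so they do not shrink the feature map), leaving a very large Flatten/Dense at the end. A hedged sketch of one way to rebalance it, mirroring the generator with five stride-2 convolutions; the filter counts here are illustrative choices, not part of the original answer:

    # One possible symmetric discriminator: five stride-2 convolutions to
    # mirror the generator's five UpSampling2D layers. Filter counts are
    # illustrative, not prescribed by the answer.
    model = Sequential()
    model.add(Conv2D(64, kernel_size=self.kernel_size, strides=2,
                     input_shape=img_shape, padding="same"))                          # 192x256 -> 96x128
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(128, kernel_size=self.kernel_size, strides=2, padding="same"))   # -> 48x64
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(256, kernel_size=self.kernel_size, strides=2, padding="same"))   # -> 24x32
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(512, kernel_size=self.kernel_size, strides=2, padding="same"))   # -> 12x16
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(1024, kernel_size=self.kernel_size, strides=2, padding="same"))  # -> 6x8
    model.add(LeakyReLU(alpha=0.2))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

Comparing `generator.count_params()` against `discriminator.count_params()` is a quick way to check that neither side dwarfs the other.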

Regarding python - DC-GAN: Discriminator loss going up while generator loss goes down, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57513715/
