I recently started learning image segmentation and U-Net. I am trying to do multi-class image segmentation with 7 classes: the input is a (256, 256, 3) RGB image and the output is a (256, 256, 1) grayscale image in which each intensity value corresponds to a class. I am applying a pixel-wise softmax and using sparse categorical crossentropy to avoid one-hot encoding.
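For reference, a minimal sketch of the label format this implies (the values are illustrative, not from my data):

import numpy as np

# Each pixel of a ground-truth mask holds an integer class id in [0, 7),
# so sparse_categorical_crossentropy can consume it directly and no
# one-hot (256, 256, 7) volume is needed.
mask = np.random.randint(0, 7, size=(256, 256, 1), dtype=np.uint8)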
import keras
from keras.models import Model
from keras.layers import (Input, Conv2D, Conv2DTranspose, MaxPooling2D,
                          Dropout, BatchNormalization, Activation,
                          concatenate, Reshape)

# Image dimensions (256 x 256, per the input described above).
image_height = image_width = 256

def soft1(x):
    # Pixel-wise softmax over the class axis.
    return keras.activations.softmax(x, axis=-1)

def conv2d_block(input_tensor, n_filters, kernel_size=3, batchnorm=True):
    # Two 3x3 convolutions, each optionally followed by batch normalization.
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(x)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return x

def get_unet(input_img, n_classes, n_filters=16, dropout=0.1, batchnorm=True):
    # Contracting path
    c1 = conv2d_block(input_img, n_filters * 1, kernel_size=3, batchnorm=batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    p1 = Dropout(dropout)(p1)
    c2 = conv2d_block(p1, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    p2 = Dropout(dropout)(p2)
    c3 = conv2d_block(p2, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    p3 = Dropout(dropout)(p3)
    c4 = conv2d_block(p3, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    p4 = MaxPooling2D((2, 2))(c4)
    p4 = Dropout(dropout)(p4)
    c5 = conv2d_block(p4, n_filters * 16, kernel_size=3, batchnorm=batchnorm)

    # Expansive path
    u6 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(2, 2), padding='same')(c5)
    u6 = concatenate([u6, c4])
    u6 = Dropout(dropout)(u6)
    c6 = conv2d_block(u6, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    u7 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(2, 2), padding='same')(c6)
    u7 = concatenate([u7, c3])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2), padding='same')(c7)
    u8 = concatenate([u8, c2])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    u9 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(2, 2), padding='same')(c8)
    u9 = concatenate([u9, c1])
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters * 1, kernel_size=3, batchnorm=batchnorm)

    # Per-pixel class scores, flattened so the softmax runs over the class axis.
    outputs = Conv2D(n_classes, (1, 1))(c9)
    outputs = Reshape((image_height * image_width, 1, n_classes),
                      input_shape=(image_height, image_width, n_classes))(outputs)
    outputs = Activation(soft1)(outputs)
    model = Model(inputs=[input_img], outputs=[outputs])
    print(outputs.shape)
    return model
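The summary below comes from building the model for the stated input; the exact instantiation is an assumption on my part, along these lines:

input_img = Input((256, 256, 3))  # RGB input described above
model = get_unet(input_img, n_classes=7, n_filters=16, dropout=0.1, batchnorm=True)
model.summary()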
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
conv2d_211 (Conv2D) (None, 256, 256, 16) 448 input_12[0][0]
__________________________________________________________________________________________________
batch_normalization_200 (BatchN (None, 256, 256, 16) 64 conv2d_211[0][0]
__________________________________________________________________________________________________
activation_204 (Activation) (None, 256, 256, 16) 0 batch_normalization_200[0][0]
__________________________________________________________________________________________________
max_pooling2d_45 (MaxPooling2D) (None, 128, 128, 16) 0 activation_204[0][0]
__________________________________________________________________________________________________
dropout_89 (Dropout) (None, 128, 128, 16) 0 max_pooling2d_45[0][0]
__________________________________________________________________________________________________
conv2d_213 (Conv2D) (None, 128, 128, 32) 4640 dropout_89[0][0]
__________________________________________________________________________________________________
batch_normalization_202 (BatchN (None, 128, 128, 32) 128 conv2d_213[0][0]
__________________________________________________________________________________________________
activation_206 (Activation) (None, 128, 128, 32) 0 batch_normalization_202[0][0]
__________________________________________________________________________________________________
max_pooling2d_46 (MaxPooling2D) (None, 64, 64, 32) 0 activation_206[0][0]
__________________________________________________________________________________________________
dropout_90 (Dropout) (None, 64, 64, 32) 0 max_pooling2d_46[0][0]
__________________________________________________________________________________________________
conv2d_215 (Conv2D) (None, 64, 64, 64) 18496 dropout_90[0][0]
__________________________________________________________________________________________________
batch_normalization_204 (BatchN (None, 64, 64, 64) 256 conv2d_215[0][0]
__________________________________________________________________________________________________
activation_208 (Activation) (None, 64, 64, 64) 0 batch_normalization_204[0][0]
__________________________________________________________________________________________________
max_pooling2d_47 (MaxPooling2D) (None, 32, 32, 64) 0 activation_208[0][0]
__________________________________________________________________________________________________
dropout_91 (Dropout) (None, 32, 32, 64) 0 max_pooling2d_47[0][0]
__________________________________________________________________________________________________
conv2d_217 (Conv2D) (None, 32, 32, 128) 73856 dropout_91[0][0]
__________________________________________________________________________________________________
batch_normalization_206 (BatchN (None, 32, 32, 128) 512 conv2d_217[0][0]
__________________________________________________________________________________________________
activation_210 (Activation) (None, 32, 32, 128) 0 batch_normalization_206[0][0]
__________________________________________________________________________________________________
max_pooling2d_48 (MaxPooling2D) (None, 16, 16, 128) 0 activation_210[0][0]
__________________________________________________________________________________________________
dropout_92 (Dropout) (None, 16, 16, 128) 0 max_pooling2d_48[0][0]
__________________________________________________________________________________________________
conv2d_219 (Conv2D) (None, 16, 16, 256) 295168 dropout_92[0][0]
__________________________________________________________________________________________________
batch_normalization_208 (BatchN (None, 16, 16, 256) 1024 conv2d_219[0][0]
__________________________________________________________________________________________________
activation_212 (Activation) (None, 16, 16, 256) 0 batch_normalization_208[0][0]
__________________________________________________________________________________________________
conv2d_transpose_45 (Conv2DTran (None, 32, 32, 128) 295040 activation_212[0][0]
__________________________________________________________________________________________________
concatenate_45 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_45[0][0]
activation_210[0][0]
__________________________________________________________________________________________________
dropout_93 (Dropout) (None, 32, 32, 256) 0 concatenate_45[0][0]
__________________________________________________________________________________________________
conv2d_221 (Conv2D) (None, 32, 32, 128) 295040 dropout_93[0][0]
__________________________________________________________________________________________________
batch_normalization_210 (BatchN (None, 32, 32, 128) 512 conv2d_221[0][0]
__________________________________________________________________________________________________
activation_214 (Activation) (None, 32, 32, 128) 0 batch_normalization_210[0][0]
__________________________________________________________________________________________________
conv2d_transpose_46 (Conv2DTran (None, 64, 64, 64) 73792 activation_214[0][0]
__________________________________________________________________________________________________
concatenate_46 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_46[0][0]
activation_208[0][0]
__________________________________________________________________________________________________
dropout_94 (Dropout) (None, 64, 64, 128) 0 concatenate_46[0][0]
__________________________________________________________________________________________________
conv2d_223 (Conv2D) (None, 64, 64, 64) 73792 dropout_94[0][0]
__________________________________________________________________________________________________
batch_normalization_212 (BatchN (None, 64, 64, 64) 256 conv2d_223[0][0]
__________________________________________________________________________________________________
activation_216 (Activation) (None, 64, 64, 64) 0 batch_normalization_212[0][0]
__________________________________________________________________________________________________
conv2d_transpose_47 (Conv2DTran (None, 128, 128, 32) 18464 activation_216[0][0]
__________________________________________________________________________________________________
concatenate_47 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_47[0][0]
activation_206[0][0]
__________________________________________________________________________________________________
dropout_95 (Dropout) (None, 128, 128, 64) 0 concatenate_47[0][0]
__________________________________________________________________________________________________
conv2d_225 (Conv2D) (None, 128, 128, 32) 18464 dropout_95[0][0]
__________________________________________________________________________________________________
batch_normalization_214 (BatchN (None, 128, 128, 32) 128 conv2d_225[0][0]
__________________________________________________________________________________________________
activation_218 (Activation) (None, 128, 128, 32) 0 batch_normalization_214[0][0]
__________________________________________________________________________________________________
conv2d_transpose_48 (Conv2DTran (None, 256, 256, 16) 4624 activation_218[0][0]
__________________________________________________________________________________________________
concatenate_48 (Concatenate) (None, 256, 256, 32) 0 conv2d_transpose_48[0][0]
activation_204[0][0]
__________________________________________________________________________________________________
dropout_96 (Dropout) (None, 256, 256, 32) 0 concatenate_48[0][0]
__________________________________________________________________________________________________
conv2d_227 (Conv2D) (None, 256, 256, 16) 4624 dropout_96[0][0]
__________________________________________________________________________________________________
batch_normalization_216 (BatchN (None, 256, 256, 16) 64 conv2d_227[0][0]
__________________________________________________________________________________________________
activation_220 (Activation) (None, 256, 256, 16) 0 batch_normalization_216[0][0]
__________________________________________________________________________________________________
conv2d_228 (Conv2D) (None, 256, 256, 7) 119 activation_220[0][0]
__________________________________________________________________________________________________
reshape_12 (Reshape) (None, 65536, 1, 7) 0 conv2d_228[0][0]
__________________________________________________________________________________________________
activation_221 (Activation) (None, 65536, 1, 7) 0 reshape_12[0][0]
==================================================================================================
Total params: 1,179,511
Trainable params: 1,178,039
Non-trainable params: 1,472
__________________________________________________________________________________________________
Best answer

Your model should end in (256, 256, 7), i.e. 7 classes per pixel, and that shape should be compatible with your output image, which is (256, 256, 1). This works only with 'sparse_categorical_crossentropy' or a custom loss.

So, up to conv_228 the model looks fine (though I did not inspect it in detail). You do not need anything after that convolution: you can put the softmax directly in conv_228 or immediately after it. y_train should be (256, 256, 1) for this.
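A minimal sketch of the head this answer suggests, reusing c9, input_img and n_classes from the question's get_unet (the optimizer and metrics are illustrative choices, not from the answer):

# Inside get_unet, replace the Reshape/softmax tail with a single softmax convolution:
outputs = Conv2D(n_classes, (1, 1), activation='softmax')(c9)  # (256, 256, n_classes)
model = Model(inputs=[input_img], outputs=[outputs])

# Integer masks of shape (256, 256, 1) can then be fed directly as y_train:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])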
Regarding keras - Unet: Multi Class Image Segmentation, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59343661/