I am trying to train a 3D segmentation network from GitHub. My model is implemented in Keras (Python) and is a typical U-Net. The model is summarized below:
Model: "functional_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 128, 128, 4) 0
__________________________________________________________________________________________________
gaussian_noise (GaussianNoise) (None, 128, 128, 4) 0 input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 128, 128, 64) 1088 gaussian_noise[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 64) 256 conv2d[0][0]
__________________________________________________________________________________________________
p_re_lu (PReLU) (None, 128, 128, 64) 64 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 128, 128, 64) 256 conv2d_1[0][0]
__________________________________________________________________________________________________
p_re_lu_1 (PReLU) (None, 128, 128, 64) 64 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu_1[0][0]
__________________________________________________________________________________________________
add (Add) (None, 128, 128, 64) 0 conv2d[0][0]
conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 128) 32896 add[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 64, 64, 128) 512 conv2d_3[0][0]
__________________________________________________________________________________________________
p_re_lu_2 (PReLU) (None, 64, 64, 128) 128 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 64, 64, 128) 512 conv2d_4[0][0]
__________________________________________________________________________________________________
p_re_lu_3 (PReLU) (None, 64, 64, 128) 128 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 64, 64, 128) 0 conv2d_3[0][0]
conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 32, 32, 256) 131328 add_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 32, 32, 256) 1024 conv2d_6[0][0]
__________________________________________________________________________________________________
p_re_lu_4 (PReLU) (None, 32, 32, 256) 256 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 32, 32, 256) 1024 conv2d_7[0][0]
__________________________________________________________________________________________________
p_re_lu_5 (PReLU) (None, 32, 32, 256) 256 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_5[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 32, 32, 256) 0 conv2d_6[0][0]
conv2d_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 16, 16, 512) 524800 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 512) 2048 conv2d_9[0][0]
__________________________________________________________________________________________________
p_re_lu_6 (PReLU) (None, 16, 16, 512) 512 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 512) 2359808 p_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 16, 16, 512) 2048 conv2d_10[0][0]
__________________________________________________________________________________________________
p_re_lu_7 (PReLU) (None, 16, 16, 512) 512 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 16, 16, 512) 2359808 p_re_lu_7[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 16, 16, 512) 0 conv2d_9[0][0]
conv2d_11[0][0]
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D) (None, 32, 32, 512) 0 add_3[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 256) 524544 up_sampling2d[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 32, 32, 512) 0 add_2[0][0]
conv2d_12[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 32, 32, 512) 2048 concatenate[0][0]
__________________________________________________________________________________________________
p_re_lu_8 (PReLU) (None, 32, 32, 512) 512 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 256) 1179904 p_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 32, 32, 256) 1024 conv2d_13[0][0]
__________________________________________________________________________________________________
p_re_lu_9 (PReLU) (None, 32, 32, 256) 256 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 256) 131072 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_9[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 32, 32, 256) 0 conv2d_15[0][0]
conv2d_14[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 64, 64, 256) 0 add_4[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 64, 64, 128) 131200 up_sampling2d_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 64, 64, 256) 0 add_1[0][0]
conv2d_16[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 64, 64, 256) 1024 concatenate_1[0][0]
__________________________________________________________________________________________________
p_re_lu_10 (PReLU) (None, 64, 64, 256) 256 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 64, 64, 128) 295040 p_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 64, 64, 128) 512 conv2d_17[0][0]
__________________________________________________________________________________________________
p_re_lu_11 (PReLU) (None, 64, 64, 128) 128 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 64, 64, 128) 32768 concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_11[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 64, 64, 128) 0 conv2d_19[0][0]
conv2d_18[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 128, 128, 128 0 add_5[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 128, 128, 64) 32832 up_sampling2d_2[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 128, 128, 128 0 add[0][0]
conv2d_20[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 128, 128, 128 512 concatenate_2[0][0]
__________________________________________________________________________________________________
p_re_lu_12 (PReLU) (None, 128, 128, 128 128 batch_normalization_12[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 128, 128, 64) 73792 p_re_lu_12[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 128, 128, 64) 256 conv2d_21[0][0]
__________________________________________________________________________________________________
p_re_lu_13 (PReLU) (None, 128, 128, 64) 64 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 128, 128, 64) 8192 concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu_13[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 128, 128, 64) 0 conv2d_23[0][0]
conv2d_22[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 128, 128, 64) 256 add_6[0][0]
__________________________________________________________________________________________________
p_re_lu_14 (PReLU) (None, 128, 128, 64) 64 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 128, 128, 4) 260 p_re_lu_14[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 128, 128, 4) 0 conv2d_24[0][0]
==================================================================================================
Total params: 10,159,748
Trainable params: 10,153,092
Non-trainable params: 6,656
__________________________________________________________________________________________________
My training files have input shape (batch, height, width, channel). I saved the training images and labels in two NumPy files (.npy): x_training.npy contains the images (shape: (20, 128, 128, 4)) and y_training.npy contains the image labels (shape: (20, 128, 128, 4)). I then read the data with a custom data generator:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def img_msk_gen(X33_train, Y_train, seed):
    '''
    A custom generator that performs data augmentation on both patches
    and their corresponding targets (masks).
    '''
    datagen = ImageDataGenerator(horizontal_flip=True, data_format="channels_last")
    datagen_msk = ImageDataGenerator(horizontal_flip=True, data_format="channels_last")
    # The shared seed keeps image and mask augmentations in sync.
    image_generator = datagen.flow(X33_train, batch_size=4, seed=seed)
    y_generator = datagen_msk.flow(Y_train, batch_size=4, seed=seed)
    while True:
        yield (next(image_generator), next(y_generator))
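The essential idea of the generator above is that both `ImageDataGenerator` streams share the same seed, so each image batch and its mask batch receive identical random flips. A NumPy-only sketch of that idea, using hypothetical toy shapes and a single random flip as a stand-in for the full Keras augmentation pipeline:

```python
import numpy as np

def paired_flip_gen(X, Y, seed, batch_size=4):
    """Yield (image, mask) batches where both receive identical random flips."""
    rng_x = np.random.default_rng(seed)
    rng_y = np.random.default_rng(seed)  # same seed -> same flip decisions
    n = len(X)
    while True:
        for start in range(0, n, batch_size):
            xb = X[start:start + batch_size].copy()
            yb = Y[start:start + batch_size].copy()
            # Identical RNG streams mean both arrays flip (or not) together.
            if rng_x.random() < 0.5:
                xb = xb[:, :, ::-1]  # flip along the width axis
            if rng_y.random() < 0.5:
                yb = yb[:, :, ::-1]
            yield xb, yb

# Toy data: 2 "images" of shape (4, 4, 1), with labels identical to the images,
# so matching flips are easy to verify.
X = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4, 1)
gen = paired_flip_gen(X, X.copy(), seed=9999, batch_size=2)
xb, yb = next(gen)
```

Because the two RNG streams are seeded identically, `xb` and `yb` always stay aligned, which is exactly what a segmentation generator needs.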
Finally, I try to train my model:
import numpy as np

# Load data from disk.
X_patches = np.load("./x_training.npy").astype(np.float32)
Y_labels = np.load("./y_training.npy").astype(np.float32)
X33_train = X_patches
Y_train = Y_labels
batch_size = 4  # must match the batch_size used inside img_msk_gen

train_generator = img_msk_gen(X33_train=X_patches, Y_train=Y_labels, seed=9999)
model.fit_generator(train_generator,
                    steps_per_epoch=len(X33_train) // batch_size,
                    verbose=1)
However, it throws an error like this:
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got [1, 3]
Any suggestions or ideas would be helpful. My full model implementation is here in Colab, and the data is here on Google Drive. There are similar questions, but I have not been able to solve my problem with them. Any kind of help would be greatly appreciated. Thanks in advance.
Accepted answer
The error says it directly: you passed [1, 3], which is a list, while the index must be an integer or a slice.
Perhaps you meant [1:3]?
You appear to be passing [1, 3], so perhaps change:
y_core=K.sum(y_true_f[:,[1,3]],axis=1)
to
y_core=K.sum(y_true_f[:,1:3],axis=1)
That is at least valid syntax, though note that the slice 1:3 selects channels 1 and 2, not channels 1 and 3, so I am not sure it does what you want.
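The distinction is easier to see in plain NumPy, which accepts the list index that a TensorFlow tensor rejects. The sketch below uses a small hypothetical stand-in for `y_true_f` to show that `[1, 3]` and `1:3` select different columns; in TensorFlow, the closest equivalent of the original intent (keeping channels 1 and 3) would be `tf.gather(y_true_f, [1, 3], axis=1)` rather than a slice.

```python
import numpy as np

# A small stand-in for y_true_f: a batch of 2 rows with 4 channel columns.
y_true_f = np.array([[0., 1., 2., 3.],
                     [4., 5., 6., 7.]])

# NumPy fancy indexing with a list: selects columns 1 and 3.
cols_1_and_3 = y_true_f[:, [1, 3]]   # [[1., 3.], [5., 7.]]

# A slice 1:3 selects columns 1 and 2 instead -- not the same columns.
cols_1_and_2 = y_true_f[:, 1:3]      # [[1., 2.], [5., 6.]]

core_sum_gather = cols_1_and_3.sum(axis=1)  # sums over channels 1 and 3
core_sum_slice = cols_1_and_2.sum(axis=1)   # sums over channels 1 and 2
```

On a tf.Tensor, the list index `[1, 3]` raises exactly the TypeError from the question, while `tf.gather(..., axis=1)` performs the intended non-contiguous channel selection.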
Regarding "python - TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got [1, 3]", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63680459/