I am trying to train a 3D segmentation network from GitHub. My model is implemented in Keras (Python) and is a typical U-Net. The model summary is shown below:
Model: "functional_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 128, 128, 4) 0
__________________________________________________________________________________________________
gaussian_noise (GaussianNoise) (None, 128, 128, 4) 0 input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 128, 128, 64) 1088 gaussian_noise[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 64) 256 conv2d[0][0]
__________________________________________________________________________________________________
p_re_lu (PReLU) (None, 128, 128, 64) 64 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 128, 128, 64) 256 conv2d_1[0][0]
__________________________________________________________________________________________________
p_re_lu_1 (PReLU) (None, 128, 128, 64) 64 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu_1[0][0]
__________________________________________________________________________________________________
add (Add) (None, 128, 128, 64) 0 conv2d[0][0]
conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 128) 32896 add[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 64, 64, 128) 512 conv2d_3[0][0]
__________________________________________________________________________________________________
p_re_lu_2 (PReLU) (None, 64, 64, 128) 128 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 64, 64, 128) 512 conv2d_4[0][0]
__________________________________________________________________________________________________
p_re_lu_3 (PReLU) (None, 64, 64, 128) 128 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 64, 64, 128) 0 conv2d_3[0][0]
conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 32, 32, 256) 131328 add_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 32, 32, 256) 1024 conv2d_6[0][0]
__________________________________________________________________________________________________
p_re_lu_4 (PReLU) (None, 32, 32, 256) 256 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 32, 32, 256) 1024 conv2d_7[0][0]
__________________________________________________________________________________________________
p_re_lu_5 (PReLU) (None, 32, 32, 256) 256 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_5[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 32, 32, 256) 0 conv2d_6[0][0]
conv2d_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 16, 16, 512) 524800 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 512) 2048 conv2d_9[0][0]
__________________________________________________________________________________________________
p_re_lu_6 (PReLU) (None, 16, 16, 512) 512 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 512) 2359808 p_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 16, 16, 512) 2048 conv2d_10[0][0]
__________________________________________________________________________________________________
p_re_lu_7 (PReLU) (None, 16, 16, 512) 512 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 16, 16, 512) 2359808 p_re_lu_7[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 16, 16, 512) 0 conv2d_9[0][0]
conv2d_11[0][0]
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D) (None, 32, 32, 512) 0 add_3[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 256) 524544 up_sampling2d[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 32, 32, 512) 0 add_2[0][0]
conv2d_12[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 32, 32, 512) 2048 concatenate[0][0]
__________________________________________________________________________________________________
p_re_lu_8 (PReLU) (None, 32, 32, 512) 512 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 256) 1179904 p_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 32, 32, 256) 1024 conv2d_13[0][0]
__________________________________________________________________________________________________
p_re_lu_9 (PReLU) (None, 32, 32, 256) 256 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 256) 131072 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 32, 32, 256) 590080 p_re_lu_9[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 32, 32, 256) 0 conv2d_15[0][0]
conv2d_14[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 64, 64, 256) 0 add_4[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 64, 64, 128) 131200 up_sampling2d_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 64, 64, 256) 0 add_1[0][0]
conv2d_16[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 64, 64, 256) 1024 concatenate_1[0][0]
__________________________________________________________________________________________________
p_re_lu_10 (PReLU) (None, 64, 64, 256) 256 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 64, 64, 128) 295040 p_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 64, 64, 128) 512 conv2d_17[0][0]
__________________________________________________________________________________________________
p_re_lu_11 (PReLU) (None, 64, 64, 128) 128 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 64, 64, 128) 32768 concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 64, 64, 128) 147584 p_re_lu_11[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 64, 64, 128) 0 conv2d_19[0][0]
conv2d_18[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 128, 128, 128 0 add_5[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 128, 128, 64) 32832 up_sampling2d_2[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 128, 128, 128 0 add[0][0]
conv2d_20[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 128, 128, 128 512 concatenate_2[0][0]
__________________________________________________________________________________________________
p_re_lu_12 (PReLU) (None, 128, 128, 128 128 batch_normalization_12[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 128, 128, 64) 73792 p_re_lu_12[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 128, 128, 64) 256 conv2d_21[0][0]
__________________________________________________________________________________________________
p_re_lu_13 (PReLU) (None, 128, 128, 64) 64 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 128, 128, 64) 8192 concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 128, 128, 64) 36928 p_re_lu_13[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 128, 128, 64) 0 conv2d_23[0][0]
conv2d_22[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 128, 128, 64) 256 add_6[0][0]
__________________________________________________________________________________________________
p_re_lu_14 (PReLU) (None, 128, 128, 64) 64 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 128, 128, 4) 260 p_re_lu_14[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 128, 128, 4) 0 conv2d_24[0][0]
==================================================================================================
Total params: 10,159,748
Trainable params: 10,153,092
Non-trainable params: 6,656
__________________________________________________________________________________________________
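For reading the summary, here is a minimal sketch of the repeating Conv2D → BatchNormalization → PReLU block with the residual Add connection that the summary shows. It is reconstructed from the layer names and parameter counts only, so the kernel sizes and strides are inferred guesses, not the actual GitHub code:

from tensorflow.keras import layers

def residual_block(x, filters, strides=1):
    """One encoder stage as it appears in the summary: a 2x2 entry conv whose
    output is both the shortcut and the input to BN -> PReLU -> 3x3 conv (x2),
    followed by an Add. Kernel sizes are inferred from the parameter counts."""
    shortcut = layers.Conv2D(filters, 2, strides=strides, padding="same")(x)
    y = layers.BatchNormalization()(shortcut)
    y = layers.PReLU(shared_axes=[1, 2])(y)  # one alpha per channel, matching the PReLU param counts
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([shortcut, y])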
My training data uses the input shape (batch, height, width, channel). I saved the training images and labels in two NumPy files (.npy): x_training.npy contains the images (shape: (20, 128, 128, 4)) and y_training.npy contains the image labels (shape: (20, 128, 128, 4)). I then read the data with a custom data generator.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def img_msk_gen(X33_train, Y_train, seed):
    '''
    A custom generator that performs data augmentation on both patches
    and their corresponding targets (masks).
    '''
    # Identical augmentation pipelines; the shared seed keeps images and masks in sync.
    datagen = ImageDataGenerator(horizontal_flip=True, data_format="channels_last")
    datagen_msk = ImageDataGenerator(horizontal_flip=True, data_format="channels_last")
    image_generator = datagen.flow(X33_train, batch_size=4, seed=seed)
    y_generator = datagen_msk.flow(Y_train, batch_size=4, seed=seed)
    while True:
        yield next(image_generator), next(y_generator)
Finally, I try to train my model:
import numpy as np

# Load data from disk
X_patches = np.load("./x_training.npy").astype(np.float32)
Y_labels = np.load("./y_training.npy").astype(np.float32)
X33_train = X_patches
Y_train = Y_labels

batch_size = 4  # matches the batch_size used inside the generator
train_generator = img_msk_gen(X33_train=X_patches, Y_train=Y_labels, seed=9999)
model.fit_generator(train_generator,
                    steps_per_epoch=len(X33_train) // batch_size,
                    verbose=1)
However, it throws an error like this:
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got [1, 3]
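A sketch, assuming the arrays and train_generator above are already defined, for checking that the generator itself yields image/mask batches of the expected shape:

# Pull one batch from the generator and inspect the shapes,
# to rule the data pipeline out before looking elsewhere.
x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)  # expected: (4, 128, 128, 4) (4, 128, 128, 4)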
Any suggestions or ideas would be helpful. My full model implementation is here on Colab, and the data is here on Google Drive. There are similar questions of this type, but I could not solve my problem with them. Any kind of help would be greatly appreciated. Thanks in advance.
Best answer
The error states it directly: you passed [1, 3], which is a list, while it expects a number or a slice.
Perhaps you meant [1:3]?
You seem to be passing [1, 3], so maybe you should change:
y_core=K.sum(y_true_f[:,[1,3]],axis=1)
to
y_core=K.sum(y_true_f[1:3],axis=1)
That is at least valid syntax; I am not sure it is what you want.
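If the goal really was to pick the non-contiguous class channels 1 and 3 (which [1:3] does not do, since it slices the first axis and covers only indices 1 and 2), tf.gather can select arbitrary columns. A minimal sketch, assuming y_true_f is a flattened (pixels, classes) tensor:

import tensorflow as tf
from tensorflow.keras import backend as K

# Hypothetical flattened one-hot ground truth: (num_pixels, num_classes), 4 classes here.
y_true_f = tf.constant([[1., 0., 0., 0.],
                        [0., 1., 0., 0.],
                        [0., 0., 0., 1.]])

# tf.gather picks the non-contiguous columns 1 and 3 that the list index [1, 3] was aiming for.
y_core = K.sum(tf.gather(y_true_f, [1, 3], axis=1), axis=1)
print(y_core.numpy())  # [0. 1. 1.]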
For python - TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got [1, 3], see the similar question on Stack Overflow: https://stackoverflow.com/questions/63680459/