How do I resolve this error when I fit my tensorflow.keras Model, like this:
history_model_2 = model.fit(train_data.next_batch(),
validation_data=validation_data.next_batch(),
epochs=32)
This is the error I get:
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Input to reshape is a tensor with 983040 values, but the requested shape has 1966080
[[node model_2/reshape/Reshape (defined at <ipython-input-82-15c7d8d22e71>:10) ]]
[[model_2/ctc/Cast_3/_90]]
(1) Invalid argument: Input to reshape is a tensor with 983040 values, but the requested shape has 1966080
[[node model_2/reshape/Reshape (defined at <ipython-input-82-15c7d8d22e71>:10) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_33412]
Function call stack:
train_function -> train_function
In my model.fit(), train_data.next_batch() is a generator that yields the data for the x and y arguments. (I started using it because model.fit_generator is being deprecated; this generator and almost the complete code section are inspired by this example from keras ocr examples on GitHub, from which I also took the ctc loss function shown below.)
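For context, here is a minimal sketch of what next_batch() might yield, modeled on that keras OCR example; the names, shapes, and batch size are placeholders, not my actual generator:

import numpy as np

def next_batch(batch_size=30):
    # Hypothetical generator: yields (inputs, outputs) tuples forever, as model.fit expects.
    while True:
        inputs = {
            'the_input': np.zeros((batch_size, 128, 64, 1), dtype='float32'),  # images, must match the Input shape
            'the_labels': np.zeros((batch_size, 16), dtype='float32'),         # integer-encoded transcriptions
            'input_length': np.full((batch_size, 1), 30, dtype='int64'),       # RNN time steps seen by CTC (32 - 2)
            'label_length': np.full((batch_size, 1), 16, dtype='int64'),       # true length of each label
        }
        # The model's only output is the CTC loss itself, so the target is a dummy:
        outputs = {'ctc': np.zeros((batch_size,), dtype='float32')}
        yield inputs, outputs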
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras import backend as tf_keras_backend
def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
# the 2 is critical here, since the first couple outputs of the RNN tend to be garbage:
y_pred = y_pred[:, 2:, :]
return tf_keras_backend.ctc_batch_cost(labels, y_pred, input_length, label_length)
# Make Network
input_data = layers.Input(name='the_input', shape=(128, 64, 1), dtype='float32') # (None, 128, 64, 1)
# Convolution layer (VGG)
inner = layers.Conv2D(64, (3, 3), padding='same', name='conv1', kernel_initializer='he_normal', activation='relu')(input_data) # (None, 128, 64, 64)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.MaxPooling2D(pool_size=(2, 2), name='max1')(inner) # (None, 64, 32, 64)
inner = layers.Conv2D(128, (3, 3), padding='same', name='conv2', kernel_initializer='he_normal', activation='relu')(inner) # (None, 64, 32, 128)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.MaxPooling2D(pool_size=(2, 2), name='max2')(inner) # (None, 32, 16, 128)
inner = layers.Conv2D(256, (3, 3), padding='same', name='conv3', kernel_initializer='he_normal', activation='relu')(inner) # (None, 32, 16, 256)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.Conv2D(256, (3, 3), padding='same', name='conv4', kernel_initializer='he_normal', activation='relu')(inner) # (None, 32, 16, 256)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.MaxPooling2D(pool_size=(1, 2), name='max3')(inner) # (None, 32, 8, 256)
inner = layers.Conv2D(512, (3, 3), padding='same', name='conv5', kernel_initializer='he_normal', activation='relu')(inner) # (None, 32, 8, 512)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.Conv2D(512, (3, 3), padding='same', name='conv6', activation='relu')(inner) # (None, 32, 8, 512)
inner = layers.BatchNormalization()(inner)
inner = layers.Activation('relu')(inner)
inner = layers.MaxPooling2D(pool_size=(1, 2), name='max4')(inner) # (None, 32, 4, 512)
inner = layers.Conv2D(512, (2, 2), padding='same', kernel_initializer='he_normal', name='conv7', activation='relu')(inner) # (None, 32, 4, 512)
before_reshape = layers.BatchNormalization()(inner)
before_reshape = layers.Activation('relu')(before_reshape)  # feed the activated tensor into the reshape below
# CNN to RNN
reshape_op = layers.Reshape(target_shape=(32, 2048), name='reshape')(before_reshape) # (None, 32, 2048)
dense_after_reshape = layers.Dense(64, activation='relu', kernel_initializer='he_normal', name='dense1')(reshape_op) # (None, 32, 64)
# RNN layer
gru_1 = layers.GRU(256, return_sequences=True, kernel_initializer='he_normal', name='gru1')(dense_after_reshape) # (None, 32, 512)
gru_1b = layers.GRU(256, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(dense_after_reshape)
reversed_gru_1b = layers.Lambda(lambda inputTensor: tf_keras_backend.reverse(inputTensor, axes=1)) (gru_1b)
gru1_merged = layers.add([gru_1, reversed_gru_1b]) # (None, 32, 512)
gru1_merged = layers.BatchNormalization()(gru1_merged)
gru_2 = layers.GRU(256, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged)
gru_2b = layers.GRU(256, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged)
reversed_gru_2b= layers.Lambda(lambda inputTensor: tf_keras_backend.reverse(inputTensor, axes=1)) (gru_2b)
gru2_merged = layers.concatenate([gru_2, reversed_gru_2b]) # (None, 32, 1024)
gru2_merged = layers.BatchNormalization()(gru2_merged)
# transforms RNN output to character activations:
num_classes = 80  # assumed from the shape comment below: alphabet size + 1 for the CTC blank
inner = layers.Dense(num_classes, kernel_initializer='he_normal', name='dense2')(gru2_merged) # (None, 32, 80)
y_pred = layers.Activation('softmax', name='softmax')(inner)
labels = layers.Input(name='the_labels', shape=[16], dtype='float32')
input_length = layers.Input(name='input_length', shape=[1], dtype='int64')
label_length = layers.Input(name='label_length', shape=[1], dtype='int64')
# loss function
loss_out = layers.Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')(
[y_pred, labels, input_length, label_length]
)
model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)
Compiling it:
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer = 'adam')
I have also tried debugging in several ways to make sure the dimensions are correct, but to no avail.
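For example, one check I can run (a sketch assuming TF 2.x, using the variables defined above) is to probe the tensor feeding the Reshape layer and compare it with the requested (32, 2048) target:

import numpy as np

# Build a sub-model that stops right before the reshape and run a dummy image through it.
probe = Model(inputs=input_data, outputs=before_reshape)
print(probe(np.zeros((1, 128, 64, 1))).shape)  # expected: (1, 32, 4, 512) -> 32 * 4 * 512 == 32 * 2048

# Listing every layer's output shape also shows where the sizes diverge:
model.summary()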
Best Answer
The bug was in the generator I had written to preprocess the images: it was producing 64x64 images instead of 128x64. I regret not having checked it. (This matches the error: the requested shape has exactly twice as many values as the tensor, 1966080 = 2 x 983040, which is what you get when the first image dimension is halved from 128 to 64.)
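A cheap guard would have caught this; a sketch, assuming the generator yields dicts keyed by the input layer names as in the keras OCR example:

# Pull one batch and verify the image shape against the model's Input before training.
xs, ys = next(train_data.next_batch())
assert xs['the_input'].shape[1:] == (128, 64, 1), (
    'generator produced %s, model expects (128, 64, 1)' % (xs['the_input'].shape[1:],)
)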
Regarding keras - tensorflow.keras: "Invalid argument: Input to reshape is a tensor with 983040 values, but the requested shape has 1966080" during fit, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/62995465/