
python - Unable to concatenate Keras Lambda layers

Reposted. Author: 太空宇宙. Updated: 2023-11-04 00:08:01

I need to process some layers differently, applying an OR operation to them. I found a way to do it: I create a Lambda layer and process the data with keras.backend.any. I also split the data first, because I need to apply my logical OR to two separate groups.

def logical_or_layer(x):
    """Perform an OR operation over the input tensor."""
    import keras.backend
    # normalize to 0/1
    aux_array = keras.backend.sign(x)
    aux_array = keras.backend.relu(aux_array)
    # OR operation -- note: without an `axis` argument, keras.backend.any
    # reduces over ALL dimensions (batch included) and returns a scalar
    aux_array = keras.backend.any(aux_array)
    # cast True/False back to 1/0
    aux_array = keras.backend.cast(aux_array, dtype='float32')

    return aux_array
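A quick NumPy sketch (a stand-in for the Keras backend ops, not part of the project's actual code) shows what this function computes, and why the missing `axis` argument matters: `any` with no axis reduces over every dimension, including the batch, and collapses to a single scalar.

```python
import numpy as np

def logical_or_np(x, axis=None):
    """NumPy mirror of the Lambda above: sign -> relu -> any -> cast."""
    a = np.sign(x)              # map values to {-1, 0, 1}
    a = np.maximum(a, 0.0)      # relu: -1 becomes 0, so values are now {0, 1}
    a = np.any(a, axis=axis)    # logical OR; axis=None collapses EVERYTHING
    return a.astype(np.float32) # cast True/False back to 1/0

batch = np.array([[-1.0, -1.0,  1.0],
                  [-1.0, -1.0, -1.0]])
print(logical_or_np(batch))             # 1.0 -- one scalar for the whole batch
print(logical_or_np(batch, axis=-1))    # [1. 0.] -- one OR result per sample
```

With `axis=None` (the question's code) every sample in the batch is ORed together, which is what later produces scalar outputs; `axis=-1` would keep one result per sample.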

Then I create the layers like this:

#this is the input tensor
inputs = Input(shape=(inputSize,))

#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neurule layer, the outputs need to be put into SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x:x, name='signumAfterNeurules')(x)

#separating into 2 (2 possible outputs)
layer_split0 = Lambda(lambda x: x[:, :end_output0], output_shape=(11,), name='layer_split0')(x)
layer_split1 = Lambda(lambda x: x[:, start_output1:end_output1], output_shape=(9,), name='layer_split1')(x)

#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)
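The slicing done by the two split layers can be sanity-checked in plain NumPy. This is only a sketch: the split points are assumptions (`end_output0 = start_output1 = 11`, `end_output1 = 20`), inferred from the `output_shape`s of 11 and 9 above.

```python
import numpy as np

x = np.arange(2 * 20, dtype=np.float32).reshape(2, 20)  # (batch=2, 20 neurule outputs)
end_output0, start_output1, end_output1 = 11, 11, 20    # assumed split points

split0 = x[:, :end_output0]                  # first group of neurules
split1 = x[:, start_output1:end_output1]     # second group of neurules
print(split0.shape, split1.shape)            # (2, 11) (2, 9)
```

Both slices keep the batch dimension intact; only the feature axis is divided.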

FYI: Neurules are neurons created from IF-THEN rules. This is a project that works with neurules, which are trained with a truth table and represent expert knowledge.

Now, when I try to join the split layers back together like this:

y = concatenate([y_0,y_1])

I get this error:

ValueError: Can't concatenate scalars (use tf.stack instead) for 'concatenate_32/concat' (op: 'ConcatV2') with input shapes: [], [], [].
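The NumPy analogue (a sketch, not the actual TensorFlow code path) reproduces the same failure mode: zero-dimensional values have no axis to concatenate along, while stacking creates one.

```python
import numpy as np

a, b = np.float32(1.0), np.float32(0.0)   # stand-ins for the scalar or0/or1 outputs

try:
    np.concatenate([a, b])                # 0-d arrays have no axis to join along
except ValueError as err:
    print("concatenate failed:", err)

print(np.stack([a, b]))                   # stack introduces a new axis -> shape (2,)
```

This is exactly what the TensorFlow error message hints at with "use tf.stack instead".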

Fine then, let's use tf.stack as suggested:

y = keras.backend.stack([y_0, y_1])

But then it can no longer be used as the output of a model:

model = Model(inputs=inputs, outputs=y)

This raises the error:

ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("stack_14:0", shape=(2,), dtype=float32)

Checking with keras.backend.is_keras_tensor(y) gives me False, while it gives True for all the other layers.

How should I connect these layers correctly?

EDIT: Following @today's answer, I was able to create a new Lambda layer that wraps the stack. But the output shape is wrong: it should be (None, 2), yet it is (2, None, 1). Here is model.summary():

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_90 (InputLayer)           (None, 24)           0
__________________________________________________________________________________________________
dense_90 (Dense)                (None, 20)           500         input_90[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda)    (None, 20)           0           dense_90[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda)           (None, 11)           0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda)           (None, 9)            0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda)                    (None, 1)            0           layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda)                    (None, 1)            0           layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda)                 (2, None, 1)         0           or0[0][0]
                                                                 or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________

How should I define output_shape in the layer so that the batch dimension is preserved at the end?
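The shape (2, None, 1) comes from K.stack's default axis=0, which inserts the new axis *before* the batch dimension. A NumPy sketch of the two axis choices (assuming each OR output has shape (batch, 1), as in the summary above):

```python
import numpy as np

batch = 4
y0 = np.zeros((batch, 1), dtype=np.float32)   # stand-in for or0's output
y1 = np.ones((batch, 1), dtype=np.float32)    # stand-in for or1's output

print(np.stack([y0, y1]).shape)               # (2, 4, 1): new axis first, batch pushed back
stacked = np.stack([y0, y1], axis=1)          # (4, 2, 1): new axis after the batch dim
print(np.squeeze(stacked, axis=-1).shape)     # (4, 2): drop the trailing singleton
```

Stacking along axis 1 (and squeezing the trailing singleton) keeps the batch first, which corresponds to the desired (None, 2) output shape.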

EDIT2: Following @today's hint, I did the following:

#this is the input tensor
inputs = Input(shape=(inputSize,))

#this is the Neurule layer
x = Dense(neurulesQt, activation='softsign')(inputs)
#after each neurule layer, the outputs need to be mapped to SIGNUM (-1 or 1)
x = Lambda(signumTransform, output_shape=lambda x: x, name='signumAfterNeurules')(x)
#separating into 2 groups (2 possible outputs)
layer_split0 = Lambda(lambda x: x[:, :end_output0], output_shape=[11], name='layer_split0')(x)
layer_split1 = Lambda(lambda x: x[:, start_output1:end_output1], output_shape=[9], name='layer_split1')(x)
#this is the OR layer
y_0 = Lambda(logical_or_layer, output_shape=(1,), name='or0')(layer_split0)
y_1 = Lambda(logical_or_layer, output_shape=(1,), name='or1')(layer_split1)

y = Lambda(lambda x: K.stack([x[0], x[1]]), output_shape=(2,), name="output")([y_0, y_1])

Now it seems to work correctly; model.summary() below:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 24)           0
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 20)           500         input_1[0][0]
__________________________________________________________________________________________________
signumAfterNeurules (Lambda)    (None, 20)           0           dense_1[0][0]
__________________________________________________________________________________________________
layer_split0 (Lambda)           (None, 11)           0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
layer_split1 (Lambda)           (None, 9)            0           signumAfterNeurules[0][0]
__________________________________________________________________________________________________
or0 (Lambda)                    (None, 1)            0           layer_split0[0][0]
__________________________________________________________________________________________________
or1 (Lambda)                    (None, 1)            0           layer_split1[0][0]
__________________________________________________________________________________________________
output (Lambda)                 (None, 2)            0           or0[0][0]
                                                                 or1[0][0]
==================================================================================================
Total params: 500
Trainable params: 0
Non-trainable params: 500
__________________________________________________________________________________________________

Best Answer

Wrap K.stack in a Lambda layer, like this:

from keras import backend as K

y = Lambda(lambda x: K.stack([x[0], x[1]]))([y_0, y_1])

Regarding "python - Unable to concatenate Keras Lambda layers", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53376996/
