
python - Manipulating the output of a neural network


I have a neural network whose input is (m, 2, 3, 96, 96) and whose output is (m, 2, 128). I am trying to convert that output into (m, 1, 128) by subtracting the two embeddings in each pair (output[m][0] - output[m][1]), and then feed the resulting 1x128 vectors into a Dense layer.

I have tried Lambda and keras.backend.Subtract layers, both inside the network and in the wrapper.

from keras.models import Model
from keras.layers import (Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, AveragePooling2D, Flatten, Dense, Lambda,
                          TimeDistributed)
from keras import backend as K

# The (3, 96, 96) input shape, BatchNormalization(axis=1) and the
# data_format='channels_first' pooling below all assume channels-first images.
K.set_image_data_format('channels_first')

# NOTE: the inception_block_* helpers used below are defined elsewhere in the
# asker's code and are not shown here.


def faceRecoModel(input_shape):
"""
Implementation of the Inception model used for FaceNet

Arguments:
input_shape -- shape of the images of the dataset

Returns:
model -- a Model() instance in Keras
"""

# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)

# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)

# First Block
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X)
X = BatchNormalization(axis=1, name='bn1')(X)
X = Activation('relu')(X)

# Zero-Padding + MAXPOOL
X = ZeroPadding2D((1, 1))(X)
X = MaxPooling2D((3, 3), strides=2)(X)

# Second Block
X = Conv2D(64, (1, 1), strides=(1, 1), name='conv2')(X)
X = BatchNormalization(axis=1, epsilon=0.00001, name='bn2')(X)
X = Activation('relu')(X)

# Zero-Padding
X = ZeroPadding2D((1, 1))(X)

# Third Block
X = Conv2D(192, (3, 3), strides=(1, 1), name='conv3')(X)
X = BatchNormalization(axis=1, epsilon=0.00001, name='bn3')(X)
X = Activation('relu')(X)

# Zero-Padding + MAXPOOL
X = ZeroPadding2D((1, 1))(X)
X = MaxPooling2D(pool_size=3, strides=2)(X)

# Inception 1: a/b/c
X = inception_block_1a(X)
X = inception_block_1b(X)
X = inception_block_1c(X)

# Inception 2: a/b
X = inception_block_2a(X)
X = inception_block_2b(X)

# Inception 3: a/b
X = inception_block_3a(X)
X = inception_block_3b(X)

# Top layer
X = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), data_format='channels_first')(X)
X = Flatten()(X)
X = Dense(128, name='dense_layer')(X)

# L2 normalization
X = Lambda(lambda x: K.l2_normalize(x, axis=1))(X)

# Create model instance
model = Model(inputs=X_input, outputs=X, name='FaceRecoModel')

return model



# now this is the wrapper I mentioned
model = faceRecoModel((3, 96, 96))
i = Input((2, 3, 96, 96))
o = TimeDistributed(model)(i)
model = Model(i, o)
model.compile(optimizer='adam', loss=pair_loss)  # pair_loss: the asker's custom loss (not shown)
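
For illustration, here is a minimal sketch of how the keras.layers.Subtract approach mentioned in the question could be wired onto the TimeDistributed output. The slicing Lambdas, the Dense(1) head, and the variable names are assumptions made for the sketch, not the asker's actual attempt:

# Hypothetical sketch (not the asker's code): split the (m, 2, 128) output into two
# (m, 128) embeddings and subtract them with the keras.layers.Subtract layer.
from keras.layers import Subtract

base = faceRecoModel((3, 96, 96))
i = Input((2, 3, 96, 96))
o = TimeDistributed(base)(i)                 # shape (m, 2, 128)
e0 = Lambda(lambda x: x[:, 0])(o)            # shape (m, 128)
e1 = Lambda(lambda x: x[:, 1])(o)            # shape (m, 128)
diff = Subtract()([e0, e1])                  # shape (m, 128)
out = Dense(1, activation='sigmoid')(diff)   # hypothetical Dense head
subtract_model = Model(i, out)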

Best Answer

X = Lambda(lambda x: x[:, 0] - x[:, 1])(X)
X = Dense(...)(X)
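
Slotted into the wrapper from the question, the answer would look roughly like the following; the Dense width of 128 is a placeholder, and pair_loss is the asker's custom loss from the question:

# Hypothetical placement of the answer inside the asker's wrapper:
i = Input((2, 3, 96, 96))
o = TimeDistributed(faceRecoModel((3, 96, 96)))(i)   # shape (m, 2, 128)
X = Lambda(lambda x: x[:, 0] - x[:, 1])(o)           # shape (m, 128): embedding difference
X = Dense(128)(X)                                    # placeholder width for the Dense layer
model = Model(i, X)
model.compile(optimizer='adam', loss=pair_loss)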

Regarding "python - Manipulating the output of a neural network", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57633584/
