
python - How can I make a trainable parameter in keras?


Thank you for looking at my question.

For example, the final output is the sum of two matrices A and B, like this:

output = keras.layers.add([A, B])

Now I want to introduce a new parameter x that changes the output.

I want newoutput = A*x + B*(1-x),

where x is a trainable parameter of my network.

How should I do this? Please help me, thank you very much!
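
For concreteness, the operation being asked for can be written with Keras backend ops as below; this is only a sketch with x fixed to a constant and dummy stand-ins for A and B, since making x trainable is exactly the open question:

import numpy as np
from keras import backend as K

# Dummy stand-ins for the two branch outputs A and B
A = K.constant(np.ones((2, 4, 4, 8)))
B = K.constant(np.zeros((2, 4, 4, 8)))

x = 0.5                              # fixed blend weight; the goal is to learn this value
newoutput = x * A + (1 - x) * B      # element-wise blend of A and B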

Edit (partial code):

conv1 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(input)
drop1 = Dropout(0.5)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(drop1)

conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
drop2 = Dropout(0.5)(conv2)

up1 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop2))

# the line I want to change:
merge = add([drop2,up1])
# This layer simply adds drop2 and up1. Now I want to add a trainable parameter x to adjust the relative weight of those two layers.

I tried to write the code, but some problems remain:

1. How do I use my own layer? Is it

merge = Mylayer()(drop2,up1)

or some other way?

2. What does output_dim mean? These parameters are all 3-dimensional matrices, so what should output_dim be?

Thanks... T.T

Edit 2 (solved):

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

from keras.layers import add


class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create the trainable mixing weight x, initialized to 0.5
        self._x = K.variable(0.5)
        self.trainable_weights = [self._x]

        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        result = add([self._x * A, (1 - self._x) * B])
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0]

Best Answer

You have to create a custom class that inherits from Layer and create the trainable parameter with self.add_weight(...). You can find examples of this here and there.

For your example, the layer would look something like this:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np


class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self._A = self.add_weight(name='A',
                                  shape=(input_shape[1], self.output_dim),
                                  initializer='uniform',
                                  trainable=True)
        self._B = self.add_weight(name='B',
                                  shape=(input_shape[1], self.output_dim),
                                  initializer='uniform',
                                  trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return K.dot(x, self._A) + K.dot(1 - x, self._B)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

Edit: based only on the names, I (wrongly) assumed that x was the layer input and that you wanted to optimize A and B. However, as you said, you want to optimize x. For that, you can do something like the following:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np


class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self._x = self.add_weight(name='x',
                                  shape=(1,),
                                  initializer='uniform',
                                  trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        return K.dot(self._x, A) + K.dot(1 - self._x, B)

    def compute_output_shape(self, input_shape):
        return input_shape[0]
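
One caveat on the snippet above (an aside, not from the original answer): with a weight of shape (1,), K.dot may not give the plain element-wise scaling that newoutput = A*x + B*(1-x) calls for. The asker's solved version earlier in the post multiplies directly instead, so a drop-in replacement for call would be:

    def call(self, x):
        A, B = x
        # element-wise blend of the two inputs with the learned scalar x
        return self._x * A + (1 - self._x) * B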

Edit 2: you can call this layer with

merge = MyLayer()([drop2,up1])
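
A quick follow-up sketch (assuming the variable names from the partial code above; the layer name 'mix' is added here only for illustration): the custom layer replaces the original merge = add([drop2, up1]) line, and the learned blend weight can be read back after training with get_weights():

# Replace the original merge = add([drop2, up1]) with the custom layer
merge = MyLayer(name='mix')([drop2, up1])

# ... build the rest of the model and train as usual, then inspect the
# learned scalar x (get_weights() returns the layer's weights as numpy arrays)
learned_x = model.get_layer('mix').get_weights()[0]
print(learned_x)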

Regarding python - How can I make a trainable parameter in keras?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52031587/
