tensorflow - Gradient of an output w.r.t. network weights while holding another output constant


Suppose I have a simple MLP:

[Figure: a simple MLP]

and I have the gradient of some loss function with respect to the output layer, giving G = [0, -1] (i.e., increasing the second output variable decreases the loss function).

If I take the gradient of G with respect to my network parameters and apply the corresponding weight update, the second output variable should increase, but nothing is said about the first output variable, and a scaled application of the gradient will almost certainly change it (whether by increasing or decreasing it).

How can I modify my loss function, or any of the gradient calculations, to ensure that the first output does not change?
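
For concreteness, here is a minimal sketch of that setup, with illustrative names and a single example so the shapes line up: in TF1, tf.gradients accepts a grad_ys argument, which lets a given output gradient such as G = [0, -1] be pushed back to the parameters.

import tensorflow as tf
import numpy as np

# Illustrative single-layer network; the names here are placeholders.
x = tf.placeholder(tf.float32, shape=[1, 3], name='x')
W = tf.Variable(tf.random_normal(shape=[3, 2], stddev=0.1), name='W')
b = tf.Variable(tf.random_normal(shape=[2], stddev=0.1), name='b')
out = tf.matmul(x, W) + b  # shape [1, 2]

G = tf.constant([[0.0, -1.0]])  # assumed dLoss/dOut for the single example
grad_W, grad_b = tf.gradients(out, [W, b], grad_ys=G)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([grad_W, grad_b], feed_dict={x: np.random.random((1, 3))}))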

Best Answer

Update: I misunderstood the question. Here is the new answer.

To do this, you only need to update the connections between the hidden layer and the second output unit, while keeping the connections between the hidden layer and the first output unit intact.

The first approach is to introduce two sets of variables: one for the connections between the hidden layer and the first output unit, and one for the rest. Then you can combine them with tf.stack, and take the derivatives with respect to the corresponding variables via var_list. Something like this (for illustration only; untested; use with care):

# separate variables for the hidden -> out1 and hidden -> out2 connections
out1 = tf.matmul(hidden, W_h_to_out1) + b_h_to_out1  # shape [batch, 1]
out2 = tf.matmul(hidden, W_h_to_out2) + b_h_to_out2  # shape [batch, 1]
# combine the two columns into a [batch, 2] tensor
# (tf.concat([out1, out2], axis=1) would give the same result)
out = tf.stack([out1, out2])
out = tf.transpose(tf.reshape(out, [2, -1]))
loss = some_function_of(out)
optimizer = tf.train.GradientDescentOptimizer(0.1)
# only the variables feeding the second output unit are updated
train_op_second_unit = optimizer.minimize(loss, var_list=[W_h_to_out2, b_h_to_out2])

Another approach is to use a mask. This is easier to implement and more flexible when you work with certain frameworks (e.g., slim, Keras, etc.), and it is the way I would recommend. The idea is to hide the first output unit from the loss function while leaving the second output unit unchanged. This can be done with a binary variable: multiply an entry by 1 if you want to keep it, and by 0 to drop it. Here is the code:

import tensorflow as tf
import numpy as np

# let's make our tiny dataset: (x, y) pairs, where x = (x1, x2, x3), y = (y1, y2),
# and y1 = x1+x2+x3, y2 = x1^2+x2^2+x3^2

# n_sample data points
n_sample = 8
data_x = np.random.random((n_sample, 3))
data_y = np.zeros((n_sample, 2))
data_y[:, 0] += np.sum(data_x, axis=1)
data_y[:, 1] += np.sum(data_x**2, axis=1)
data_y += 0.01 * np.random.random((n_sample, 2)) # add some noise


# build graph
# suppose we have a network of shape [3, 4, 2], i.e.: one hidden layer of size 4.

x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
y = tf.placeholder(tf.float32, shape=[None, 2], name='y')
mask = tf.placeholder(tf.float32, shape=[None, 2], name='mask')

W1 = tf.Variable(tf.random_normal(shape=[3, 4], stddev=0.1), name='W1')
b1 = tf.Variable(tf.random_normal(shape=[4], stddev=0.1), name='b1')
hidden = tf.nn.sigmoid(tf.matmul(x, W1) + b1)
W2 = tf.Variable(tf.random_normal(shape=[4, 2], stddev=0.1), name='W2')
b2 = tf.Variable(tf.random_normal(shape=[2], stddev=0.1), name='b2')
out = tf.matmul(hidden, W2) + b2
loss = tf.reduce_mean(tf.square(out - y))

# multiply out by mask, so out[:, 0] is "invisible" to the loss and no gradient is propagated through it
masked_out = mask * out
loss2 = tf.reduce_mean(tf.square(masked_out - y))

optimizer = tf.train.GradientDescentOptimizer(0.1)
train_op_all = optimizer.minimize(loss) # update all variables in the network
train_op12 = optimizer.minimize(loss, var_list=[W2, b2]) # update hidden -> output layer
train_op2 = optimizer.minimize(loss2, var_list=[W2, b2]) # update hidden -> second output unit


sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# mask that zeros out the first output column and keeps the second
mask_out1 = np.zeros((n_sample, 2))
mask_out1[:, 1] += 1.0
# print(mask_out1)
print(sess.run([hidden, out, loss, loss2], feed_dict={x: data_x, y: data_y, mask: mask_out1}))

# In this case, only out2 is updated. You should see both loss and loss2 decrease.
sess.run(train_op2, feed_dict={x: data_x, y:data_y, mask: mask_out1})
print(sess.run([hidden, out, loss, loss2], feed_dict={x: data_x, y:data_y, mask: mask_out1}))

# In this case, both out1 and out2 are updated. You should see both loss and loss2 decrease.
sess.run(train_op12, feed_dict={x: data_x, y:data_y, mask: mask_out1})
print(sess.run([hidden, out, loss, loss2], feed_dict={x: data_x, y:data_y, mask: mask_out1}))

# In this case, everything is updated. You should see both loss and loss2 decrease.
sess.run(train_op_all, feed_dict={x: data_x, y:data_y, mask: mask_out1})
print(sess.run([hidden, out, loss, loss2], feed_dict={x: data_x, y:data_y, mask: mask_out1}))
sess.close()
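
For what it's worth, an alternative to the mask, assuming the same out, y, optimizer, W2, and b2 defined above, is to build the loss from the second output column only; the first unit then contributes nothing to the gradient (the value differs from loss2 only by the constant masked-out term and the averaging over both columns):

# build the loss from the second output column only; no gradient flows
# into the weights feeding the first output unit
loss2_alt = tf.reduce_mean(tf.square(out[:, 1] - y[:, 1]))
train_op2_alt = optimizer.minimize(loss2_alt, var_list=[W2, b2])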

======================== Below is the old answer ========================

To get the derivatives w.r.t. different variables, you can pass a var_list to decide which variables to update. Here is an example:

import tensorflow as tf
import numpy as np

# let's make our tiny dataset: (x, y) pairs, where x = (x1, x2, x3), y = (y1, y2),
# and y1 = x1+x2+x3, y2 = x1^2+x2^2+x3^2

# n_sample data points
n_sample = 8
data_x = np.random.random((n_sample, 3))
data_y = np.zeros((n_sample, 2))
data_y[:, 0] += np.sum(data_x, axis=1)
data_y[:, 1] += np.sum(data_x**2, axis=1)
data_y += 0.01 * np.random.random((n_sample, 2)) # add some noise


# build graph
# suppose we have a network of shape [3, 4, 2], i.e.: one hidden layer of size 4.

x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
y = tf.placeholder(tf.float32, shape=[None, 2], name='y')

W1 = tf.Variable(tf.random_normal(shape=[3, 4], stddev=0.1), name='W1')
b1 = tf.Variable(tf.random_normal(shape=[4], stddev=0.1), name='b1')
hidden = tf.nn.sigmoid(tf.matmul(x, W1) + b1)
W2 = tf.Variable(tf.random_normal(shape=[4, 2], stddev=0.1), name='W2')
b2 = tf.Variable(tf.random_normal(shape=[2], stddev=0.1), name='b2')
out = tf.matmul(hidden, W2) + b2

loss = tf.reduce_mean(tf.square(out - y))
optimizer = tf.train.GradientDescentOptimizer(0.1)
# You can pass a variable list to decide which variable(s) to minimize.
train_op_second_layer = optimizer.minimize(loss, var_list=[W2, b2])
# If there is no var_list, all variables will be updated.
train_op_all = optimizer.minimize(loss)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print(sess.run([W1, b1, W2, b2, loss], feed_dict={x: data_x, y:data_y}))

# In this case, only W2 and b2 are updated. You should see the loss decrease.
sess.run(train_op_second_layer, feed_dict={x: data_x, y:data_y})
print(sess.run([W1, b1, W2, b2, loss], feed_dict={x: data_x, y:data_y}))

# In this case, all variables are updated. You should see the loss decrease.
sess.run(train_op_all, feed_dict={x: data_x, y:data_y})
print(sess.run([W1, b1, W2, b2, loss], feed_dict={x: data_x, y:data_y}))
sess.close()
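
If you want to inspect or post-process the per-variable gradients instead of calling minimize directly, the same var_list can be passed to compute_gradients; a small sketch reusing the graph above:

# compute_gradients returns (gradient, variable) pairs restricted to var_list;
# apply_gradients then performs the update, equivalent to minimize()
grads_and_vars = optimizer.compute_gradients(loss, var_list=[W2, b2])
train_op_manual = optimizer.apply_gradients(grads_and_vars)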

Regarding "tensorflow - Gradient of an output w.r.t. network weights while holding another output constant", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42182233/
