
python - TensorFlow: "No gradients provided for any variable" and partial_run


Question

TensorFlow's partial_run() method is not working the way I expected. I use it near the bottom of the code provided below, and I believe it is what produces the attached error.

The general data flow is: I need to get a prediction from the model, use that prediction in some non-TensorFlow code (to program a software synthesizer), and then, after playing a MIDI note, obtain audio features (MFCCs, RMS, FFT) that can finally be passed to the cost function to check how close the predicted timbre comes to recreating the desired sound given as the current example.
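For context, here is a minimal sketch (a toy graph, not the code below) of how partial_run() is intended to work: partial_run_setup() declares every fetch and feed up front, and the returned handle then lets you evaluate the graph in stages within a single logical run, with arbitrary non-TensorFlow work in between:

import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a * 2.0   # first stage
d = c + b     # second stage, reuses c from the same run

with tf.Session() as sess:
    handle = sess.partial_run_setup([c, d], [a, b])
    c_val = sess.partial_run(handle, c, feed_dict={a: 3.0})   # c_val == 6.0
    # ... non-TensorFlow work can happen here ...
    d_val = sess.partial_run(handle, d, feed_dict={b: 1.0})   # d_val == 7.0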

Code - preprocessing omitted

# Create the tensorflow graph.
dimension_data_example = generate_examples(1,
                                           midi_note,
                                           midi_velocity,
                                           note_length,
                                           render_length,
                                           engine,
                                           generator,
                                           mfcc_normaliser,
                                           rms_normaliser)

features, parameters = dimension_data_example[0]
# https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/recurrent_network.ipynb
# Parameters for the tensorflow graph.
learning_rate = 0.001
training_iters = 256
batch_size = 128
display_step = 10
number_hidden_1 = 128
number_hidden_2 = 128

# Network parameters:
# 14 * 181 - (amount of mfccs + rms value) * sample size
number_input = int(features.shape[0])

# 155 - amount of parameters
number_outputs = len(parameters)

x = tf.placeholder("float", [None, number_input])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([number_input, number_hidden_1])),
    'h2': tf.Variable(tf.random_normal([number_hidden_1, number_hidden_2])),
    'out': tf.Variable(tf.random_normal([number_hidden_2, number_outputs]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([number_hidden_1])),
    'b2': tf.Variable(tf.random_normal([number_hidden_2])),
    'out': tf.Variable(tf.random_normal([number_outputs]))
}

# Construct model
prediction = multilayer_perceptron(x, weights, biases)

x_original = tf.placeholder("float", [None, number_input])
x_from_y = tf.placeholder("float", [None, number_input])
cost = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(x_original, x_from_y))))
optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launching the graph
with tf.Session() as sess:

    sess.run(init)
    step = 1

    while step * batch_size < training_iters:

        train_batch = generate_examples(batch_size,
                                        midi_note,
                                        midi_velocity,
                                        note_length,
                                        render_length,
                                        engine,
                                        generator,
                                        mfcc_normaliser,
                                        rms_normaliser)
        split_train = map(list, zip(*train_batch))
        batch_x = split_train[0]

        setup = sess.partial_run_setup([prediction, optimiser],
                                       [x, x_original, x_from_y])

        pred = sess.partial_run(setup, prediction, feed_dict={x: batch_x})

        features_from_prediction = get_features(pred,
                                                midi_note,
                                                midi_velocity,
                                                note_length,
                                                render_length)

        sess.partial_run(setup, optimiser, feed_dict={x_original: batch_x,
                                                      x_from_y: features_from_prediction})

Error

Traceback (most recent call last):
  File "model.py", line 255, in <module>
    optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 276, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ['Tensor("Variable/read:0", shape=(2534, 128), dtype=float32)', 'Tensor("Variable_1/read:0", shape=(128, 128), dtype=float32)', 'Tensor("Variable_2/read:0", shape=(128, 155), dtype=float32)', 'Tensor("Variable_3/read:0", shape=(128,), dtype=float32)', 'Tensor("Variable_4/read:0", shape=(128,), dtype=float32)', 'Tensor("Variable_5/read:0", shape=(155,), dtype=float32)'] and loss Tensor("Sqrt:0", shape=(), dtype=float32).

Best Answer

The immediate error you are hitting:

No gradients provided for any variable, check your graph for ops that do not support gradients, between variables

is raised because there is no gradient path from your cost to your weights. The placeholders and the computation that sit between your weights and your cost happen outside the graph, so no path of gradients from the cost back to the weights exists.
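This is easy to verify in isolation. In the following sketch (hypothetical shapes), the cost is built purely from placeholders, so tf.gradients() returns None for the variable, which is exactly the condition that makes minimize() raise this ValueError:

import tensorflow as tf

x_original = tf.placeholder(tf.float32, [None, 4])
x_from_y = tf.placeholder(tf.float32, [None, 4])
w = tf.Variable(tf.random_normal([4, 4]))  # never appears in cost

# Same RMSE-style cost as the question, built only from placeholders.
cost = tf.sqrt(tf.reduce_mean(tf.square(x_original - x_from_y)))

print(tf.gradients(cost, [w]))  # [None] -> "No gradients provided for any variable"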

In other words, consider the setup:

Weights -> prediction -> get_features -> calculate cost.

Now, considering backpropagation, we can get the gradient of the cost, but we have no gradients from the cost back to get_features, or from get_features back to the prediction, because get_features is not part of the graph:

Weights <- prediction <-/- get_features <-/- calculate cost.

As a result, the weights can never learn. If you want this setup to work, you need to somehow establish a path from the cost back to the prediction, possibly by simulating the gradient of get_features on the backward pass of the graph. There may be a cleaner way, but I cannot think of one right now.
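For example, one common TF 1.x workaround along those lines is to pull the external computation into the graph with tf.py_func and attach a surrogate gradient via gradient_override_map. The sketch below is only an illustration: the identity (straight-through) gradient it uses is a made-up stand-in, and a real surrogate for get_features would have to map a gradient in feature space back to the shape of prediction:

import numpy as np
import tensorflow as tf

def py_func_with_grad(func, inp, Tout, grad_fn, name=None):
    # Wrap a Python function as a graph op and attach a custom gradient by
    # overriding the (normally non-differentiable) PyFunc op's gradient.
    grad_name = 'PyFuncGrad' + str(np.random.randint(0, 1 << 30))
    tf.RegisterGradient(grad_name)(grad_fn)
    with tf.get_default_graph().gradient_override_map({'PyFunc': grad_name}):
        return tf.py_func(func, inp, Tout, stateful=True, name=name)

def _identity_grad(op, grad):
    # Hypothetical surrogate: pass the incoming gradient straight through.
    # Only valid when input and output shapes match; a real surrogate for
    # get_features() would need a proper feature-to-prediction mapping.
    return grad

p = tf.placeholder(tf.float32, [None, 3])
w = tf.Variable(tf.ones([3, 3]))
pred = tf.matmul(p, w)
# External computation, now differentiable via the surrogate gradient.
feats = py_func_with_grad(lambda v: (v * 2.0).astype(np.float32),
                          [pred], [tf.float32], _identity_grad)[0]
loss = tf.reduce_mean(tf.square(feats))
print(tf.gradients(loss, [w]))  # no longer [None]

With a surrogate like this in place, minimize() has gradients to apply, though whether the model actually learns depends entirely on how well the surrogate approximates the true gradient of the external computation.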

Hope this helps!

Regarding python - TensorFlow: "No gradients provided for any variable" and partial_run, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42498876/
