
python - Exact GP regression with TFP in eager mode


I'm trying to perform exact GP regression in TF 2.0 eager mode, based on the example from https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Gaussian_Process_Regression_In_TFP.ipynb:

amplitude = (
    np.finfo(np.float64).tiny +
    tf.nn.softplus(tf.Variable(initial_value=1., name='amplitude', dtype=np.float64))
)
length_scale = (
    np.finfo(np.float64).tiny +
    tf.nn.softplus(tf.Variable(initial_value=1., name='length_scale', dtype=np.float64))
)
observation_noise_variance = (
    np.finfo(np.float64).tiny +
    tf.nn.softplus(tf.Variable(initial_value=1e-6,
                               name='observation_noise_variance',
                               dtype=np.float64))
)

kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)

gp = tfd.GaussianProcess(
    kernel=kernel,
    index_points=tf.expand_dims(x, 1),
    observation_noise_variance=observation_noise_variance
)

neg_log_likelihood = lambda: -gp.log_prob(y)

optimizer = tf.optimizers.Adam(learning_rate=.01)

num_iters = 1000
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
    lls_[i] = neg_log_likelihood()
    optimizer.minimize(neg_log_likelihood,
                       var_list=[amplitude, length_scale, observation_noise_variance])

But the optimization fails with:

'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_in_graph_mode'

If I instead move amplitude, length_scale and observation_noise_variance out into separate tf.Variables, for example:

amplitude = tf.Variable(initial_value=1., name='amplitude', dtype=np.float64)
amplitude_ = (
    np.finfo(np.float64).tiny +
    tf.nn.softplus(amplitude)
)

then the optimization fails with:

ValueError: No gradients provided for any variable: ['amplitude:0', 'length_scale:0', 'observation_noise_variance:0'].

What am I doing wrong?

Best answer

There is currently a known issue with eager mode, discussed in this thread:

https://groups.google.com/a/tensorflow.org/d/msg/tfprobability/IlhL-fcv3yc/jpQc4ogcFwAJ

The first error appears because var_list is given the softplus-transformed EagerTensors rather than the underlying tf.Variable objects; in the second attempt the softplus is applied once, outside the loss closure, so the loss no longer depends on the variables and no gradients can be computed. The workaround is to use a GradientTape explicitly and apply the transform inside the loss function:

amplitude_ = tf.Variable(initial_value=1., name='amplitude_', dtype=np.float64)
length_scale_ = tf.Variable(initial_value=1., name='length_scale_', dtype=np.float64)
observation_noise_variance_ = tf.Variable(initial_value=1e-6,
                                          name='observation_noise_variance_',
                                          dtype=np.float64)

@tf.function
def neg_log_likelihood():
    # Apply the softplus transform inside the loss so gradients trace back
    # to the unconstrained variables.
    amplitude = np.finfo(np.float64).tiny + tf.nn.softplus(amplitude_)
    length_scale = np.finfo(np.float64).tiny + tf.nn.softplus(length_scale_)
    observation_noise_variance = np.finfo(np.float64).tiny + tf.nn.softplus(observation_noise_variance_)

    kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)

    gp = tfd.GaussianProcess(
        kernel=kernel,
        index_points=tf.expand_dims(x, 1),
        observation_noise_variance=observation_noise_variance
    )

    return -gp.log_prob(y)

optimizer = tf.optimizers.Adam(learning_rate=.01)

num_iters = 1000

nlls = np.zeros(num_iters, np.float64)
for i in range(num_iters):
    # Record the forward pass on a GradientTape and apply the gradients
    # to the raw (unconstrained) variables.
    with tf.GradientTape() as tape:
        loss = neg_log_likelihood()
    nlls[i] = loss
    grads = tape.gradient(loss, [amplitude_, length_scale_, observation_noise_variance_])
    optimizer.apply_gradients(zip(grads, [amplitude_, length_scale_, observation_noise_variance_]))
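A related note: once the transform lives inside the loss closure, optimizer.minimize (the pattern from the original attempt) should also work in eager mode, provided var_list contains the raw tf.Variable objects rather than the transformed tensors. A minimal sketch, reusing the amplitude_, length_scale_, observation_noise_variance_ variables and the neg_log_likelihood function defined above:

optimizer = tf.optimizers.Adam(learning_rate=.01)
for i in range(num_iters):
    # minimize() builds its own GradientTape around the callable loss, so the
    # variables must be used inside the computation the callable performs.
    optimizer.minimize(neg_log_likelihood,
                       var_list=[amplitude_, length_scale_, observation_noise_variance_])

Newer TFP releases also provide tfp.util.TransformedVariable, which attaches a positivity constraint (e.g. a tfb.Softplus bijector) to the variable itself and avoids the manual softplus entirely.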

Regarding "python - Exact GP regression with TFP in eager mode", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/57493949/
