
python - Issues with "Regression with Probabilistic Layers in TensorFlow Probability"

Reposted. Author: 行者123. Updated: 2023-12-03 23:49:04

I'm running into a problem with tfp.layers.DistributionLambda. I'm new to TF and struggling to get TensorFlow working. Can someone offer some insight into how to set the parameters of the output distribution?

Context:

The TFP team wrote a tutorial, Regression with Probabilistic Layers in TensorFlow Probability, which builds the following model:

# Build model.
model = tfk.Sequential([
    tf.keras.layers.Dense(1 + 1),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:]))),
])

My question:

It uses tfp.layers.DistributionLambda to output a normal distribution, but it's not clear to me how the parameters of tfd.Normal (mean/loc and standard deviation/scale) are set, so I can't change the Normal to a Gamma distribution. I tried the following, but it didn't work (the predicted distribution parameters were nan).
def dist_output_layer(t, softplus_scale=0.05):
    """Create a distribution with a learnable mean and variance."""
    mean = t[..., :1]
    std_dev = 1e-3 + tf.math.softplus(softplus_scale * mean)

    # Moment matching: convert mean / std. dev. to Gamma parameters.
    alpha = (mean / std_dev) ** 2
    beta = alpha / mean

    return tfd.Gamma(concentration=alpha,
                     rate=beta)

# Build model.
model = tf.keras.Sequential([
    # "By using a deeper neural network and introducing nonlinear
    # activation functions, however, we can learn more complicated
    # functional dependencies!"
    tf.keras.layers.Dense(20, activation="relu"),
    # Two neurons here because of the output distribution's mean and
    # std. deviation.
    tf.keras.layers.Dense(1 + 1),
    tfp.layers.DistributionLambda(dist_output_layer),
])
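The nan can be traced with plain arithmetic. This stdlib sketch (the function names are mine, not from the original post) mirrors the moment matching in dist_output_layer above and shows why it breaks:

```python
import math

def softplus(x):
    # softplus(x) = log(1 + e^x): always positive, like tf.math.softplus.
    return math.log1p(math.exp(x))

def moment_match_gamma(mean, softplus_scale=0.05):
    # Mirrors dist_output_layer above with plain floats.
    std_dev = 1e-3 + softplus(softplus_scale * mean)
    alpha = (mean / std_dev) ** 2   # concentration: a square, so always >= 0
    beta = alpha / mean             # rate: takes the SIGN of the mean
    return alpha, beta

# A Dense layer with no activation can output any real value, so the
# mean slice t[..., :1] can go negative -- and then the rate is negative,
# which is outside tfd.Gamma's parameter space and yields nan.
print(moment_match_gamma(2.0))    # both parameters positive: valid
print(moment_match_gamma(-2.0))   # negative rate: invalid Gamma
```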

Many thanks.

Best answer

Honestly, there is a lot to say about the code snippet you pasted from Medium.
Still, I hope you'll find my comments below somewhat useful.

# Build model.
model = tfk.Sequential([

    # The first layer is a Dense layer with 2 units, one for each of the
    # parameters that will be learnt (see the next layer). Its implied
    # output shape is (batch_size, 2).
    # Note that this Dense layer has no activation function, as we want
    # any real value; its outputs are used to parameterize the Normal
    # distribution in the following layer.
    tf.keras.layers.Dense(1 + 1),

    # The next layer is a DistributionLambda that encapsulates a Normal
    # distribution. DistributionLambda takes a function in its
    # constructor, and this function should take the output tensor from
    # the previous layer as its input (the Dense layer commented above).
    # The goal is to learn the 2 parameters of the distribution: loc (the
    # mean) and scale (the standard deviation). For this, a lambda
    # construct is used. The ellipsis you can see in the loc and scale
    # arguments (the 3 dots) stands for the batch dimensions. Also note
    # that scale (the standard deviation) cannot be negative; the softplus
    # function is used to make sure the learnt scale parameter never goes
    # negative.
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(0.05 * t[..., 1:]))),
])

Regarding python - Issues with "Regression with Probabilistic Layers in TensorFlow Probability", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60415629/
