
python - Why do automatic differentiation and GradientTape need to use a context manager?

Reposted · Author: 行者123 · Updated: 2023-12-01 08:10:08

A context manager lets you combine two related operations into one. For example:

with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')

The code above is equivalent to:

file = open('some_file', 'w')
try:
    file.write('Hola!')
finally:
    file.close()
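The open/close pairing above can be reproduced by hand: any object that defines `__enter__` and `__exit__` works in a `with` statement. A minimal sketch (the `ManagedFile` class here is a hypothetical illustration, not part of the standard library):

```python
class ManagedFile:
    """A hand-rolled context manager equivalent to open(...)."""

    def __init__(self, name, mode):
        self.name = name
        self.mode = mode
        self.file = None

    def __enter__(self):
        # Runs at the start of the with block; its return value
        # is bound to the name after "as".
        self.file = open(self.name, self.mode)
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs when the block exits, even if an exception was raised,
        # mirroring the try/finally version above.
        if self.file:
            self.file.close()

with ManagedFile('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
```

After the block, `opened_file` is guaranteed to be closed, just as with the try/finally form.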

But at https://www.tensorflow.org/tutorials/eager/custom_training_walkthrough#define_the_loss_and_gradient_function I found:

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)

What is this equivalent to?

Best Answer

I'm not a Python expert, but I believe `with` is driven by the `__enter__` and `__exit__` methods (https://book.pythontips.com/en/latest/context_managers.html). For tf.GradientTape, the `__enter__` method is:

def __enter__(self):
    """Enters a context inside which operations are recorded on this tape."""
    self._push_tape()
    return self

https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/eager/backprop.py#L801-L804

and the `__exit__` method is:

def __exit__(self, typ, value, traceback):
    """Exits the recording context, no further operations are traced."""
    if self._recording:
        self._pop_tape()

https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/eager/backprop.py#L806-L809

So

with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)

is roughly equivalent to:

tape = tf.GradientTape()
tape._push_tape()
try:
    loss_value = loss(model, inputs, targets)
finally:
    if tape._recording:
        tape._pop_tape()

(Note that `_push_tape` and `_pop_tape` are private methods; the `with` statement is the supported way to delimit the recording window.)

Regarding "python - Why do automatic differentiation and GradientTape need to use a context manager?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55310671/
