
TensorFlow aggregation methods for optimizers


I can't find any documentation on the aggregation method used by TensorFlow optimizers.

I have the following line of code:

train_op = optimizer.minimize(loss, global_step=batch, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)

However, this argument can also be set to
tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N

Does anyone know what it does? (All I know is that when I use the default with an LSTM, there is not enough memory to run.)

Best answer

For AggregationMethod, EXPERIMENTAL_ACCUMULATE_N is to ADD_N (the DEFAULT) as accumulate_n is to add_n: add_n waits until all of its arguments are available before doing any summation, while accumulate_n sums inputs as soon as they become available. This can potentially save memory, but it has some finicky shape-information requirements, because its current implementation needs to create a temporary variable.
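
As a minimal sketch (TensorFlow 1.x) of the two primitives these aggregation methods map onto: the tensors a, b, and c below are made-up stand-ins for the per-path gradients of a single variable.

import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = tf.constant([5.0, 6.0])

# add_n materializes every input before summing (ADD_N / DEFAULT behavior).
total_add_n = tf.add_n([a, b, c])

# accumulate_n adds each input into a temporary accumulator as soon as that
# input is ready, so the shape must be fully known
# (EXPERIMENTAL_ACCUMULATE_N behavior).
total_accumulate_n = tf.accumulate_n([a, b, c], shape=[2])

with tf.Session() as sess:
    print(sess.run([total_add_n, total_accumulate_n]))  # both: [9., 12.]

Both produce the same sum; the difference is only in when the inputs must all be resident in memory at once.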

There is some documentation in the comments:

# The benefit of using AccumulateN is that its inputs can be combined
# in any order and this can allow the expression to be evaluated with
# a smaller memory footprint. When used with gpu_allocator_retry,
# it is possible to compute a sum of terms which are much larger than
# total GPU memory.
# AccumulateN can currently only be used if we know the shape for
# an accumulator variable. If this is not known, or if we only have
# 2 grads then we fall through to the "tree" case below.
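
Tying this back to the question, here is a hedged usage sketch (TensorFlow 1.x): the toy linear model, placeholders, and optimizer below are invented for illustration, and only the aggregation_method argument mirrors the original code.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w", shape=[8, 1])
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

batch = tf.train.get_or_create_global_step()
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(
    loss,
    global_step=batch,
    # Sum gradients incrementally to reduce peak memory; TensorFlow falls
    # back to the tree method when an accumulator shape is unknown or there
    # are only 2 grads, per the comment quoted above.
    aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)

Whether this actually lowers peak memory depends on the graph, but for cases like the LSTM in the question, EXPERIMENTAL_ACCUMULATE_N is worth trying when the default runs out of memory.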

For more on TensorFlow aggregation methods for optimizers, see the similar question on Stack Overflow: https://stackoverflow.com/questions/44000781/
