tensorflow - Accumulating gradients in an Estimator with distribution strategies


To reduce the number of synchronizations in distributed training, I want to accumulate gradients locally first. It is like having multiple GPUs, but running them serially rather than in parallel.

I want to use this in the estimator.train loop with distribution strategies such as mirrored and collective all-reduce.
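For context, the basic idea on a single worker (a minimal sketch in plain TF 1.x with no distribution strategy; the toy model and names are only illustrative) is to sum gradients over several micro-batches and then apply their mean once:

import numpy as np
import tensorflow as tf

accum_steps = 4  # one optimizer step per 4 micro-batches

x = tf.placeholder(tf.float32, [None, 8])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, pred)

optimizer = tf.train.GradientDescentOptimizer(0.1)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)

# local accumulators, one per trainable variable
accum = [tf.Variable(tf.zeros_like(v), trainable=False) for v in tvars]
zero_op = tf.group(*[a.assign(tf.zeros_like(a)) for a in accum])
accum_op = tf.group(*[a.assign_add(g) for a, g in zip(accum, grads)])
apply_op = optimizer.apply_gradients(
    [(a / float(accum_steps), v) for a, v in zip(accum, tvars)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(zero_op)
    for _ in range(accum_steps):
        batch = {x: np.random.randn(16, 8).astype(np.float32),
                 y: np.random.randn(16, 1).astype(np.float32)}
        sess.run(accum_op, feed_dict=batch)
    sess.run(apply_op)  # one update with the mean of 4 micro-batch gradients

The hard part is doing the same thing inside estimator.train under a distribution strategy, which is what the implementation below attempts.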

Here is my implementation; please give me some feedback :)

First, since I need to run different ops in session.run(), I modified estimator.EstimatorSpec to carry extra ops. Second, there seems to be no clean way to create a local, non-shared variable on a local GPU inside a distribution strategy scope, so I had to hack up a variable creator scope.

Here is the hacked variable_creator function:

import tensorflow as tf
from tensorflow.python.ops import resource_variable_ops

def skip_all_scope_variable_creator(next_creator=None, on_device=None, **kwargs):
    #print("skip_all_scope_variable_creator:[{}]".format(kwargs))
    initial_value = kwargs.get("initial_value", None)
    trainable = kwargs.get("trainable", None)
    collections = kwargs.get("collections", None)
    validate_shape = kwargs.get("validate_shape", True)
    caching_device = kwargs.get("caching_device", None)
    name = kwargs.get("name", None)
    variable_def = kwargs.get("variable_def", None)
    dtype = kwargs.get("dtype", None)
    expected_shape = kwargs.get("expected_shape", None)
    import_scope = kwargs.get("import_scope", None)
    constraint = kwargs.get("constraint", None)
    use_resource = kwargs.get("use_resource", None)

    # pin the variable to the requested device, bypassing the strategy's creator
    with tf.device(on_device):
        return resource_variable_ops.ResourceVariable(
            initial_value=initial_value, trainable=trainable,
            collections=collections, validate_shape=validate_shape,
            caching_device=caching_device, name=name, dtype=dtype,
            constraint=constraint, variable_def=variable_def,
            import_scope=import_scope)

And here is the code in my model_fn() that creates the three ops:

loss = loss_from_model
optimizer = some_optimizer
tvars = tf.trainable_variables()

gradients = optimizer.compute_gradients(
    loss, tvars, colocate_gradients_with_ops=True)

accumulate_pass_num = FLAGS.pass_per_batch

if accumulate_pass_num > 1:
    accum_grads = []
    accum_vars = []

    reset_grad_ops = []
    accum_grad_ops = []
    for g, v in gradients:
        accum_vars.append(v)
        if g is not None:
            with tf.variable_creator_scope(lambda next_creator=None, **kwargs: skip_all_scope_variable_creator(next_creator, g.device, **kwargs)):
                print("create accum_grad for variable:{}".format(v.name))
                tmp_grad_on_device = tf.Variable(tf.zeros_like(g), trainable=False,
                                                 synchronization=tf.VariableSynchronization.ON_READ,
                                                 collections=[tf.GraphKeys.LOCAL_VARIABLES],
                                                 name='tmp_accum_grad')
                reset_one_grad_op = tf.assign(tmp_grad_on_device, g, name="reset_accumulated_gradient_op")
                reset_grad_ops.append(reset_one_grad_op)
                # assign_add returns the updated value, so reading accum_grads
                # later also triggers one accumulation via the data dependency
                accum_grad_on_device = tmp_grad_on_device.assign_add(g, name="accumulate_gradient")
                accum_grad_ops.append(accum_grad_on_device)
                accum_grads.append(accum_grad_on_device)
        else:
            accum_grads.append(None)

    accumulate_gradients_op = tf.group(*accum_grad_ops, name="grouped_accu_grad_op")
    reset_gradients_op = tf.group(*reset_grad_ops, name="grouped_reset_gradients_op")
    accum_grad_means = [tf.multiply(g, 1.0 / accumulate_pass_num) if g is not None else None
                        for g in accum_grads]
    accum_grads_vars = zip(accum_grad_means, accum_vars)
    minimize_op = optimizer.apply_gradients(
        accum_grads_vars, global_step=global_step, name="train")

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
train_op = tf.group(minimize_op, update_ops)
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op,
                                  accumulate_gradients_op=accumulate_gradients_op,
                                  reset_gradients_op=reset_gradients_op,
                                  accumulate_pass_num=accumulate_pass_num)

And estimator.train() is modified to run the different ops:

while not mon_sess.should_stop():
    if estimator_spec.accumulate_pass_num > 1:
        # reset the gradients first; the reset op assigns the current gradient,
        # so it counts as the first accumulation pass
        mon_sess.run([estimator_spec.reset_gradients_op])
        for _ in range(estimator_spec.accumulate_pass_num - 2):
            mon_sess.run([estimator_spec.accumulate_gradients_op])

    # train_op triggers one final accumulation through its data dependency
    _, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])

I tried this with the Transformer model in the official Google models repository, and the results were good.

My question is: is there a better way to do this?

Should I consider using tf.cond() to select which ops are returned from model_fn, so that Estimator and EstimatorSpec would not need to be modified? It seems difficult, though :(

Thanks a lot!

Best Answer

I think you can achieve this by passing the train_ops into the estimator. Calling TensorFlow ops on their own inside the estimator's model_fn has absolutely no effect: by design, model_fn is called only once per training run, so any op you merely create there without wiring it into the train_op will never execute during training. Beyond that, all tf.cond branches are evaluated (i.e. traced into the graph) during the model_fn call, although only the selected branch runs at each step. (You can verify this behavior with a simple conditional logging op; see the sketch below.) The keys to implementing gradient accumulation are:

  1. Wrap all of your ops in tf.cond, with tf.no_op as the false_fn.
  2. Let train_op = tf.group(*accum_ops, [conditional_minimize_op, reset_ops]), but enforce the execution order with control_dependencies, because tf.group does not care about order.
  3. Pass this fully loaded train_op to the EstimatorSpec.

Ops passed to the estimator_spec or to training_hooks, by contrast, are executed dynamically during training.
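To illustrate the trace-once-but-run-conditionally behavior described above, here is a minimal sketch of my own (assuming TF 1.14+ for tf.print; the names flag and orphan are illustrative):

import tensorflow as tf

step = tf.train.get_or_create_global_step()
increment = tf.assign_add(step, 1)
flag = tf.equal(step % 3, 0)

def true_fn():
    # tf.print is traced once when the graph is built, but only executes
    # on steps where flag evaluates to True
    with tf.control_dependencies([tf.print("boundary at step", step)]):
        return tf.identity(step)

out = tf.cond(flag, true_fn, lambda: tf.identity(step))
orphan = tf.print("never runs")  # created but never wired into anything we run

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(6):
        sess.run(out)  # prints only when step % 3 == 0
        sess.run(increment)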

Here is my code for fine-tuning BERT under a tight GPU memory budget:

# compute the batch gradient
grads = tf.gradients(loss, tvars)
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
# this is a list of sum(dy/dx) for each variable, paired with the tvars list.
# an element may be an IndexedSlices object, which does not support assigning,
# e.g. [g.assign(value) for g in grads]
# some elements are None, meaning y and x do not depend on each other;
# the None entries must be handled in Python, since tensorflow cannot convert None to 0.

# declare temp variables for the summation
sum_gradient = [tf.get_variable(name="sum_grads" + str(i), shape=tv.shape,
                                initializer=tf.zeros_initializer,
                                trainable=False,
                                dtype=tf.float32,
                                collections=[tf.GraphKeys.LOCAL_VARIABLES])
                for i, tv in enumerate(tvars)]
sum_ops = []
unused_variable_in_batch = []

# gradient accumulation
for i, gv in enumerate(grads):
    if gv is not None:
        sum_ops.append(sum_gradient[i].assign_add(gv, name="accumulate_gradient"))
    else:
        unused_variable_in_batch.append(sum_gradient[i])
        sum_gradient[i] = None

# NOTE: calling .assign_add alone does NOTHING in an estimator; every such op
# must be wrapped and wired into the train_op

def apply_accumulated_gradients(sums):
    # normalize the gradients
    normalize_ops = []
    for i, g in enumerate(sums):
        if g is not None:
            normalize_ops.append(sums[i].assign(tf.multiply(g, 1 / gradient_accmulation_multiplier)))
            # use assign to make sure the entry stays a variable; otherwise it becomes a Tensor
    with tf.control_dependencies(normalize_ops):
        minimize_op = optimizer.apply_gradients(zip(sums, tvars), global_step=global_step)
    return tf.group(minimize_op, *normalize_ops, name="apply_accumulated_gradients")

train_op = tf.cond(tf.math.equal(global_step % gradient_accmulation_multiplier, 0),
                   lambda: apply_accumulated_gradients(sum_gradient),
                   lambda: optimizer.apply_gradients(zip([None for _ in grads], tvars), global_step=global_step))

# reset the accumulation when necessary
def reset():
    counter = 0
    for i, s in enumerate(sum_gradient):
        if s is None:
            # restore the reference from None back to the original variable
            sum_gradient[i] = unused_variable_in_batch[counter]
            counter += 1
    return tf.group([s.assign(tf.zeros_like(s)) for s in sum_gradient])

with tf.control_dependencies([train_op]):
    reset_ops = tf.cond(tf.math.equal(do_update, 1.),
                        reset,
                        tf.no_op)
    # both branches must have identical structures: [op1, op2, ...] || no_op
    # is not a valid pair of cond branches, so tf.group converts all the resets
    # into one op that matches no_op: tf.group() || no_op

# increment the global step
new_global_step = global_step + 1
train_op = tf.group(*sum_ops, [train_op, global_step.assign(new_global_step), reset_ops])

logging_hook = tf.train.LoggingTensorHook({"accuracy": "acc"},
                                          every_n_iter=gradient_accmulation_multiplier)
output_spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    training_hooks=[logging_hook, accumulation_hook]  # wrap with a list
)

I applied clipping to the per-batch gradients and simply took their mean. This approach worked for me, but I recommend keeping a close eye on the loss behavior on your own dataset.

Also, regarding tf.cond(tf.math.equal(do_update, 1.), ..., ...): do_update is a variable managed by a hook, and it takes the value 1 once every gradient_accmulation_multiplier steps, so this statement has exactly the same effect as tf.math.equal(global_step % gradient_accmulation_multiplier, 0). It is just another way of writing it.

The code for the hook is as follows:

from tensorflow.python.training import session_run_hook

class GradientAccumulationHook(session_run_hook.SessionRunHook):
    """
    Sets a given tf.Variable to 1 once every `frequency` steps.
    """

    def __init__(self, frequency, variable):
        self._step = 0
        self._flag = 0.
        self._freq = frequency
        self._input_placeholder = tf.placeholder(tf.float32)
        self.assign_op = variable.assign(self._input_placeholder)

    def begin(self):
        # a hook may modify the graph in begin(); after this the graph is finalized
        self._step = tf.train.get_global_step()

    def before_run(self, run_context):
        step = run_context.session.run(self._step)  # evaluate the tensor to get the step number
        self._flag = 1. if step % self._freq == 0 and step != 0 else 0.
        run_context.session.run(self.assign_op, feed_dict={self._input_placeholder: self._flag})
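For completeness, the hook must be constructed inside model_fn (the placeholder and assign op it creates have to exist before the graph is finalized), together with the do_update variable it manages. The wiring below is my assumption rather than part of the original answer:

# hypothetical wiring; do_update, gradient_accmulation_multiplier, and
# GradientAccumulationHook are the names used in the answer above
do_update = tf.get_variable(name="do_update", shape=[], dtype=tf.float32,
                            initializer=tf.zeros_initializer,
                            trainable=False,
                            collections=[tf.GraphKeys.LOCAL_VARIABLES])
accumulation_hook = GradientAccumulationHook(
    frequency=gradient_accmulation_multiplier, variable=do_update)
# do_update then drives the tf.cond branches above, and accumulation_hook is
# passed to the EstimatorSpec via training_hooks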

For more on accumulating gradients in an Estimator with distribution strategies, see the similar question on Stack Overflow: https://stackoverflow.com/questions/54735106/
