
python - Why does GPU memory usage differ so much between GPUs when training with multiple GPUs in TensorFlow?


I am using TensorFlow 1.4.0 and training on two GPUs.

Why do the two GPUs use such different amounts of memory? Here is the nvidia-smi output:

+-------------------------------+----------------------+----------------------+
| 4 Tesla K80 On | 00000000:00:1B.0 Off | 0 |
| N/A 50C P0 70W / 149W | 8538MiB / 11439MiB | 100% E. Process |
+-------------------------------+----------------------+----------------------+
| 5 Tesla K80 On | 00000000:00:1C.0 Off | 0 |
| N/A 42C P0 79W / 149W | 4442MiB / 11439MiB | 48% E. Process |
+-------------------------------+----------------------+----------------------+

GPU 4 uses about twice as much memory as GPU 5. I expected the two GPUs to use roughly the same amount. Why does this happen? Can anyone help me? Thanks a lot!

Here is the code that builds the per-GPU towers and averages the gradients, together with the two helper functions:

tower_grads = []
lossList = []
accuracyList = []

# Build one model replica (tower) per GPU.
for gpu in range(NUM_GPUS):
    with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
        print '============ GPU {} ============'.format(gpu)
        imageBatch, labelBatch, epochNow = read_and_decode_TFRecordDataset(
            args.tfrecords, BATCH_SIZE, EPOCH_NUM)
        identityPretrainModel = identity_pretrain_inference.IdenityPretrainNetwork(
            IS_TRAINING, BN_TRAINING, CLASS_NUM, DROPOUT_TRAINING)
        logits = identityPretrainModel.inference(imageBatch)
        losses = identityPretrainModel.cal_loss(logits, labelBatch)
        accuracy = identityPretrainModel.cal_accuracy(logits, labelBatch)
        optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE)
        grads_and_vars = optimizer.compute_gradients(losses)
        lossList.append(losses)
        accuracyList.append(accuracy)
        tower_grads.append(grads_and_vars)

# Average the per-tower gradients and apply a single update.
grads_and_vars = average_gradients(tower_grads)
train = optimizer.apply_gradients(grads_and_vars)
global_step = tf.train.get_or_create_global_step()
incr_global_step = tf.assign(global_step, global_step + 1)
losses = sum(lossList) / NUM_GPUS
accuracy = sum(accuracyList) / NUM_GPUS



def assign_to_device(device, ps_device='/cpu:0'):
    # Device function: variable-related ops (PS_OPS) go on the parameter-server
    # device; all other ops go on the given worker device.
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op in PS_OPS:
            return ps_device
        else:
            return device
    return _assign
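PS_OPS is not defined in the snippet. In the TensorFlow-Examples multigpu_cnn.py that this code follows (see the answer below), it is the list of variable op types that should live on the parameter-server device; a minimal sketch under that assumption:

# Assumed definition, following the TensorFlow-Examples multi-GPU tutorial:
# op types that should be pinned to the parameter-server device (the CPU here).
PS_OPS = ['Variable', 'VariableV2', 'AutoReloadVariable']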


def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        #   ((grad0_gpu0, var0_gpu0), ..., (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add a 0th dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)

            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)

        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)

        # Keep in mind that the variables are redundant because they are shared
        # across towers, so we just return the first tower's pointer to the
        # variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
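As a quick illustration of what average_gradients computes, here is a toy example with hypothetical gradient values (not from the question), runnable under TensorFlow 1.x:

import tensorflow as tf

# Hypothetical gradients for the same variable from two towers.
g_gpu0 = tf.constant([1.0, 2.0])
g_gpu1 = tf.constant([3.0, 4.0])

# Stack along a new leading 'tower' axis, then average over it,
# exactly as average_gradients does per variable.
stacked = tf.concat([tf.expand_dims(g_gpu0, 0),
                     tf.expand_dims(g_gpu1, 0)], 0)  # shape (2, 2)
mean_grad = tf.reduce_mean(stacked, 0)

with tf.Session() as sess:
    print(sess.run(mean_grad))  # -> [2. 3.]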

Best Answer

The multi-GPU code is based on multigpu_cnn.py. The cause is that the enclosing with tf.device('/cpu:0'): at line 124 of that example was missing here! Without it, all ops are placed on the first GPU (GPU 4 in the listing above), so its memory consumption is much higher than the others'.
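A minimal sketch of the fix, assuming the same identifiers as the code in the question: wrap the tower-building loop in a CPU device scope, so that variables default to the CPU and only the ops pinned by assign_to_device land on each GPU.

with tf.device('/cpu:0'):  # the missing outer scope: default placement on the CPU
    for gpu in range(NUM_GPUS):
        with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
            # ... build the tower exactly as in the question ...
            pass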

Regarding "python - Why does GPU memory usage differ so much between GPUs when training with multiple GPUs in TensorFlow?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54925073/
