
tensorflow - How do I force TensorFlow to use all available GPUs?


I have an 8-GPU cluster, and when I run the piece of TensorFlow code from Kaggle pasted below, it only uses one GPU rather than all 8. I confirmed this with nvidia-smi.

# Imports inferred from usage in the snippet below (TF 1.x / standalone Keras era code)
import os
import sys
import random
import warnings

import numpy as np
import cv2
import tensorflow as tf
import matplotlib.pyplot as plt
from tqdm import tqdm
from skimage.io import imread, imshow
from skimage.transform import resize
from keras.models import Model
from keras.layers import Input, Lambda, Conv2D, Conv2DTranspose, MaxPooling2D, concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend as K
from keras import optimizers

# Set some parameters
IMG_WIDTH = 256
IMG_HEIGHT = 256
IMG_CHANNELS = 3
TRAIN_IM = './train_im/'
TRAIN_MASK = './train_mask/'
TEST_PATH = './test/'

warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
num_training = len(os.listdir(TRAIN_IM))
num_test = len(os.listdir(TEST_PATH))
# Get and resize train images
X_train = np.zeros((num_training, IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
Y_train = np.zeros((num_training, IMG_HEIGHT, IMG_WIDTH, 1), dtype=np.bool)
print('Getting and resizing train images and masks ... ')
sys.stdout.flush()

#load training images
for count, filename in tqdm(enumerate(os.listdir(TRAIN_IM)), total=num_training):
    img = imread(os.path.join(TRAIN_IM, filename))[:,:,:IMG_CHANNELS]
    img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
    X_train[count] = img
    name, ext = os.path.splitext(filename)
    mask_name = name + '_mask' + ext
    mask = cv2.imread(os.path.join(TRAIN_MASK, mask_name))[:,:,:1]
    mask = resize(mask, (IMG_HEIGHT, IMG_WIDTH))
    Y_train[count] = mask

# Check if training data looks all right
ix = random.randint(0, num_training-1)
print(ix)
imshow(X_train[ix])
plt.show()
imshow(np.squeeze(Y_train[ix]))
plt.show()
# Define IoU metric
def mean_iou(y_true, y_pred):
    prec = []
    for t in np.arange(0.5, 1.0, 0.05):
        y_pred_ = tf.to_int32(y_pred > t)
        score, up_opt = tf.metrics.mean_iou(y_true, y_pred_, 2)
        K.get_session().run(tf.local_variables_initializer())
        with tf.control_dependencies([up_opt]):
            score = tf.identity(score)
        prec.append(score)
    return K.mean(K.stack(prec), axis=0)

# Build U-Net model
inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
s = Lambda(lambda x: x / 255) (inputs)
width = 64
c1 = Conv2D(width, (3, 3), activation='relu', padding='same') (s)
c1 = Conv2D(width, (3, 3), activation='relu', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)

c2 = Conv2D(width*2, (3, 3), activation='relu', padding='same') (p1)
c2 = Conv2D(width*2, (3, 3), activation='relu', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)

c3 = Conv2D(width*4, (3, 3), activation='relu', padding='same') (p2)
c3 = Conv2D(width*4, (3, 3), activation='relu', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)

c4 = Conv2D(width*8, (3, 3), activation='relu', padding='same') (p3)
c4 = Conv2D(width*8, (3, 3), activation='relu', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)

c5 = Conv2D(width*16, (3, 3), activation='relu', padding='same') (p4)
c5 = Conv2D(width*16, (3, 3), activation='relu', padding='same') (c5)

u6 = Conv2DTranspose(width*8, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(width*8, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(width*8, (3, 3), activation='relu', padding='same') (c6)

u7 = Conv2DTranspose(width*4, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(width*4, (3, 3), activation='relu', padding='same') (u7)
c7 = Conv2D(width*4, (3, 3), activation='relu', padding='same') (c7)

u8 = Conv2DTranspose(width*2, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(width*2, (3, 3), activation='relu', padding='same') (u8)
c8 = Conv2D(width*2, (3, 3), activation='relu', padding='same') (c8)

u9 = Conv2DTranspose(width, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(width, (3, 3), activation='relu', padding='same') (u9)
c9 = Conv2D(width, (3, 3), activation='relu', padding='same') (c9)

outputs = Conv2D(1, (1, 1), activation='sigmoid') (c9)

model = Model(inputs=[inputs], outputs=[outputs])

sgd = optimizers.SGD(lr=0.03, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=[mean_iou])
model.summary()

# Fit model
earlystopper = EarlyStopping(patience=20, verbose=1)
checkpointer = ModelCheckpoint('nuclei_only.h5', verbose=1, save_best_only=True)
results = model.fit(X_train, Y_train, validation_split=0.05, batch_size = 32, verbose=1, epochs=100,
callbacks=[earlystopper, checkpointer])
I would like to run this code on all available GPUs, whether with mxnet or some other approach, but I am not sure how to do that. Every resource I have found only shows how to do it on the MNIST dataset; I have my own dataset that I read in a different way, so I am not sure how to modify the code accordingly.

Best answer

TL;DR: use tf.distribute.MirroredStrategy() as a scope, like this:

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    [...create model as you would otherwise...]

If you do not specify any arguments, tf.distribute.MirroredStrategy() will use all available GPUs. You can also specify which ones to use if you want, like this: mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"]).

Refer to the Distributed training with TensorFlow guide for implementation details and other strategies.
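As a concrete illustration, here is a minimal sketch of how the model construction and compile() call move inside the strategy scope, assuming TF 2.x with tf.keras; the tiny two-layer model is only a stand-in for the U-Net defined in the question, and X_train/Y_train are the arrays built there.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # with no arguments, uses every visible GPU
print('Number of replicas in sync:', strategy.num_replicas_in_sync)

with strategy.scope():
    # Stand-in for the U-Net above: any tf.keras model built inside the scope gets mirrored.
    inputs = tf.keras.Input((256, 256, 3))
    x = tf.keras.layers.Conv2D(8, 3, padding='same', activation='relu')(inputs)
    outputs = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.03, momentum=0.9, nesterov=True),
                  loss='binary_crossentropy')

# fit() is called outside the scope; each batch is split across the replicas automatically.
# model.fit(X_train, Y_train, validation_split=0.05, batch_size=32, epochs=100)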

Older answer (now obsolete: deprecated, then removed as of April 1, 2020): use multi_gpu_model() from Keras.

TS;WM :

TensorFlow 2.0 now has the tf.distribute module, "a library for running a computation across multiple devices". It is built on the concept of "distribution strategies": you specify a distribution strategy and then use it as a scope, and TensorFlow splits the input, parallelizes the computation, and joins the outputs essentially transparently. Backpropagation goes through the same machinery. Since all the processing now happens behind the scenes, you may want to familiarize yourself with the available strategies and their parameters, because they can greatly affect your training speed. For example, do you want the variables to live on the CPU? Then use tf.distribute.experimental.CentralStorageStrategy(). See the Distributed training with TensorFlow guide for more information.
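As a rough sketch of how the strategy choice stays a one-line change, and how the global batch size relates to the number of replicas (the per-GPU batch of 32 is simply carried over from the question):

import tensorflow as tf

# One mirrored copy of the variables per GPU:
strategy = tf.distribute.MirroredStrategy()
# Or keep a single copy of the variables on the CPU instead:
# strategy = tf.distribute.experimental.CentralStorageStrategy()

# The batch size passed to fit() is the global batch and is divided among the replicas,
# so scale it if you want a fixed per-GPU batch (32 per GPU here, as in the question).
global_batch_size = 32 * strategy.num_replicas_in_sync

with strategy.scope():
    ...  # build and compile the model exactly as in the TL;DR above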

Older answer (now outdated, kept here for reference):

From the Tensorflow Guide:

If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default.



If you want to use multiple GPUs, unfortunately you have to manually specify which tensors to place on each GPU, for example:
with tf.device('/device:GPU:2'):

More information is available in the Tensorflow Guide on Using Multiple GPUs.
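For what it's worth, a minimal sketch of that manual placement in TF 1.x graph mode (the device indices are assumptions; adjust them to the GPUs you actually have):

import tensorflow as tf

a = tf.random_normal([1024, 1024])
b = tf.random_normal([1024, 1024])

with tf.device('/device:GPU:0'):
    c0 = tf.matmul(a, b)      # this op is pinned to GPU 0

with tf.device('/device:GPU:1'):
    c1 = tf.matmul(a, b)      # this op is pinned to GPU 1

total = c0 + c1               # placed automatically

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(total))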

As for how to distribute a network across multiple GPUs, there are two main approaches.
  • You distribute the network layer-wise across the GPUs. This is easier to implement, but it does not bring much of a performance benefit, because the GPUs spend time waiting for each other to finish their operations.
  • You create a separate copy of the network, called a "tower", on each GPU. When you feed this eight-tower network, you split each input batch into 8 parts and distribute them, let the towers run the forward pass, then sum the gradients and do the backward pass. This yields an almost-linear speedup with the number of GPUs. It is, however, much harder to implement, because you also have to handle the complications around batch normalization, and it is very advisable to make sure you randomize your batches properly. There is a nice tutorial here. You should also look at the Inception V3 code referenced there for ideas on how to build such a thing, in particular _tower_loss(), _average_gradients(), and the part of train() that starts with for i in range(FLAGS.num_gpus):.

  • If you want to try Keras, it now significantly simplifies multi-GPU training with multi_gpu_model(), which does all the heavy lifting for you.
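For reference, a hedged sketch of what the multi_gpu_model() route would have looked like for the model in the question, assuming an older Keras/TensorFlow release where keras.utils.multi_gpu_model still exists, and reusing model, sgd, mean_iou, X_train, Y_train and the callbacks defined above:

from keras.utils import multi_gpu_model

# Replicate the single-GPU U-Net across the 8 GPUs of the cluster.
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=[mean_iou])

# Each batch of 32 is silently split into 8 sub-batches of 4, one per GPU.
results = parallel_model.fit(X_train, Y_train, validation_split=0.05,
                             batch_size=32, verbose=1, epochs=100,
                             callbacks=[earlystopper, checkpointer])

One caveat to keep in mind: callbacks such as ModelCheckpoint then operate on the parallel wrapper rather than on the original single-GPU model, so saving and reloading weights needs a little care.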

Regarding tensorflow - How do I force TensorFlow to use all available GPUs?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50032721/
