
python - Forward pass becomes 10000x slower after iterating for a while


I implemented a simple deconv network, modeled on PyTorch's official DCGAN tutorial, and I repeatedly pass a zeros tensor through it. After a while, the time per forward pass slows down dramatically. I would like to know what causes this and how to fix it.

Code:

import torch
import torch.nn as nn
import time

# JUST TO MEASURE TIME
class Timer:
    def __init__(self, msg):
        self.msg = msg

    def __enter__(self):
        self.start = time.process_time()
        return self

    def __exit__(self, *args):
        self.end = time.process_time()
        self.interval = self.end - self.start
        print('{}: {:.5f}'.format(self.msg, self.interval))

device = torch.device("cuda")

ngf, nc, nz, batchSize = 64, 1, 6, 1<<16
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 4 x 4
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 8 x 8
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 16 x 16
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 32 x 32
        )

    def forward(self, input):
        return self.main(input)

# Create the generator
netG = Generator().to(device)

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)

netG.apply(weights_init)

# torch.backends.cudnn.benchmark=True

while True:
    with Timer('Time elapsed'):
        with torch.no_grad():
            netG(torch.zeros([batchSize, nz, 1, 1], device=device))

Result:

Time elapsed: 0.02309
Time elapsed: 0.00072
Time elapsed: 0.00208
Time elapsed: 0.00128
Time elapsed: 0.00119
Time elapsed: 0.00153
Time elapsed: 0.00176
Time elapsed: 0.00170
Time elapsed: 0.00185
Time elapsed: 0.00188
Time elapsed: 0.00191
Time elapsed: 0.00190
Time elapsed: 0.00171
Time elapsed: 0.00176
Time elapsed: 0.00167
Time elapsed: 0.00120
Time elapsed: 0.00168
Time elapsed: 0.00169
Time elapsed: 0.00166
Time elapsed: 0.00167
Time elapsed: 0.00171
Time elapsed: 0.00168
Time elapsed: 0.00168
Time elapsed: 0.00168
Time elapsed: 0.00169
Time elapsed: 0.00177
Time elapsed: 0.00173
Time elapsed: 0.00176
Time elapsed: 0.00173
Time elapsed: 0.00171
Time elapsed: 0.00168
Time elapsed: 0.00173
Time elapsed: 0.00168
Time elapsed: 0.00178
Time elapsed: 0.00169
Time elapsed: 0.00171
Time elapsed: 0.00168
Time elapsed: 0.00169
Time elapsed: 0.00169
Time elapsed: 0.00173
Time elapsed: 0.00154
Time elapsed: 0.00170
Time elapsed: 0.00167
Time elapsed: 0.00224
Time elapsed: 0.00117
Time elapsed: 0.00175
Time elapsed: 0.00168
Time elapsed: 0.00173
Time elapsed: 0.00169
Time elapsed: 12.62760
Time elapsed: 12.71425
Time elapsed: 12.71379
Time elapsed: 12.71846
Time elapsed: 12.71909
Time elapsed: 12.71898
Time elapsed: 12.72288
Time elapsed: 12.72157
Time elapsed: 12.72226
Time elapsed: 12.72456
Time elapsed: 12.72350
Time elapsed: 12.72480
Time elapsed: 12.72644
Time elapsed: 12.72337
Time elapsed: 12.72424
Time elapsed: 12.72538
Time elapsed: 12.72533
Time elapsed: 12.72510
Time elapsed: 12.72507
Time elapsed: 12.72806
Time elapsed: 12.72865
Time elapsed: 12.72764
Time elapsed: 12.72431

  • My GPU: Titan RTX
  • PyTorch version: 1.4
  • Python version: 3.7

Best Answer

I tried the same code on my Titan RTX and got exactly the same behavior.

All GPU calls are asynchronous (as jodag pointed out in the comments) and only synchronize when a dependency requires it. So to test this, I changed the code slightly so that the network's output is actually used, which creates a dependency: the output is now needed before the next iteration can start.

while True:
    with Timer('Time elapsed'):
        with torch.no_grad():
            output = netG(torch.zeros([batchSize, nz, 1, 1], device=device))
            print(output.mean())

Now every iteration takes about 12.8 seconds. So jodag was exactly right: it comes down to the asynchronous calls to the GPU and how PyTorch handles everything internally. The fast timings only measured how long it took to enqueue the GPU work, not to execute it, and printing output.mean() forces a device-to-host copy that blocks until the GPU has actually finished the forward pass.
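As a side note, you can get honest per-iteration timings without consuming the output by synchronizing explicitly before stopping the clock. A minimal sketch, reusing netG, nz, batchSize, and device from the question; it also switches to time.perf_counter() (wall clock), since time.process_time() only counts CPU time:

import time
import torch

while True:
    start = time.perf_counter()
    with torch.no_grad():
        netG(torch.zeros([batchSize, nz, 1, 1], device=device))
    # torch.cuda.synchronize() blocks until all queued GPU work has finished,
    # so the measured interval covers the forward pass itself rather than
    # just the (cheap) kernel launches.
    torch.cuda.synchronize()
    print('Time elapsed: {:.5f}'.format(time.perf_counter() - start))

Measured this way, every iteration should show its true cost from the start, instead of fast launches followed by a sudden stall once the queue fills up.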

Regarding "python - Forward pass becomes 10000x slower after iterating for a while", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60086108/
