
python - PyTorch next(iter(training_loader)) is extremely slow on simple data; does num_workers not help?

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 13:57:46

Here x_dat and y_dat are just very long one-dimensional tensors.

class FunctionDataset(Dataset):
    def __init__(self):
        x_dat, y_dat = data_product()

        self.length = len(x_dat)
        self.y_dat = y_dat
        self.x_dat = x_dat

    def __getitem__(self, index):
        sample = self.x_dat[index]
        label = self.y_dat[index]
        return sample, label

    def __len__(self):
        return self.length

...

data_set = FunctionDataset()

...

training_sampler = SubsetRandomSampler(train_indices)
validation_sampler = SubsetRandomSampler(validation_indices)

training_loader = DataLoader(data_set, sampler=training_sampler, batch_size=params['batch_size'], shuffle=False)
validation_loader = DataLoader(data_set, sampler=validation_sampler, batch_size=valid_size, shuffle=False)

I have also tried pinning the memory for both loaders. Setting num_workers > 0 leads to runtime errors between the processes (such as EOF errors and broken pipe errors). I retrieve my batches with:

x_val, target = next(iter(training_loader))
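For reference, here is a minimal sketch of how the memory pinning and worker settings mentioned above are usually passed to DataLoader; the particular values (pin_memory=True together with num_workers=2) are assumptions for illustration, not the exact configuration used in the question:

# Hypothetical loader configuration: pin_memory=True copies each batch into
# page-locked memory for faster host-to-GPU transfer, and num_workers > 0
# loads batches in background worker processes. Values here are assumptions.
training_loader = DataLoader(
    data_set,
    sampler=training_sampler,
    batch_size=params['batch_size'],
    shuffle=False,      # shuffle must stay False when a sampler is supplied
    pin_memory=True,
    num_workers=2,
)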

The entire data set would fit in memory/on the GPU, but I want to simulate batching for this experiment. Profiling my process gives me the following:

16276989 function calls (16254744 primitive calls) in 38.779 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1745/1 0.028 0.000 38.780 38.780 {built-in method builtins.exec}
1 0.052 0.052 38.780 38.780 simple aprox.py:3(<module>)
1 0.000 0.000 36.900 36.900 simple aprox.py:519(exploreHeatmap)
1 0.000 0.000 36.900 36.900 simple aprox.py:497(optFromSample)
1 0.033 0.033 36.900 36.900 simple aprox.py:274(train)
705/483 0.001 0.000 34.495 0.071 {built-in method builtins.next}
222 1.525 0.007 34.493 0.155 dataloader.py:311(__next__)
222 0.851 0.004 12.752 0.057 dataloader.py:314(<listcomp>)
3016001 11.901 0.000 11.901 0.000 simple aprox.py:176(__getitem__)
21 0.010 0.000 10.891 0.519 simple aprox.py:413(validationError)
443 1.380 0.003 9.664 0.022 sampler.py:136(__iter__)
663/221 2.209 0.003 8.652 0.039 dataloader.py:151(default_collate)
221 0.070 0.000 6.441 0.029 dataloader.py:187(<listcomp>)
442 6.369 0.014 6.369 0.014 {built-in method stack}
3060221 2.799 0.000 5.890 0.000 sampler.py:68(<genexpr>)
3060000 3.091 0.000 3.091 0.000 tensor.py:382(<lambda>)
222 0.001 0.000 1.985 0.009 sampler.py:67(__iter__)
222 1.982 0.009 1.982 0.009 {built-in method randperm}
663/221 0.002 0.000 1.901 0.009 dataloader.py:192(pin_memory_batch)
221 0.000 0.000 1.899 0.009 dataloader.py:200(<listcomp>)
....

This suggests that the data loader is very slow compared to everything else in my experiment (training the model and many other computations, etc.). What is going wrong, and what is the best way to speed it up?

Best answer

When you retrieve a batch with

x, y = next(iter(training_loader))

you are actually creating a brand-new data loader iterator instance on every call (!). See this thread for more information.
What you should do instead is create the iterator once (per epoch):

training_loader_iter = iter(training_loader)

and then call next for each batch on that iterator:

for i in range(num_batches_in_epoch):
    x, y = next(training_loader_iter)
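Putting it together, a minimal sketch of a per-epoch training loop using this pattern; num_epochs and the loop body are placeholders, not code from the question:

for epoch in range(num_epochs):
    # build the iterator once per epoch, not once per batch
    training_loader_iter = iter(training_loader)
    for i in range(num_batches_in_epoch):
        x, y = next(training_loader_iter)
        # ... forward pass, loss, backward pass, optimizer step ...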

I ran into a similar problem before, and this also made the EOF errors you see when using multiple workers disappear.

Regarding "python - PyTorch next(iter(training_loader)) is extremely slow on simple data; does num_workers not help?", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/53280967/
