
python - Dealing with negative values produced by my neural network model

Reposted · Author: 行者123 · Updated: 2023-11-30 08:57:17

I have a simple nn model that looks like this:

class TestRNN(nn.Module):
    def __init__(self, batch_size, n_steps, n_inputs, n_neurons, n_outputs):
        super(TestRNN, self).__init__()
        ...
        self.basic_rnn = nn.RNN(self.n_inputs, self.n_neurons)
        self.FC = nn.Linear(self.n_neurons, self.n_outputs)

    def forward(self, X):
        ...
        lstm_out, self.hidden = self.basic_rnn(X, self.hidden)
        out = self.FC(self.hidden)

        return out.view(-1, self.n_outputs)

I am using criterion = nn.CrossEntropyLoss() to compute my loss. The sequence of operations is as follows:

# get the inputs
x, y = data

# forward + backward + optimize
outputs = model(x)
loss = criterion(outputs, y)

My training data x is normalized and looks like this:

tensor([[[7.0711e-01, 7.0711e-01, 0.0000e+00,  ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[2.6164e-02, 2.6164e-02, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 1.3108e-05],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[9.5062e-01, 3.1036e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.3717e-05, 3.2659e-07, ..., 0.0000e+00,
0.0000e+00, 3.2659e-07]],

[[5.1934e-01, 5.4041e-01, 6.8083e-06, ..., 0.0000e+00,
0.0000e+00, 6.8083e-06],
[5.2340e-01, 6.0007e-01, 2.7062e-06, ..., 0.0000e+00,
0.0000e+00, 2.7062e-06],
[8.1923e-01, 5.7346e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],

[[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0714e-01, 7.0708e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 7.0407e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],

...,

[[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.1852e-01, 2.3411e-02, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0775e-01, 7.0646e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 3.9888e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],

[[5.9611e-01, 5.8796e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0710e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.7538e-01, 2.4842e-01, 1.7787e-06, ..., 0.0000e+00,
0.0000e+00, 1.7787e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],

[[5.2433e-01, 5.2433e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.3155e-01, 1.3155e-01, 0.0000e+00, ..., 8.6691e-02,
9.7871e-01, 0.0000e+00],
[7.4412e-01, 6.6311e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 9.6093e-07]]])

Typical values of outputs and y passed to the criterion look like this:

tensor([[-0.0513],
[-0.0445],
[-0.0514],
[-0.0579],
[-0.0539],
[-0.0323],
[-0.0521],
[-0.0294],
[-0.0372],
[-0.0518],
[-0.0516],
[-0.0501],
[-0.0312],
[-0.0496],
[-0.0436],
[-0.0514],
[-0.0518],
[-0.0465],
[-0.0530],
[-0.0471],
[-0.0344],
[-0.0502],
[-0.0536],
[-0.0594],
[-0.0356],
[-0.0371],
[-0.0513],
[-0.0528],
[-0.0621],
[-0.0404],
[-0.0403],
[-0.0562],
[-0.0510],
[-0.0580],
[-0.0516],
[-0.0556],
[-0.0063],
[-0.0459],
[-0.0494],
[-0.0460],
[-0.0631],
[-0.0525],
[-0.0454],
[-0.0509],
[-0.0522],
[-0.0426],
[-0.0527],
[-0.0423],
[-0.0572],
[-0.0308],
[-0.0452],
[-0.0555],
[-0.0479],
[-0.0513],
[-0.0514],
[-0.0498],
[-0.0514],
[-0.0471],
[-0.0505],
[-0.0467],
[-0.0485],
[-0.0520],
[-0.0517],
[-0.0442]], device='cuda:0', grad_fn=<ViewBackward>)
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')

When the criterion is applied, I get the following error (run with CUDA_LAUNCH_BLOCKING=1):

/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=111 error=59 : device-side assert triggered

My model is producing negative values, which seems to cause the error message above. How can I fix this?

Best Answer

TL;DR

You have two options:

  1. Make the second dimension of outputs have size 2 instead of 1.
  2. Use nn.BCEWithLogitsLoss instead of nn.CrossEntropyLoss.

I don't think the problem is the negative values; it is the shape of outputs.

Looking at your array y, I see that you have 2 different classes (maybe more, but let's assume it's 2). That means the last dimension of outputs should be 2: outputs needs to give a "score" for each of the 2 classes (see the documentation). Scores can be negative, zero, or positive. But your outputs has shape [64, 1] rather than the required [64, 2].
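A minimal sketch of option 1, assuming 64 samples and 2 classes as in the tensors above (the random inputs here stand in for the model's real outputs): once the last dimension is 2, nn.CrossEntropyLoss accepts the raw scores directly, negative or not.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Raw scores for each of the 2 classes: shape [64, 2] instead of [64, 1].
# In the real model this would mean constructing it with n_outputs=2.
outputs = torch.randn(64, 2)

# Integer class labels in {0, 1}, shape [64], matching the y in the question.
y = torch.randint(0, 2, (64,))

# CrossEntropyLoss applies log-softmax internally, so negative scores are fine.
loss = criterion(outputs, y)
```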

One of the steps nn.CrossEntropyLoss() performs is converting these scores into a probability distribution over the two classes, using the softmax operation. However, for binary classification (i.e. classification with exactly 2 classes, as in our case) there is another option: give a score for just one class, convert it to a probability for that class with the sigmoid function, and take "1 - p" as the probability of the other class. With this option, outputs only needs to give a score for one of the two classes, as it currently does. To choose it, change nn.CrossEntropyLoss to nn.BCEWithLogitsLoss. You can then pass it outputs and y as you do now, but note that the shape of outputs must exactly match the shape of y, so you need to pass outputs[:, 0] instead of outputs, and you also need to convert y to float: y.float(). The call is therefore criterion(outputs[:, 0], y.float()).
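A minimal sketch of option 2, again with random stand-ins for the real outputs and y: the model keeps its single-score output of shape [64, 1], and the loss call is adjusted as described.

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

# A single raw score per sample, as the model currently produces: shape [64, 1].
outputs = torch.randn(64, 1)

# Integer labels in {0, 1}, shape [64], as in the question.
y = torch.randint(0, 2, (64,))

# Drop the trailing dimension so shapes match, and cast the targets to float.
# BCEWithLogitsLoss applies the sigmoid internally, so negative scores are fine.
loss = criterion(outputs[:, 0], y.float())
```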

Regarding "python - Dealing with negative values produced by my neural network model", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54870863/
