
python - Simple machine learning model training returns NaN


I am trying to get started with machine learning.

I wrote a simple example:

import numpy as np

# Prepare the data
input = np.array(list(range(100)))
output = np.array([x**2 + 2 for x in list(range(100))])

# Visualize Data
import matplotlib.pyplot as plt
plt.plot(input, output, 'ro')
plt.show()

# Define your Model
a = 1
b = 1

# y = ax + b # we put a bias in the model based on our knowledge

# Train your model == optimize the parameters so that they give a very small loss
for e in range(10):
    for x, y in zip(input, output):
        y_hat = a*x + b
        loss = 0.5*(y_hat-y)**2

        # Now that we have the loss, we want the gradients of the parameters a and b
        # derivative of loss wrt a = (-x)(y-(ax+b))
        # so gradient descent: a = a - (learning_rate)*(derivative wrt a)

        a = a - 0.1*(-x)*(y_hat-y)
        b = b - 0.1*(-1)*(y_hat-y)
    print("Epoch {0} Training loss = {1}".format(e, loss))


# Make predictions on new data

test_input = np.array(list(range(101,150)))
test_output = np.array([x**2.0 + 2 for x in list(range(101,150))])
model_predictions = np.array([a*x + b for x in list(range(101,150))])

plt.plot(test_input, test_output, 'ro')
plt.plot(test_input, model_predictions, '-')
plt.show()

Now when I run the code, I get:

ml_zero.py:22: RuntimeWarning: overflow encountered in double_scalars
loss = 0.5*(y_hat-y)**2
Epoch 0 Training loss = inf
ml_zero.py:21: RuntimeWarning: overflow encountered in double_scalars
y_hat = a*x + b
Epoch 1 Training loss = inf
ml_zero.py:21: RuntimeWarning: invalid value encountered in double_scalars
y_hat = a*x + b
Epoch 2 Training loss = nan
Epoch 3 Training loss = nan
Epoch 4 Training loss = nan
Epoch 5 Training loss = nan
Epoch 6 Training loss = nan
Epoch 7 Training loss = nan
Epoch 8 Training loss = nan
Epoch 9 Training loss = nan

Why is the error nan? I wrote the simplest possible model, yet with plain Python I got:

Traceback (most recent call last):
File "ml_zero.py", line 20, in <module>
loss = (y_hat-y)**2
OverflowError: (34, 'Result too large')

Then I converted all the Python lists to numpy arrays. Now I get the NaN error instead, and I just don't understand why such small values produce these errors.
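For what it's worth, the difference between the two failure modes can be reproduced in isolation (a minimal sketch of my own, separate from the model code): plain Python floats raise OverflowError when a result gets too large, while numpy float64 silently overflows to inf, and inf - inf is nan.

import numpy as np

big = np.float64(1e200)
print(big * big)              # inf (with a RuntimeWarning: overflow encountered)
print(big * big - big * big)  # nan, because inf - inf is undefined

# Plain Python floats raise instead:
# >>> 1e200 ** 2
# OverflowError: (34, 'Result too large')

So once one parameter update overflows to inf, the next y_hat is inf and every later value becomes nan.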

Daniele's answer was to use the mean squared loss instead, i.e. to divide the loss by the total number of inputs. With that change I get this output:

Epoch 0 Training loss = 1.7942781420994678e+36
Epoch 1 Training loss = 9.232837400842652e+70
Epoch 2 Training loss = 4.751367833814119e+105
Epoch 3 Training loss = 2.4455835946216386e+140
Epoch 4 Training loss = 1.2585275201812707e+175
Epoch 5 Training loss = 6.4767849625200624e+209
Epoch 6 Training loss = 3.331617554363007e+244
Epoch 7 Training loss = 1.714758503849272e+279
ml_zero.py:22: RuntimeWarning: overflow encountered in double_scalars
loss = 0.5*(y-y_hat)**2
Epoch 8 Training loss = inf
Epoch 9 Training loss = inf

At least it runs now, but I am trying to learn a linear function with stochastic gradient descent, which updates the parameters after the loss at each point.

I still don't understand how people use these models: the loss is supposed to decrease, so why does it increase with gradient descent?

Best Answer

You got your math wrong. When you compute the gradient update for GD, you have to divide by the number of samples in your dataset: that's why it is called mean squared error and not just squared error. Also, you may want to use smaller inputs, since you are fitting a power of x and it tends to grow... well, fast with x.
See this post for a good introduction to LR and GD.
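Spelled out in the notation of the code below, the quantities involved are (a worked restatement of the update):

MSE     = (1/N) * sum_i (y_i - (a*x_i + b))**2
dMSE/da = (2/N) * sum_i (-x_i) * (y_i - (a*x_i + b))
dMSE/db = (2/N) * sum_i (-1)   * (y_i - (a*x_i + b))

and each descent step is a <- a - lr * dMSE/da and b <- b - lr * dMSE/db (constant factors like the 2 just get absorbed into the learning rate).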

I took the liberty of rewriting your code a little; this should work:

import numpy as np
import matplotlib.pyplot as plt

# Prepare the data
input_ = np.linspace(0, 10, 100) # Don't assign user data to Python's input builtin
output = np.array([x**2 + 2 for x in input_])

# Define model
a = 1
b = 1

# Train model
N = input_.shape[0] # Number of samples
for e in range(10):
    loss = 0.
    for x, y in zip(input_, output):
        y_hat = a * x + b
        a = a - 0.1 * (2. / N) * (-x) * (y - y_hat)
        b = b - 0.1 * (2. / N) * (-1) * (y - y_hat)
        loss += 0.5 * ((y - y_hat) ** 2)
    loss /= N

    print("Epoch {:2d}\tLoss: {:4f}".format(e, loss))


# Predict on test data
test_input = np.linspace(0, 15, 150) # Training data [0-10] + test data [10 - 15]
test_output = np.array([x**2.0 + 2 for x in test_input])
model_predictions = np.array([a*x + b for x in test_input])

plt.plot(test_input, test_output, 'ro')
plt.plot(test_input, model_predictions, '-')
plt.show()

This should give you the following output:

Epoch  0    Loss: 33.117127
Epoch  1    Loss: 42.949756
Epoch  2    Loss: 40.733332
Epoch  3    Loss: 38.657764
Epoch  4    Loss: 36.774646
Epoch  5    Loss: 35.067299
Epoch  6    Loss: 33.520409
Epoch  7    Loss: 32.119958
Epoch  8    Loss: 30.853112
Epoch  9    Loss: 29.708126

And this is the output plot:

[Figure: test data plotted as red dots with the linear model's predictions overlaid]

Cheers

EDIT: the OP asked about SGD. The answer above is still valid code, but it implements standard GD (which iterates over the whole dataset at a time).
For SGD, the main loop has to be changed slightly:

for e in range(10):
    for x, y in zip(input_, output):
        y_hat = a * x + b
        loss = 0.5 * ((y - y_hat) ** 2)
        a = a - 0.01 * (2.) * (-x) * (y - y_hat)
        b = b - 0.01 * (2.) * (-1) * (y - y_hat)

    print("Epoch {:2d}\tLoss: {:4f}".format(e, loss))

Note that I had to lower the learning rate to avoid divergence. When you train with a batch size of 1, avoiding this kind of gradient explosion becomes really important, because a single sample can badly perturb your descent toward the optimum.
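A rough back-of-the-envelope check (my own sketch, holding b fixed) makes the divergence concrete. For a single sample (x, y) with residual r = y - (a*x + b), the step above gives

a_new = a + 2 * lr * x * r
r_new = y - (a_new * x + b) = r * (1 - 2 * lr * x**2)

so an update only shrinks the residual when |1 - 2 * lr * x**2| <= 1, i.e. lr <= 1/x**2. With inputs up to x = 10 that bound is exactly 0.01, which is why the lower learning rate works here, while the original code (x up to 99 with lr = 0.1) multiplied the residual by a huge factor on every update, which is precisely the blow-up to inf and then nan.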

Sample output:

Epoch  0    Loss: 0.130379
Epoch  1    Loss: 0.123007
Epoch  2    Loss: 0.117352
Epoch  3    Loss: 0.112991
Epoch  4    Loss: 0.109615
Epoch  5    Loss: 0.106992
Epoch  6    Loss: 0.104948
Epoch  7    Loss: 0.103353
Epoch  8    Loss: 0.102105
Epoch  9    Loss: 0.101127

Regarding python - Simple machine learning model training returns NaN, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48579891/
