
python - Linear regression gradient


I have a very basic linear regression example. Here is the implementation (without regularization):

import numpy as np


class Learning:

    def assume(self, weights, x):
        # Hypothesis: h(x) = x · wᵀ
        return np.dot(x, np.transpose(weights))

    def cost(self, weights, x, y, lam):
        predict = self.assume(weights, x) \
            .reshape(len(x), 1)

        # Sum of squared residuals over all samples
        val = np.sum(np.square(predict - y), axis=0)
        assert val is not None

        assert val.shape == (1,)
        return val[0] / 2 * len(x)

    def grad(self, weights, x, y, lam):
        predict = self.assume(weights, x) \
            .reshape(len(x), 1)

        # Gradient of the squared-error cost, averaged over the samples
        val = np.sum(np.multiply(
            x, (predict - y)), axis=0)
        assert val is not None

        assert val.shape == weights.shape
        return val / len(x)

I want to verify the gradient with scipy.optimize, to see whether it is correct:

import scipy.optimize

learn = Learning()
INPUTS = np.array([[1, 2],
                   [1, 3],
                   [1, 6]])
OUTPUTS = np.array([[3], [5], [11]])
WEIGHTS = np.array([1, 1])

t_check_grad = scipy.optimize.check_grad(
    learn.cost, learn.grad, WEIGHTS, INPUTS, OUTPUTS, 0)
print(t_check_grad)
# Output will be 73.2241602235811!!!
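For reference, scipy.optimize.check_grad compares the analytic gradient against a finite-difference approximation of the cost and returns the 2-norm of their difference. A minimal sketch of the same idea is shown below; the finite_diff_grad helper and its eps default (roughly the square root of machine epsilon, which SciPy also uses) are illustrative assumptions, not part of SciPy:

def finite_diff_grad(f, w, *args, eps=1.49e-8):
    # Approximate each partial derivative with a forward difference,
    # perturbing one weight at a time.
    g = np.zeros(len(w))
    for i in range(len(w)):
        w_plus = w.astype(float)  # astype returns a copy, so w is untouched
        w_plus[i] += eps
        g[i] = (f(w_plus, *args) - f(w, *args)) / eps
    return g

approx = finite_diff_grad(learn.cost, WEIGHTS, INPUTS, OUTPUTS, 0)
analytic = learn.grad(WEIGHTS, INPUTS, OUTPUTS, 0)
print(np.sqrt(np.sum((analytic - approx) ** 2)))  # roughly the same large value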

I checked every calculation by hand from start to finish, and the implementation really does look correct. Yet the output shows an enormous difference! What is the reason?

Best Answer

In your cost function you should return

val[0] / (2 * len(x))

instead of val[0] / 2 * len(x). In Python, / and * have equal precedence and associate left to right, so the original expression is evaluated as (val[0] / 2) * len(x), which multiplies the cost by the number of samples instead of dividing by it. With that fix you get

print(t_check_grad)
# 1.20853633278e-07
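For completeness, here is the cost method with the fix applied (a minimal sketch based on the question's own code, otherwise unchanged):

def cost(self, weights, x, y, lam):
    # J(w) = sum((h(x) - y)^2) / (2 * m), without regularization
    predict = self.assume(weights, x).reshape(len(x), 1)
    val = np.sum(np.square(predict - y), axis=0)
    assert val.shape == (1,)
    return val[0] / (2 * len(x))  # parentheses force division by 2 * m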

Regarding python - Linear regression gradient, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52265087/
