
python - Gradient descent using Python's numpy matrix class

Reposted · Author: 行者123 · Updated: 2023-12-01 05:43:39

I am trying to implement a single-variable gradient descent algorithm in Python. I have tried many different approaches, but nothing works. Below is one example of what I tried. What am I doing wrong? Thanks in advance!

from numpy import *

class LinearRegression:

    def __init__(self, data_file):
        self.raw_data_ref = data_file
        self.theta = matrix([[0],[0]])
        self.iterations = 1500
        self.alpha = 0.001

    def format_data(self):
        data = loadtxt(self.raw_data_ref, delimiter=',')
        dataMatrix = matrix(data)
        x = dataMatrix[:,0]
        y = dataMatrix[:,1]
        m = y.shape[0]
        vec = mat(ones((m,1)))
        x = concatenate((vec,x), axis=1)
        return [x, y, m]

    def computeCost(self, x, y, m):
        predictions = x*self.theta
        squaredErrorsMat = power((predictions-y), 2)
        sse = squaredErrorsMat.sum(axis=0)
        cost = sse/(2*m)
        return cost

    def descendGradient(self, x, y, m):
        for i in range(self.iterations):

            predictions = x*self.theta
            errors = predictions - y
            sumDeriv1 = (multiply(errors, x[:,0])).sum(axis=0)
            sumDeriv2 = (multiply(errors, x[:,1])).sum(axis=0)

            print self.computeCost(x, y, m)

            tempTheta = self.theta
            tempTheta[0] = self.theta[0] - self.alpha*(1/m)*sumDeriv1
            tempTheta[1] = self.theta[1] - self.alpha*(1/m)*sumDeriv2

            self.theta[0] = tempTheta[0]
            self.theta[1] = tempTheta[1]

        return self.theta


regressor = LinearRegression('ex1data1.txt')
output = regressor.format_data()
regressor.descendGradient(output[0], output[1], output[2])
print regressor.theta

A small update: I previously tried doing it in a more "vectorized" way, as follows:

def descendGradient(self, x, y, m):
    for i in range(self.iterations):

        predictions = x*self.theta
        errors = predictions - y

        sumDeriv1 = (multiply(errors, x[:,0])).sum(axis=0)
        sumDeriv2 = (multiply(errors, x[:,1])).sum(axis=0)

        gammaMat = concatenate((sumDeriv1, sumDeriv2), axis=0)
        coeff = self.alpha*(1.0/m)
        updateMatrix = gammaMat*coeff
        print updateMatrix, gammaMat

        jcost = self.computeCost(x, y, m)
        print jcost
        tempTheta = self.theta
        tempTheta = self.theta - updateMatrix
        self.theta = tempTheta

    return self.theta

This produced a theta of [[-0.86221218],[0.88827876]].

Best Answer

You have two problems, both related to floating point:

1. Initialize your theta matrix like this:

self.theta = matrix([[0.0],[0.0]])
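Why this matters (a quick standalone check, not part of the original answer): matrix([[0],[0]]) infers an integer dtype, so numpy silently truncates any float value assigned into it, and theta can never leave zero:

```python
import numpy as np

t_int = np.matrix([[0], [0]])        # integer dtype inferred from the literals
t_int[0] = 0.7                       # silently truncated to 0 on assignment

t_float = np.matrix([[0.0], [0.0]])  # float dtype
t_float[0] = 0.7                     # stored as 0.7, as intended

print(t_int[0, 0], t_float[0, 0])
```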


2. Change the update lines, replacing (1/m) with (1.0/m):

tempTheta[0] = self.theta[0] - self.alpha*(1.0/m)*sumDeriv1
tempTheta[1] = self.theta[1] - self.alpha*(1.0/m)*sumDeriv2
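The reason (my note, not part of the accepted answer): the print statements show this is Python 2 code, where / between two integers is floor division, so 1/m is 0 for any m greater than 1 and the whole update term vanishes. Python 3 spells that same operation //:

```python
m = 1500
print(1 // m)    # floor division: 0, which is what Python 2's 1/m computed
print(1.0 / m)   # true division: a small but nonzero step size factor
```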



An unrelated note: your tempTheta variable is unnecessary.
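Putting both fixes together, here is a minimal self-contained sketch of the corrected loop (my own version, not from the thread: it uses plain ndarrays, which numpy recommends over the matrix class, and synthetic data since ex1data1.txt isn't available here):

```python
import numpy as np

def descend_gradient(x, y, alpha=0.01, iterations=1500):
    """Batch gradient descent for linear regression on (m, 2) design matrix x."""
    m = y.shape[0]
    theta = np.zeros((2, 1))       # float dtype by default: fix #1
    for _ in range(iterations):
        errors = x @ theta - y     # (m, 1) residuals
        grad = x.T @ errors / m    # (2, 1); float division: fix #2
        theta -= alpha * grad
    return theta

# Synthetic data following y = 2 + 3*x, to check the fit recovers [2, 3]
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, size=(100, 1))
x = np.hstack([np.ones((100, 1)), xs])  # prepend the intercept column
y = 2 + 3 * xs
theta = descend_gradient(x, y, alpha=0.5, iterations=5000)
print(theta.ravel())
```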

Regarding python - Gradient descent using Python's numpy matrix class, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16826049/
