
python - Increasing cost for linear regression


For training purposes, I implemented linear regression in Python. The problem is that the cost is increasing instead of decreasing. For the data, I am using the Airfoil Self-Noise dataset. The data is available here.

I import the data as follows:

import pandas as pd

def features():
    features = pd.read_csv("data/airfoil_self_noise/airfoil_self_noise.dat.txt", sep="\t", header=None)

    X = features.iloc[:, 0:5]
    Y = features.iloc[:, 5]

    return X.values, Y.values.reshape(Y.shape[0], 1)
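
As a quick sanity check (not part of the original post), the loader can be called directly to confirm the shapes; for the full UCI Airfoil Self-Noise file this should give five feature columns and one target column:

X, Y = features()
# Expect X to be (n_samples, 5) and Y to be (n_samples, 1); the full UCI file has 1503 rows.
print(X.shape, Y.shape)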

My linear regression code is as follows:

import numpy as np
import random

class linearRegression():

    def __init__(self, learning_rate=0.01, max_iter=20):
        """
        Initialize the hyperparameters of the linear regression.

        :param learning_rate: the learning rate
        :param max_iter: the max number of iterations to perform
        """

        self.lr = learning_rate
        self.max_iter = max_iter
        self.m = None
        self.weights = None
        self.bias = None

    def fit(self, X, Y):
        """
        Run the gradient descent algorithm.

        :param X: the inputs
        :param Y: the outputs
        :return:
        """

        self.m = X.shape[0]
        self.weights = np.random.normal(0, 0.1, (X.shape[1], 1))
        self.bias = random.normalvariate(0, 0.1)

        for iter in range(0, self.max_iter):

            A = self.__forward(X)
            dw, db = self.__backward(A, X, Y)

            J = (1 / (2 * self.m)) * np.sum(np.power((A - Y), 2))

            print("at iteration %s cost is %s" % (iter, J))

            self.weights = self.weights - self.lr * dw
            self.bias = self.bias - self.lr * db

    def predict(self, X):
        """
        Make predictions on the inputs.

        :param X: the inputs
        :return:
        """

        Y_pred = self.__forward(X)

        return Y_pred

    def __forward(self, X):
        """
        Compute the linear function on the inputs.

        :param X: the inputs
        :return:
        A: the activation
        """

        A = np.dot(X, self.weights) + self.bias

        return A

    def __backward(self, A, X, Y):
        """
        :param A: the activation
        :param X: the inputs
        :param Y: the outputs
        :return:
        dw: the gradient for the weights
        db: the gradient for the bias
        """

        dw = (1 / self.m) * np.dot(X.T, (A - Y))
        db = (1 / self.m) * np.sum(A - Y)

        return dw, db

I then instantiate the linearRegression class as follows:

from sklearn.model_selection import train_test_split

X, Y = features()
model = linearRegression()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
model.fit(X_train, y_train)

I am trying to figure out why the cost is increasing, but so far I have not been able to find the reason. If anyone could point me in the right direction, it would be much appreciated.

Best Answer

Usually, if you choose a large learning rate, you can run into this kind of problem. I have tried to go through your code, and my observations are:

  • Your cost function J seems fine.
  • But in your backward function, you seem to subtract your actual results from your predictions. By doing so, you may end up with negative weights, and since you subtract the product of the learning rate and the gradient from the weights, you end up with an increasing cost (one possible remedy is sketched below).
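
For what it's worth, a common remedy on this dataset is to standardize the features and use a smaller learning rate, since the raw airfoil columns differ in scale by several orders of magnitude (the frequency column reaches into the thousands of Hz) and a fixed learning rate of 0.01 can then make gradient descent diverge. The sketch below only illustrates that idea using the question's own features() and linearRegression; the scaler choice, learning rate, and iteration count are assumptions, not the answerer's verified fix.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, Y = features()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)

# Standardize the inputs: fit the scaler on the training split only,
# then apply the same transform to the test split.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Assumed hyperparameters: a smaller learning rate and more iterations
# than the defaults in the question's linearRegression class.
model = linearRegression(learning_rate=0.001, max_iter=1000)
model.fit(X_train_scaled, y_train)

Y_pred = model.predict(X_test_scaled)

Once the feature scales are comparable, the gradient steps no longer overshoot along the large-magnitude columns, so the cost printed inside fit should decrease from one iteration to the next.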

Regarding python - Increasing cost for linear regression, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52093695/
