
python - Stochastic gradient descent for linear regression via partial derivatives


I am manually implementing stochastic gradient descent for linear regression using the partial derivatives (df/dm) and (df/db).

The goal is to pick the weights w0 at random and then converge them. Since this is stochastic, the dataset must be resampled on every pass.

The learning rate should start at 1 and be halved after each pass, and the loop should stop when w(k+1) equals w(k) (k = 1, 2, 3, ...).
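Written out (a restatement of the spec above in LaTeX, where L is the squared error on one sample), the intended scheme is:

$$
\frac{\partial L}{\partial w} = -2x\bigl(y - (w^\top x + b)\bigr), \qquad
\frac{\partial L}{\partial b} = -2\bigl(y - (w^\top x + b)\bigr),
$$
$$
w_{k+1} = w_k - r_k \frac{\partial L}{\partial w}, \qquad
b_{k+1} = b_k - r_k \frac{\partial L}{\partial b}, \qquad
r_1 = 1,\ r_{k+1} = \frac{r_k}{2},
$$

stopping once $w_{k+1} = w_k$.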

This is implemented on scikit-learn's Boston housing dataset.
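(For context: the code below assumes `bos` is a pandas DataFrame of that dataset with a 'price' column. The question does not show the loading step; a plausible reconstruction, using load_boston, which only exists in scikit-learn < 1.2:)

from sklearn.datasets import load_boston  # removed in scikit-learn >= 1.2
import pandas as pd

# Hypothetical reconstruction of the DataFrame the question calls `bos`
boston = load_boston()
bos = pd.DataFrame(boston.data, columns=boston.feature_names)
bos['price'] = boston.target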

Since I'm new to Python, I haven't used functions. Here is the code:

import numpy as np

r = 1
m_deriv = 0
b_deriv = 0
learning_rate = 1
it = 1
w0_random = np.random.rand(13)      # random initial weights, one per feature
w0 = np.asmatrix(w0_random).T       # column vector, shape (13, 1)
b = np.random.rand()
b0 = np.random.rand()
while True:
    df_sample = bos.sample(100)     # resample the dataset on every pass
    price = df_sample['price']
    price = np.asmatrix(price)
    xi = np.asmatrix(df_sample.drop('price', axis=1))
    N = len(xi)
    for i in range(N):
        # -2x * (y - (mx + b))
        m_deriv += np.dot(-2*xi[i].T, (price[:,i] - np.dot(xi[i], w0_random) + b))
        # -2(y - (mx + b))
        b_deriv += -2*(price[:,i] - (np.dot(xi[i], w0_random) + b))
    w0_new = m_deriv * learning_rate
    b0_new = b_deriv * learning_rate
    w1 = w0 - w0_new
    b1 = b0 - b0_new
    it += 1
    if (w0 == w1).all():
        break
    else:
        w0 = w1
        b0 = b1
        learning_rate = learning_rate / 2

As the loop runs, the w and b values I get grow very large; they never converge. Where is the loop going wrong to produce these values, and how do I fix it?

Best Answer

In the case above, good results can be obtained by applying StandardScaler to xi before processing it, and by using the current weights w1 instead of w0_random.

from sklearn.datasets import load_boston   # removed in scikit-learn >= 1.2
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np

boston = load_boston()
bos = pd.DataFrame(boston.data, columns=boston.feature_names)
bos['PRICE'] = boston.target
X = bos.drop('PRICE', axis=1)
Y = bos['PRICE']
df_sample = X[:100]
price = Y[:100]
xi_1 = []
price_1 = []
N = len(df_sample)
for j in range(N):
    # standardize the features before they enter the gradient
    scaler = StandardScaler()
    scaler.fit(df_sample)
    xtrs = scaler.transform(df_sample)
    xi_1.append(xtrs)
    yi = np.asmatrix(price)
    price_1.append(yi)
#print(price_1)
#print(xi_1)
xi = xi_1
price = price_1
r = 1
m_deriv = 0
b_deriv = 0
learning_rate = 1
it = 1
w0_random = np.random.rand(13)
w0 = np.asmatrix(w0_random).T
b = np.random.rand()
b0 = np.random.rand()
converged = False               # flag so the break also ends the outer loop
while not converged:
    for i in range(N):
        # -2x * (y - (mx + b)), using the current weights w1, not w0_random
        w1 = w0
        b1 = b0
        m_deriv = np.dot(-2*xi[i].T, (price[i] - np.dot(xi[i], w1) + b1))
        # -2(y - (mx + b))
        b_deriv = -2*(price[i] - (np.dot(xi[i], w1) + b1))
        w0_new = m_deriv * learning_rate
        b0_new = b_deriv * learning_rate
        w1 = w0 - w0_new
        b1 = b0 - b0_new
        it += 1
        if (w0 == w1).all():
            converged = True
            break
        else:
            w0 = w1
            b0 = b1
            learning_rate = learning_rate / 2
    print("m_deriv =", m_deriv)
    print("b_deriv =", b_deriv)

Regarding "python - Stochastic gradient descent for linear regression via partial derivatives", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50328545/
