
r - Linear regression gradient descent algorithms in R produce different results


I am trying to implement linear regression in R from scratch, without using any packages or libraries, on the following data:

UCI Machine Learning Repository, Bike-Sharing-Dataset

The linear regression itself is straightforward; here is the code:

data <- read.csv("Bike-Sharing-Dataset/hour.csv")

# Select the usable features
data1 <- data[, c("season", "mnth", "hr", "holiday", "weekday", "workingday", "weathersit", "temp", "atemp", "hum", "windspeed", "cnt")]

# Split the data (note: sample() is unseeded, so the split differs between runs)
trainingObs <- sample(nrow(data1), 0.70 * nrow(data1), replace = FALSE)

# Create the training dataset
trainingDS <- data1[trainingObs, ]

# Create the test dataset
testDS <- data1[-trainingObs, ]

x0 <- rep(1, nrow(trainingDS)) # column of 1's
x1 <- trainingDS[, c("season", "mnth", "hr", "holiday", "weekday", "workingday", "weathersit", "temp", "atemp", "hum", "windspeed")]

# create the x-matrix of explanatory variables
x <- as.matrix(cbind(x0, x1))

# create the y-matrix of the dependent variable
y <- as.matrix(trainingDS$cnt)
m <- nrow(y)

# closed-form OLS solution via the normal equations
solve(t(x) %*% x) %*% t(x) %*% y
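
As a quick sanity check (a suggestion beyond the original post, assuming the trainingDS defined above), the normal-equation solution should agree with R's built-in lm():

# sanity check: lm() should reproduce the normal-equation coefficients
fit <- lm(cnt ~ ., data = trainingDS) # "." uses every remaining column of trainingDS
coef(fit) # "(Intercept)" corresponds to the x0 column above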

The next step is to implement batch-update gradient descent, and this is where I run into problems. I don't know where the errors come from or how to fix them, but the code does run. The problem is that the values it produces are entirely different from the results of the regression, and I am not sure why.

The two versions of batch-update gradient descent that I implemented are as follows (the results of the two algorithms differ from each other and from the results of the regression):

# Gradient descent 1
gradientDesc <- function(x, y, learn_rate, conv_threshold, n, max_iter) {
  plot(x, y, col = "blue", pch = 20)
  m <- runif(1, 0, 1)
  c <- runif(1, 0, 1)
  yhat <- m * x + c
  MSE <- sum((y - yhat) ^ 2) / n
  converged = F
  iterations = 0
  while(converged == F) {
    ## Implement the gradient descent algorithm
    m_new <- m - learn_rate * ((1 / n) * (sum((yhat - y) * x)))
    c_new <- c - learn_rate * ((1 / n) * (sum(yhat - y)))
    m <- m_new
    c <- c_new
    yhat <- m * x + c
    MSE_new <- sum((y - yhat) ^ 2) / n
    if(MSE - MSE_new <= conv_threshold) {
      abline(c, m)
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
    iterations = iterations + 1
    if(iterations > max_iter) {
      abline(c, m)
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
  }
  return(paste("MSE=", MSE))
}

and:

grad <- function(x, y, theta) { # note that for readability, I redefined theta as a column vector
  gradient <- 1 / m * t(x) %*% (x %*% theta - y) # m is taken from the global environment (m <- nrow(y) above)
  return(gradient)
}

grad.descent <- function(x, maxit, alpha) {
  theta <- matrix(rep(0, length = ncol(x)), ncol = 1)
  for (i in 1:maxit) {
    theta <- theta - alpha * grad(x, y, theta) # y is also taken from the global environment
  }
  return(theta)
}

I would be grateful if someone could explain why these two functions produce different results. I also want to make sure that I have, in fact, implemented gradient descent correctly.

Finally, how can I plot the results of the descent for different learning rates and overlay this on the results of the regression itself?

EDIT: Here are the results of running both algorithms with alpha = .005 and 10,000 iterations:

1)

> gradientDesc(trainingDS, y, 0.005, 0.001, 32, 10000)
[1] "Optimal intercept: 2183458.95872599 Optimal slope: 62417773.0184353"

2)

> print(grad.descent(x, 10000, .005))
                   [,1]
x0            8.3681113
season       19.8399837
mnth         -0.3515479
hr            8.0269388
holiday     -16.2429750
weekday       1.9615369
workingday    7.6063719
weathersit  -12.0611266
temp        157.5315413
atemp       138.8019732
hum        -162.7948299
windspeed    31.5442471

Best Answer

To give an example of how to write a function like this in a better way, consider the following:

gradientDesc <- function(x, y, learn_rate, conv_threshold, max_iter) {
  n <- nrow(x)
  m <- runif(ncol(x), 0, 1) # m is a vector of dimension ncol(x), 1
  yhat <- x %*% m # since x already contains a constant, no need to add another one

  MSE <- sum((y - yhat) ^ 2) / n

  converged = F
  iterations = 0

  while(converged == F) {
    # batch gradient step: move m against the MSE gradient (1/n) * t(x) %*% (yhat - y)
    m <- m - learn_rate * (1/n * t(x) %*% (yhat - y))
    yhat <- x %*% m
    MSE_new <- sum((y - yhat) ^ 2) / n

    if(abs(MSE - MSE_new) <= conv_threshold) {
      converged = T
    }
    iterations = iterations + 1
    MSE <- MSE_new

    if(iterations >= max_iter) break
  }
  return(list(converged = converged,
              num_iterations = iterations,
              MSE = MSE_new,
              coefs = m))
}

To compare:

ols <- solve(t(x)%*%x)%*%t(x)%*%y 

Now,

out <- gradientDesc(x,y, 0.005, 1e-7, 200000)

data.frame(ols, out$coefs)
                    ols    out.coefs
x0           33.0663095   35.2995589
season       18.5603565   18.5779534
mnth         -0.1441603   -0.1458521
hr            7.4374031    7.4420685
holiday     -21.0608520  -21.3284449
weekday       1.5115838    1.4813259
workingday    5.9953383    5.9643950
weathersit   -0.2990723   -0.4073493
temp        100.0719903  147.1157262
atemp       226.9828394  174.0260534
hum        -225.7411524 -225.2686640
windspeed    12.3671942    9.5792498

Here, x refers to the x defined in your first code chunk. Note the similarity between the coefficients. However, also note that

out$converged
[1] FALSE

so you can increase the accuracy by increasing the number of iterations or by adjusting the step size. It may also help to scale your variables first.
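
A minimal sketch of that scaling suggestion (an addition, not code from the original answer): standardise every predictor except the intercept column and rerun the same gradient descent, which should then converge in far fewer iterations.

# sketch only: standardise all columns of x except the intercept x0
x_scaled <- x
x_scaled[, -1] <- scale(x[, -1])

out_scaled <- gradientDesc(x_scaled, y, 0.005, 1e-7, 200000)
out_scaled$converged # convergence is typically much faster on standardised data

# the slopes now refer to standardised predictors; divide each by the original
# column's standard deviation (and adjust the intercept) to compare with ols

As for the plotting question at the end of the post, one possible sketch (again an assumption, not part of the original answer) is to fit with a few learning rates and plot the resulting coefficients against the OLS solution:

# sketch only: the rates are illustrative; large rates can diverge on unscaled data
rates <- c(0.001, 0.002, 0.005)
fits <- sapply(rates, function(a) gradientDesc(x, y, a, 1e-7, 200000)$coefs)

matplot(ols, fits, pch = 1:3, col = 1:3,
        xlab = "OLS coefficient", ylab = "gradient descent coefficient")
abline(0, 1, lty = 2) # points on this line agree exactly with the regression
legend("topleft", legend = paste("alpha =", rates), pch = 1:3, col = 1:3)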

This post, "Linear regression gradient descent algorithms in R produce different results", is based on a question on Stack Overflow: https://stackoverflow.com/questions/46163492/
