
r - Why do we need to tune lambda with caret::train(..., method = "glmnet") and cv.glmnet()?


As we have seen, both caret::train(..., method = "glmnet") with cross-validation and the cv.glmnet() implementation can find the lambda.min that minimizes the cross-validation error, and the final best-fit model should be the one fit at lambda.min. So why do we need to set up a grid of lambda values during training at all?
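For context, here is a minimal sketch of how cv.glmnet() locates lambda.min, using simulated data (the toy x and y below are assumptions for illustration, not from the question):

library(glmnet)

set.seed(42)
x <- matrix(rnorm(100 * 20), nrow = 100)  # toy predictor matrix
y <- rnorm(100)                           # toy continuous response

# 10-fold cross-validation over glmnet's automatically chosen lambda path
cvfit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)

cvfit$lambda.min              # lambda with the smallest mean CV error
coef(cvfit, s = "lambda.min") # coefficients of the corresponding fit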

Best Answer

We use a custom tuning grid for glmnet models because the default tuning grid is very small, and we may want to explore many more potential glmnet models.

glmnet can fit two different kinds of penalized models, and it has two tuning parameters:

  1. alpha
    • Ridge regression (alpha = 0)
    • Lasso regression (alpha = 1)
  2. lambda
    • The strength of the penalty on the coefficients

The glmnet model can fit many models at once: for a single value of alpha, all values of lambda are fit simultaneously. This lets us pass a large number of lambda values, which control the amount of penalization in the model.
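To see what "all values of lambda at once" means, here is a small sketch on simulated data (the toy x and y are assumptions for illustration):

library(glmnet)

set.seed(42)
x <- matrix(rnorm(100 * 20), nrow = 100)  # toy predictors
y <- rnorm(100)                           # toy response

# A single call fits the entire regularization path for one alpha
fit <- glmnet(x, y, alpha = 0)

length(fit$lambda)  # one fit covers a whole sequence (typically 100) of lambdas
coef(fit, s = 0.5)  # coefficients at any lambda along (or near) the path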

train() is smart enough to fit only one model per value of alpha, passing all of the lambda values at once to that single model for simultaneous fitting.

Example:
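The code below uses myControl without defining it. A plausible definition, inferred from the output (10-fold CV, ROC/Sens/Spec summaries, verbose per-fold logging) and therefore an assumption rather than part of the original answer, is:

library(caret)

# Assumed control object: 10-fold CV with two-class summary statistics
# (ROC/Sens/Spec), class probabilities, and verbose per-fold logging
myControl <- trainControl(
  method = "cv",
  number = 10,
  summaryFunction = twoClassSummary,
  classProbs = TRUE,
  verboseIter = TRUE
)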

# Make a custom tuning grid: 2 alpha values x 10 lambda values
tuneGrid <- expand.grid(alpha = 0:1, lambda = seq(0.0001, 1, length = 10))

# Fit a model (overfit is the two-class example data set used in the answer)
model <- train(
  y ~ ., overfit,
  method = "glmnet",
  tuneGrid = tuneGrid,
  trControl = myControl
)


# Sample Output
Warning message: The metric "Accuracy" was not in the result set. ROC will be used instead.
+ Fold01: alpha=0, lambda=1
- Fold01: alpha=0, lambda=1
+ Fold01: alpha=1, lambda=1
- Fold01: alpha=1, lambda=1
+ Fold02: alpha=0, lambda=1
- Fold02: alpha=0, lambda=1
+ Fold02: alpha=1, lambda=1
- Fold02: alpha=1, lambda=1
+ Fold03: alpha=0, lambda=1
- Fold03: alpha=0, lambda=1
+ Fold03: alpha=1, lambda=1
- Fold03: alpha=1, lambda=1
+ Fold04: alpha=0, lambda=1
- Fold04: alpha=0, lambda=1
+ Fold04: alpha=1, lambda=1
- Fold04: alpha=1, lambda=1
+ Fold05: alpha=0, lambda=1
- Fold05: alpha=0, lambda=1
+ Fold05: alpha=1, lambda=1
- Fold05: alpha=1, lambda=1
+ Fold06: alpha=0, lambda=1
- Fold06: alpha=0, lambda=1
+ Fold06: alpha=1, lambda=1
- Fold06: alpha=1, lambda=1
+ Fold07: alpha=0, lambda=1
- Fold07: alpha=0, lambda=1
+ Fold07: alpha=1, lambda=1
- Fold07: alpha=1, lambda=1
+ Fold08: alpha=0, lambda=1
- Fold08: alpha=0, lambda=1
+ Fold08: alpha=1, lambda=1
- Fold08: alpha=1, lambda=1
+ Fold09: alpha=0, lambda=1
- Fold09: alpha=0, lambda=1
+ Fold09: alpha=1, lambda=1
- Fold09: alpha=1, lambda=1
+ Fold10: alpha=0, lambda=1
- Fold10: alpha=0, lambda=1
+ Fold10: alpha=1, lambda=1
- Fold10: alpha=1, lambda=1
Aggregating results
Selecting tuning parameters
Fitting alpha = 1, lambda = 1 on full training set


# Print model to console
model


# Sample Output
glmnet

250 samples
200 predictors
2 classes: 'class1', 'class2'

No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 225, 225, 225, 225, 224, 226, ...
Resampling results across tuning parameters:

  alpha  lambda  ROC        Sens  Spec
  0      0.0001  0.3877717  0.00  0.9786232
  0      0.1112  0.4352355  0.00  1.0000000
  0      0.2223  0.4546196  0.00  1.0000000
  0      0.3334  0.4589674  0.00  1.0000000
  0      0.4445  0.4718297  0.00  1.0000000
  0      0.5556  0.4762681  0.00  1.0000000
  0      0.6667  0.4783514  0.00  1.0000000
  0      0.7778  0.4826087  0.00  1.0000000
  0      0.8889  0.4869565  0.00  1.0000000
  0      1.0000  0.4869565  0.00  1.0000000
  1      0.0001  0.3368659  0.05  0.9188406
  1      0.1112  0.5000000  0.00  1.0000000
  1      0.2223  0.5000000  0.00  1.0000000
  1      0.3334  0.5000000  0.00  1.0000000
  1      0.4445  0.5000000  0.00  1.0000000
  1      0.5556  0.5000000  0.00  1.0000000
  1      0.6667  0.5000000  0.00  1.0000000
  1      0.7778  0.5000000  0.00  1.0000000
  1      0.8889  0.5000000  0.00  1.0000000
  1      1.0000  0.5000000  0.00  1.0000000

ROC was used to select the optimal model using the largest value.
The final values used for the model were alpha = 1 and lambda = 1.


# Plot model
plot(model)

[Figure: plot(model) — cross-validated ROC across the lambda grid for alpha = 0 and alpha = 1]
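The chosen alpha/lambda pair and the full grid of resampled results can also be read directly off the fitted object:

# Best tuning parameter pair selected by train()
model$bestTune

# Resampled performance for every point in the tuning grid
model$results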

This Q&A on r - Why do we need to tune lambda with caret::train(..., method = "glmnet") and cv.glmnet()? is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/47526544/
