
R: rpart tree grows using two explanatory variables, but no longer grows after removing the less important variable

Reposted · Author: 行者123 · Updated: 2023-12-02 02:42:21

Data: I am using the "attrition" dataset from the rsample package.

Question: Using the attrition dataset and the rpart library, I can grow a tree with the formula "Attrition ~ OverTime + JobRole", where OverTime is chosen as the first split. However, when I try to grow the tree without the JobRole variable (i.e. "Attrition ~ OverTime"), the tree does not split and returns only the root node. This happens with the rpart function as well as with caret's train function (method = "rpart").

This puzzles me, because I thought the CART algorithm implemented in rpart chooses the best variable to split on in an iterative, greedy fashion, without "looking ahead" to see how the presence of other variables affects its choice of the best split. If the algorithm chooses OverTime as a worthwhile first split when there are two explanatory variables, why does it not choose OverTime as a worthwhile first split after the JobRole variable is removed?

I am using R version 3.4.2 and RStudio version 1.1.442 on Windows 7.

Research: I found similar Stack Overflow questions here and here, but neither has a complete answer.

As far as I can tell, the rpart docs seem to say, on page 5, that the rpart algorithm does not use "look-ahead" rules:

One way around both of these problems is to use look-ahead rules; but these are computationally very expensive. Instead rpart uses one of several measures of impurity, or diversity, of a node.

Additionally, there are similar descriptions here and here.
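For reference, the "measure of impurity" the docs mention is, by default for classification, the Gini index. A minimal sketch of the calculation (written in Python purely for illustration; this is not rpart's internal code), using the root-node class proportions from the rpart output below (83.9% "No", 16.1% "Yes"):

```python
def gini(p_no, p_yes):
    """Gini impurity of a two-class node: 1 - sum of squared class proportions."""
    return 1.0 - p_no**2 - p_yes**2

# A pure node has zero impurity; a 50/50 node is maximally impure.
print(gini(1.0, 0.0))                     # 0.0
print(gini(0.5, 0.5))                     # 0.5

# Root node of the attrition tree below:
print(round(gini(0.8388, 0.1612), 4))     # 0.2704
```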

Code: here is a reprex. Any insight would be great - thanks!

suppressPackageStartupMessages(library(rsample))                                                                                                           
#> Warning: package 'rsample' was built under R version 3.4.4
suppressPackageStartupMessages(library(rpart))
suppressPackageStartupMessages(library(caret))
suppressPackageStartupMessages(library(dplyr))
#> Warning: package 'dplyr' was built under R version 3.4.3
suppressPackageStartupMessages(library(purrr))

#################################################

# look at data
data(attrition)
attrition_subset <- attrition %>% select(Attrition, OverTime, JobRole)
attrition_subset %>% glimpse()
#> Observations: 1,470
#> Variables: 3
#> $ Attrition <fctr> Yes, No, Yes, No, No, No, No, No, No, No, No, No, N...
#> $ OverTime <fctr> Yes, No, Yes, Yes, No, No, Yes, No, No, No, No, Yes...
#> $ JobRole <fctr> Sales_Executive, Research_Scientist, Laboratory_Tec...
map_dfr(.x = attrition_subset, .f = ~ sum(is.na(.x)))
#> # A tibble: 1 x 3
#> Attrition OverTime JobRole
#> <int> <int> <int>
#> 1 0 0 0

#################################################

# with rpart
attrition_rpart_w_JobRole <- rpart(Attrition ~ OverTime + JobRole, data = attrition_subset, method = "class", cp = .01)
attrition_rpart_w_JobRole
#> n= 1470
#>
#> node), split, n, loss, yval, (yprob)
#> * denotes terminal node
#>
#> 1) root 1470 237 No (0.83877551 0.16122449)
#> 2) OverTime=No 1054 110 No (0.89563567 0.10436433) *
#> 3) OverTime=Yes 416 127 No (0.69471154 0.30528846)
#> 6) JobRole=Healthcare_Representative,Manager,Manufacturing_Director,Research_Director 126 11 No (0.91269841 0.08730159) *
#> 7) JobRole=Human_Resources,Laboratory_Technician,Research_Scientist,Sales_Executive,Sales_Representative 290 116 No (0.60000000 0.40000000)
#> 14) JobRole=Human_Resources,Research_Scientist,Sales_Executive 204 69 No (0.66176471 0.33823529) *
#> 15) JobRole=Laboratory_Technician,Sales_Representative 86 39 Yes (0.45348837 0.54651163) *

attrition_rpart_wo_JobRole <- rpart(Attrition ~ OverTime, data = attrition_subset, method = "class", cp = .01)
attrition_rpart_wo_JobRole
#> n= 1470
#>
#> node), split, n, loss, yval, (yprob)
#> * denotes terminal node
#>
#> 1) root 1470 237 No (0.8387755 0.1612245) *

#################################################

# with caret
attrition_caret_w_JobRole_non_dummies <- train(x = attrition_subset[ , -1], y = attrition_subset[ , 1], method = "rpart", tuneGrid = expand.grid(cp = .01))
attrition_caret_w_JobRole_non_dummies$finalModel
#> n= 1470
#>
#> node), split, n, loss, yval, (yprob)
#> * denotes terminal node
#>
#> 1) root 1470 237 No (0.83877551 0.16122449)
#> 2) OverTime=No 1054 110 No (0.89563567 0.10436433) *
#> 3) OverTime=Yes 416 127 No (0.69471154 0.30528846)
#> 6) JobRole=Healthcare_Representative,Manager,Manufacturing_Director,Research_Director 126 11 No (0.91269841 0.08730159) *
#> 7) JobRole=Human_Resources,Laboratory_Technician,Research_Scientist,Sales_Executive,Sales_Representative 290 116 No (0.60000000 0.40000000)
#> 14) JobRole=Human_Resources,Research_Scientist,Sales_Executive 204 69 No (0.66176471 0.33823529) *
#> 15) JobRole=Laboratory_Technician,Sales_Representative 86 39 Yes (0.45348837 0.54651163) *

attrition_caret_w_JobRole <- train(Attrition ~ OverTime + JobRole, data = attrition_subset, method = "rpart", tuneGrid = expand.grid(cp = .01))
attrition_caret_w_JobRole$finalModel
#> n= 1470
#>
#> node), split, n, loss, yval, (yprob)
#> * denotes terminal node
#>
#> 1) root 1470 237 No (0.8387755 0.1612245)
#> 2) OverTimeYes< 0.5 1054 110 No (0.8956357 0.1043643) *
#> 3) OverTimeYes>=0.5 416 127 No (0.6947115 0.3052885)
#> 6) JobRoleSales_Representative< 0.5 392 111 No (0.7168367 0.2831633) *
#> 7) JobRoleSales_Representative>=0.5 24 8 Yes (0.3333333 0.6666667) *

attrition_caret_wo_JobRole <- train(Attrition ~ OverTime, data = attrition_subset, method = "rpart", tuneGrid = expand.grid(cp = .01))
attrition_caret_wo_JobRole$finalModel
#> n= 1470
#>
#> node), split, n, loss, yval, (yprob)
#> * denotes terminal node
#>
#> 1) root 1470 237 No (0.8387755 0.1612245) *

Best Answer

This makes perfect sense. There is a lot of extra code above, so I will repeat the important parts.

library(rsample)
library(rpart)
data(attrition)

rpart(Attrition ~ OverTime + JobRole, data=attrition)
n= 1470
node), split, n, loss, yval, (yprob)
* denotes terminal node

1) root 1470 237 No (0.83877551 0.16122449)
2) OverTime=No 1054 110 No (0.89563567 0.10436433) *
3) OverTime=Yes 416 127 No (0.69471154 0.30528846)
6) JobRole=Healthcare_Representative,Manager,Manufacturing_Director,Research_Director 126 11 No (0.91269841 0.08730159) *
7) JobRole=Human_Resources,Laboratory_Technician,Research_Scientist,Sales_Executive,Sales_Representative 290 116 No (0.60000000 0.40000000)
14) JobRole=Human_Resources,Research_Scientist,Sales_Executive 204 69 No (0.66176471 0.33823529) *
15) JobRole=Laboratory_Technician,Sales_Representative 86 39 Yes (0.45348837 0.54651163) *

rpart(Attrition ~ OverTime, data=attrition)
n= 1470
node), split, n, loss, yval, (yprob)
* denotes terminal node

1) root 1470 237 No (0.8387755 0.1612245) *

Look at the first model (the one with both variables). Right below the root, we have:

1) root 1470 237 No (0.83877551 0.16122449)        
2) OverTime=No 1054 110 No (0.89563567 0.10436433) *
3) OverTime=Yes 416 127 No (0.69471154 0.30528846)

The model goes on to split node 3 (OverTime=Yes), but using JobRole. Since JobRole is absent from the second model, rpart cannot make that additional split. Notice, however, that at both node 2 and node 3, Attrition=No is the majority class. At node 3, 69.5% of the instances are "No" and 30.5% are "Yes". So at both node 2 and node 3 we would predict "No". Since the prediction is the same on both sides of the split, the split buys nothing and is pruned away. The root node alone suffices, predicting "No" for every instance.
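This can be checked directly from the node counts printed above (a back-of-the-envelope check, written in Python for illustration; the counts come from the rpart output, and the cp rule is paraphrased rather than rpart's exact internals). The OverTime split reduces Gini impurity, which is why it is grown at all, but it does not reduce the misclassification loss that the cp = .01 pruning threshold is measured against:

```python
# Node counts from the printed tree:
#   root:         n=1470, loss=237  (237 "Yes" cases misclassified as "No")
#   OverTime=No:  n=1054, loss=110
#   OverTime=Yes: n=416,  loss=127

root_loss = 237
split_loss = 110 + 127            # both children still predict "No"
improvement = (root_loss - split_loss) / root_loss
print(split_loss)                 # 237 -> zero reduction in loss
print(improvement >= 0.01)        # False: fails the cp = .01 threshold

# The split does reduce Gini impurity, though:
def gini(n_no, n_yes):
    n = n_no + n_yes
    return 1.0 - (n_no / n) ** 2 - (n_yes / n) ** 2

root_gini = gini(1470 - 237, 237)
child_gini = (1054 * gini(1054 - 110, 110) + 416 * gini(416 - 127, 127)) / 1470
print(round(root_gini, 3), round(child_gini, 3))  # 0.27 0.254
```

With JobRole available, node 3 can be split further into nodes that predict different classes, so the OverTime split does eventually pay off in loss and survives pruning.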

Regarding "R: rpart tree grows using two explanatory variables, but no longer grows after removing the less important variable", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/50072536/
