
algorithm - Scikit-Learn RFECV: number of features picked only from grid scores

Reposted. Author: 塔克拉玛干. Updated: 2023-11-03 03:16:55

From the scikit-learn RFE documentation: the algorithm successively selects smaller sets of features, keeping only the features with the highest weights. Features with low weights are dropped, and this process repeats until the number of remaining features matches the number specified by the user (or, by default, half the original number of features).
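The pruning behavior described above can be sketched as follows (a minimal example with assumed toy data, not taken from the question): RFE keeps discarding the lowest-weight features until only `n_features_to_select` remain.

```python
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.datasets import make_classification

# Toy data: 10 features, of which 3 are informative
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Ask RFE to keep exactly 3 features; step=1 drops one feature per round
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=3, step=1)
rfe.fit(X, y)

print(rfe.n_features_)  # 3: pruning stops at the requested size
print(rfe.ranking_)     # rank 1 marks the selected features
```

Note that RFE itself never looks at a validation score: the stopping point is fixed in advance by `n_features_to_select`.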

The RFECV docs indicate that features are ranked using RFE together with KFCV (k-fold cross-validation).

The code shown in the documentation example for RFECV uses a set of 25 features:

from sklearn.svm import SVC
# sklearn.cross_validation has been removed; StratifiedKFold now lives in model_selection
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV, RFE
from sklearn.datasets import make_classification

# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
                           n_redundant=2, n_repeated=0, n_classes=8,
                           n_clusters_per_class=1, random_state=0)

# Create the RFE object and compute a cross-validated score.
svc = SVC(kernel="linear")
# The "accuracy" scoring is proportional to the number of correct
# classifications
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2), scoring='accuracy')
rfecv.fit(X, y)
rfe = RFE(estimator=svc, step=1)
rfe.fit(X, y)

print('Original number of features is %s' % X.shape[1])
print("RFE final number of features : %d" % rfe.n_features_)
print("RFECV final number of features : %d" % rfecv.n_features_)
print('')

import numpy as np
# grid_scores_ was removed in scikit-learn 1.2; cv_results_ holds the same values
g_scores = rfecv.cv_results_["mean_test_score"]
indices = np.argsort(g_scores)[::-1]
print('Printing RFECV results:')
for f in range(X.shape[1]):
    print("%d. Number of features: %d; Grid_Score: %f"
          % (f + 1, indices[f] + 1, g_scores[indices[f]]))

This is the output I get:

Original number of features is 25
RFE final number of features : 12
RFECV final number of features : 3

Printing RFECV results:
1. Number of features: 3; Grid_Score: 0.818041
2. Number of features: 4; Grid_Score: 0.816065
3. Number of features: 5; Grid_Score: 0.816053
4. Number of features: 6; Grid_Score: 0.799107
5. Number of features: 7; Grid_Score: 0.797047
6. Number of features: 8; Grid_Score: 0.783034
7. Number of features: 10; Grid_Score: 0.783022
8. Number of features: 9; Grid_Score: 0.781992
9. Number of features: 11; Grid_Score: 0.778028
10. Number of features: 12; Grid_Score: 0.774052
11. Number of features: 14; Grid_Score: 0.762015
12. Number of features: 13; Grid_Score: 0.760075
13. Number of features: 15; Grid_Score: 0.752003
14. Number of features: 16; Grid_Score: 0.750015
15. Number of features: 18; Grid_Score: 0.750003
16. Number of features: 22; Grid_Score: 0.748039
17. Number of features: 17; Grid_Score: 0.746003
18. Number of features: 19; Grid_Score: 0.739105
19. Number of features: 20; Grid_Score: 0.739021
20. Number of features: 21; Grid_Score: 0.738003
21. Number of features: 23; Grid_Score: 0.729068
22. Number of features: 25; Grid_Score: 0.725056
23. Number of features: 24; Grid_Score: 0.725044
24. Number of features: 2; Grid_Score: 0.506952
25. Number of features: 1; Grid_Score: 0.272896

In this particular example:

  1. For RFE: the code consistently returns 12 features (roughly half of the 25 features, as expected from the documentation)
  2. For RFECV: the code returns varying numbers between 1 and 25 (not half the number of features)

It seems to me that when RFECV is used, the number of features is picked based only on the KFCV scores - i.e. the cross-validation scores override RFE's successive pruning of features.

Is this true? If one wants to use the native recursive feature elimination algorithm, does RFECV use that algorithm, or does it use a hybrid version of it?

In RFECV, is cross-validation performed on the feature subsets remaining after each pruning step? If so, how many features are kept after each pruning step in RFECV?

Best answer

In the cross-validated version, the features are re-ranked at every step and the lowest-ranked feature is dropped - this is what the documentation calls "recursive feature selection".

If you want to compare this with the vanilla version, you would need to compute the cross-validated score of the features selected by RFE. My guess is that the RFECV answer is the correct one - judging from the sharp increase in model performance as features are dropped, you probably have some highly correlated features that are hurting the model's performance.
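The comparison suggested above can be sketched as follows (assumed helper code, not part of the original post): score RFE's fixed 12-feature subset with the same CV scheme RFECV used, and compare that number against the grid-score table from the question.

```python
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.datasets import make_classification

# Same data-generating call as in the question
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
                           n_redundant=2, n_repeated=0, n_classes=8,
                           n_clusters_per_class=1, random_state=0)

svc = SVC(kernel="linear")

# RFE with defaults keeps half the features (12 of 25)
rfe = RFE(estimator=svc, step=1)
rfe.fit(X, y)
X_rfe = rfe.transform(X)  # only the 12 surviving features

# Cross-validate the pruned subset with the same 2-fold scheme
rfe_score = cross_val_score(svc, X_rfe, y,
                            cv=StratifiedKFold(2),
                            scoring="accuracy").mean()
print("CV accuracy of RFE's %d-feature subset: %.3f"
      % (rfe.n_features_, rfe_score))
```

If this score lands near the 12-feature row of the question's grid-score table and below the 3-feature row, that supports the answer's claim that RFECV's smaller subset is the better choice.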

Regarding "algorithm - Scikit-Learn RFECV: number of features picked only from grid scores", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37054995/
