
r - Logistic regression confusion matrix

Reposted. Author: 行者123. Updated: 2023-11-30 09:44:47

I am trying to perform logistic regression on the dataset provided here, using 5-fold cross-validation.

My goal is to predict the Classification column of the dataset, which takes the value 1 (no cancer) or 2 (cancer).

The full code is as follows:

    library(ISLR)
    library(boot)
    dataCancer <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")

    # Randomly shuffle the data
    dataCancer <- dataCancer[sample(nrow(dataCancer)), ]
    # Create 5 equally sized folds
    folds <- cut(seq(1, nrow(dataCancer)), breaks = 5, labels = FALSE)
    # Perform 5-fold cross validation
    for (i in 1:5) {
        # Segment your data by fold using the which() function
        testIndexes <- which(folds == i)
        testData <- dataCancer[testIndexes, ]
        trainData <- dataCancer[-testIndexes, ]
        # Use the test and train data partitions however you desire...

        classification_model = glm(as.factor(Classification) ~ ., data = trainData, family = binomial)
        summary(classification_model)

        # Use the fitted model to do predictions for the test data
        model_pred_probs = predict(classification_model, testData, type = "response")
        model_predict_classification = rep(0, length(testData))
        model_predict_classification[model_pred_probs > 0.5] = 1

        # Create the confusion matrix and compute the misclassification rate
        table(model_predict_classification, testData)
        mean(model_predict_classification != testData)
    }

Specifically, I would like some help with the last part:

    table(model_predict_classification, testData)
    mean(model_predict_classification != testData)

I get the following error:

 Error in table(model_predict_classification, testData) : all arguments must have the same length

I don't quite understand how to use the confusion matrix here.

I want 5 misclassification rates, one per fold. trainData and testData have been cut into 5 segments, so each test segment should have the same length as model_predict_classification.
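For context, the error arises because testData is the whole data frame (116 rows x 10 columns), not the label vector, so its length does not match the prediction vector. A minimal sketch of the intended comparison, assuming the 1/2 labels are recoded to 0/1 to line up with the 0/1 predictions:

    # Compare predictions against the label column, not the whole data frame.
    actual    <- testData$Classification - 1               # recode 1/2 -> 0/1
    predicted <- as.integer(model_pred_probs > 0.5)        # one prediction per test row

    table(predicted, actual)        # 2x2 confusion matrix for this fold
    mean(predicted != actual)       # misclassification rate for this fold

Note that rep(0, length(testData)) in the original loop has the same problem: length() of a data frame is its column count, so nrow(testData) is what matches the number of predictions.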

Thanks for your help.

Best Answer

Here is a solution that uses the caret package to run 5-fold cross-validation on the cancer data after splitting it into training and test datasets. Confusion matrices are generated for both the cross-validated training data and the held-out test data.

caret::train() reports the average accuracy across the 5 hold-out folds. The results for each individual fold can be obtained by extracting them from the fitted model object.
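Per-fold results live in the resample element of the fitted train object; a minimal sketch, assuming glmModel was fit with the trainControl shown below:

    # Accuracy and Kappa for each of the 5 held-out folds
    glmModel$resample

    # Misclassification rate per fold is 1 - Accuracy
    1 - glmModel$resample$Accuracy

This gives the 5 fold-level misclassification rates the question asks for, without writing the cross-validation loop by hand.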

    library(caret)
    data <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")
    # set classification as factor, and recode to
    # 0 = no cancer, 1 = cancer
    data$Classification <- as.factor((data$Classification - 1))
    # split data into training and test, based on values of dependent variable
    trainIndex <- createDataPartition(data$Classification, p = .75, list = FALSE)
    training <- data[trainIndex, ]
    testing <- data[-trainIndex, ]
    trCntl <- trainControl(method = "CV", number = 5)
    glmModel <- train(Classification ~ ., data = training, trControl = trCntl, method = "glm", family = "binomial")
    # print the model info
    summary(glmModel)
    glmModel
    confusionMatrix(glmModel)
    # generate predictions on hold back data
    trainPredicted <- predict(glmModel, testing)
    # generate confusion matrix for hold back data
    confusionMatrix(trainPredicted, reference = testing$Classification)

...and the output:

    > # print the model info
    > summary(glmModel)

    Call: NULL

    Deviance Residuals:
        Min       1Q   Median       3Q      Max
    -2.1542  -0.8358   0.2605   0.8260   2.1009

    Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
    (Intercept) -4.4039248  3.9159157  -1.125   0.2607
    Age         -0.0190241  0.0177119  -1.074   0.2828
    BMI         -0.1257962  0.0749341  -1.679   0.0932 .
    Glucose      0.0912229  0.0389587   2.342   0.0192 *
    Insulin      0.0917095  0.2889870   0.317   0.7510
    HOMA        -0.1820392  1.2139114  -0.150   0.8808
    Leptin      -0.0207606  0.0195192  -1.064   0.2875
    Adiponectin -0.0158448  0.0401506  -0.395   0.6931
    Resistin     0.0419178  0.0255536   1.640   0.1009
    MCP.1        0.0004672  0.0009093   0.514   0.6074
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance: 119.675  on 86  degrees of freedom
    Residual deviance:  89.804  on 77  degrees of freedom
    AIC: 109.8

    Number of Fisher Scoring iterations: 7

    > glmModel
    Generalized Linear Model

    87 samples
     9 predictor
     2 classes: '0', '1'

    No pre-processing
    Resampling: Cross-Validated (5 fold)
    Summary of sample sizes: 70, 69, 70, 69, 70
    Resampling results:

      Accuracy   Kappa
      0.7143791  0.4356231

    > confusionMatrix(glmModel)
    Cross-Validated (5 fold) Confusion Matrix

    (entries are percentual average cell counts across resamples)

              Reference
    Prediction    0    1
             0 33.3 17.2
             1 11.5 37.9

     Accuracy (average) : 0.7126

    > # generate predictions on hold back data
    > trainPredicted <- predict(glmModel,testing)
    > # generate confusion matrix for hold back data
    > confusionMatrix(trainPredicted,reference=testing$Classification)
    Confusion Matrix and Statistics

              Reference
    Prediction  0  1
             0 11  2
             1  2 14

                   Accuracy : 0.8621
                     95% CI : (0.6834, 0.9611)
        No Information Rate : 0.5517
        P-Value [Acc > NIR] : 0.0004078

                      Kappa : 0.7212
     Mcnemar's Test P-Value : 1.0000000

                Sensitivity : 0.8462
                Specificity : 0.8750
             Pos Pred Value : 0.8462
             Neg Pred Value : 0.8750
                 Prevalence : 0.4483
             Detection Rate : 0.3793
       Detection Prevalence : 0.4483
          Balanced Accuracy : 0.8606

           'Positive' Class : 0

Regarding r - Logistic regression confusion matrix, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53963781/
