
scala - Spark mllib f1score threshold


I am trying to find the threshold that gives my logistic regression the highest F1 score. However, when I run the following lines:

val f1Score = metrics.fMeasureByThreshold
f1Score.foreach { case (t, f) =>
  println(s"Threshold: $t, F-score: $f, Beta = 1")
}

some strange values appear, for example:

Threshold: 2.0939996826644833, F-score: 0.285648784961027, Beta = 1
Threshold: 2.093727854652065, F-score: 0.28604171441668574, Beta = 1
Threshold: 2.0904571465313113, F-score: 0.2864344637946838, Beta = 1
Threshold: 2.0884466833553468, F-score: 0.28682703321878583, Beta = 1
Threshold: 2.0882666552407283, F-score: 0.2872194228126431, Beta = 1
Threshold: 2.0835997800203447, F-score: 0.2876116326997939, Beta = 1
Threshold: 2.077892816382506, F-score: 0.28800366300366304, Beta = 1

How can a threshold be greater than 1? The same applies to the negative values that appear further down in the console output.

Best Answer

I had made a mistake earlier when converting the DataFrame to an RDD: instead of writing

val  predictionAndLabels =predictions.select("probability", "labelIndex").rdd.map(x => (x(0).asInstanceOf[DenseVector](1), x(1).asInstanceOf[Double]))

I had written:

val  predictionAndLabels =predictions.select("rawPredictions", "labelIndex").rdd.map(x => (x(0).asInstanceOf[DenseVector](1), x(1).asInstanceOf[Double]))

So the thresholds were computed over the raw predictions rather than the probabilities, and now everything makes sense.
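For reference, a minimal end-to-end sketch of the corrected evaluation, assuming predictions is a DataFrame with the "probability" (vector) and "labelIndex" (double) columns used above:

import org.apache.spark.ml.linalg.DenseVector
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

// Extract (probability of the positive class, label) pairs from the DataFrame
val predictionAndLabels = predictions
  .select("probability", "labelIndex")
  .rdd
  .map(x => (x(0).asInstanceOf[DenseVector](1), x(1).asInstanceOf[Double]))

val metrics = new BinaryClassificationMetrics(predictionAndLabels)

// Pick the threshold with the highest F1 score
val (bestThreshold, bestF1) = metrics.fMeasureByThreshold.collect().maxBy(_._2)
println(s"Best threshold: $bestThreshold, F1: $bestF1")

With probabilities as scores, every threshold returned by fMeasureByThreshold falls in [0, 1].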

On scala - Spark mllib f1score threshold, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45437990/
