pyspark - Custom evaluator during cross validation in Spark


My goal is to add a ranking-based evaluator to the CrossValidator function (PySpark):

cvExplicit = CrossValidator(estimator=cvSet, numFolds=8, estimatorParamMaps=paramMap, evaluator=rnkEvaluate)

I need to pass the DataFrame to be evaluated into the function, but I don't know how to do that.

class rnkEvaluate():
    def __init__(self, user_col="user", rating_col="rating", prediction_col="prediction"):
        self._user_col = user_col
        self._rating_col = rating_col
        self._prediction_col = prediction_col

    def isLargerBetter(self):
        return True

    def evaluate(self, predictions):
        denominator = predictions.groupBy().sum(self._rating_col).collect()[0][0]
        # TODO rest of the calculation ...
        return numerator / denominator

Somehow I need to pass the predictions DataFrame in at each fold iteration, but I haven't been able to manage it.
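
For reference, a minimal sketch of the usual way to make such an evaluator work with the stock CrossValidator: subclass pyspark.ml.evaluation.Evaluator and implement _evaluate, and CrossValidator will hand each fold's predictions DataFrame to it automatically. The class name RankingEvaluator and the ratio metric below are placeholders, not part of the original post:

from pyspark.ml.evaluation import Evaluator

class RankingEvaluator(Evaluator):  # hypothetical name, for illustration only
    def __init__(self, user_col="user", rating_col="rating", prediction_col="prediction"):
        super(RankingEvaluator, self).__init__()
        self._user_col = user_col
        self._rating_col = rating_col
        self._prediction_col = prediction_col

    def isLargerBetter(self):
        return True

    def _evaluate(self, dataset):
        # `dataset` is the per-fold predictions DataFrame that CrossValidator
        # passes in when it calls evaluate(); no manual passing is needed.
        sums = dataset.groupBy().sum(self._rating_col, self._prediction_col).collect()[0]
        # Placeholder metric: summed predictions over summed ratings.
        return sums[1] / sums[0]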

Best Answer

I have solved this problem; here is the code:

import numpy as np

from pyspark.ml.tuning import CrossValidator, CrossValidatorModel
from pyspark.sql.functions import rand

result = []

def writeResult(result):
    # Append one result line to the log file.
    resfile = open('executions/results.txt', 'a')
    resfile.writelines("\n")
    resfile.writelines(result)
    resfile.close()

class CrossValidatorVerbose(CrossValidator):
    # CrossValidator subclass that prints and logs the metric of every
    # param map on every fold, instead of only the final averages.

    def _fit(self, dataset):
        est = self.getOrDefault(self.estimator)
        epm = self.getOrDefault(self.estimatorParamMaps)
        numModels = len(epm)

        eva = self.getOrDefault(self.evaluator)
        metricName = eva.getMetricName()

        nFolds = self.getOrDefault(self.numFolds)
        seed = self.getOrDefault(self.seed)
        h = 1.0 / nFolds

        # Assign each row a random number used to slice the folds.
        randCol = self.uid + "_rand"
        df = dataset.select("*", rand(seed).alias(randCol))
        metrics = [0.0] * numModels

        for i in range(nFolds):
            foldNum = i + 1
            print("Comparing models on fold %d" % foldNum)

            validateLB = i * h
            validateUB = (i + 1) * h
            condition = (df[randCol] >= validateLB) & (df[randCol] < validateUB)
            validation = df.filter(condition)
            train = df.filter(~condition)

            for j in range(numModels):
                paramMap = epm[j]
                model = est.fit(train, paramMap)

                predictions = model.transform(validation, paramMap)
                # Matches the custom evaluator's signature; `spark` must be in scope.
                metric = eva.evaluate(spark=spark, predictions=predictions)
                metrics[j] += metric

                avgSoFar = metrics[j] / foldNum

                res = ("params: %s\t%s: %f\tavg: %f" % (
                    {param.name: val for (param, val) in paramMap.items()},
                    metricName, metric, avgSoFar))
                writeResult(res)
                result.append(res)
                print(res)

        if eva.isLargerBetter():
            bestIndex = np.argmax(metrics)
        else:
            bestIndex = np.argmin(metrics)

        # Refit on the full dataset with the best param map.
        bestParams = epm[bestIndex]
        bestModel = est.fit(dataset, bestParams)
        avgMetrics = [m / nFolds for m in metrics]
        bestAvg = avgMetrics[bestIndex]
        print("Best model:\nparams: %s\t%s: %f" % (
            {param.name: val for (param, val) in bestParams.items()},
            metricName, bestAvg))

        return self._copyValues(CrossValidatorModel(bestModel, avgMetrics))


evaluator = RankUserWeighted("user", "rating", "prediction")

cvImplicit = CrossValidatorVerbose(estimator=customImplicit, numFolds=8,
                                   estimatorParamMaps=paramMap,
                                   evaluator=evaluator)
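
Usage is then the same as with the stock CrossValidator; a minimal sketch, assuming a training DataFrame named trainDF:

cvModel = cvImplicit.fit(trainDF)   # runs the verbose fold loop above
print(cvModel.bestModel)            # model refit on the full dataset
print(cvModel.avgMetrics)           # average metric per param map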

Regarding "pyspark - Custom evaluator during cross validation in Spark", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44249089/
