
python - Combining Spark Streaming + MLlib

Reposted · Author: 太空狗 · Updated: 2023-10-30 01:13:10

I am trying to use a random forest model to classify examples arriving on a stream, but I can't seem to apply the model to them. Here is the pyspark code I'm using:

sc = SparkContext(appName="App")

model = RandomForest.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={}, impurity='gini', numTrees=150)


ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream(hostname, int(port))

parsedLines = lines.map(parse)
parsedLines.pprint()

predictions = parsedLines.map(lambda event: model.predict(event.features))

The error returned when it runs on the cluster:

  Error : "It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

Is there a way to use a model trained on static data to classify streamed examples?

Thank you all, I really appreciate it!

Best answer

Yes, you can use a model trained on static data. The problem you've run into has nothing to do with streaming: you simply cannot use a JVM-based model inside an action or transformation (see How to use Java/Scala function from an action or a transformation? for an explanation of why). Instead, apply the predict method to a complete RDD, for example using transform on the DStream:

from pyspark.mllib.tree import RandomForest
from pyspark.mllib.util import MLUtils
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from operator import attrgetter


sc = SparkContext("local[2]", "foo")
ssc = StreamingContext(sc, 1)

data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
trainingData, testData = data.randomSplit([0.7, 0.3])

model = RandomForest.trainClassifier(
    trainingData, numClasses=2, categoricalFeaturesInfo={}, numTrees=3
)

(ssc
    .queueStream([testData])
    # Extract features
    .map(attrgetter("features"))
    # Predict on the whole RDD at once
    .transform(lambda _, rdd: model.predict(rdd))
    .pprint())

ssc.start()
ssc.awaitTerminationOrTimeout(10)
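The key point, calling predict once per batch instead of once per element, can be illustrated without a Spark cluster. The sketch below uses `BatchModel`, a hypothetical pure-Python stand-in for the MLlib model (it is not part of Spark); like `RandomForestModel.predict`, it accepts a whole batch of feature vectors at once:

```python
# Hypothetical stand-in for a batch-oriented model such as
# pyspark.mllib.tree.RandomForestModel (NOT a real Spark class).
class BatchModel:
    def predict(self, batch):
        # Toy rule: positive feature sum -> class 1, otherwise class 0.
        return [1 if sum(features) > 0 else 0 for features in batch]

model = BatchModel()
events = [[0.5, -0.2], [-1.0, -1.0], [2.0, 0.1]]

# One predict call per batch mirrors
# DStream.transform(lambda _, rdd: model.predict(rdd)).
# A per-element map(lambda e: model.predict(e)) would have to ship the
# model into each worker closure, which is exactly what fails for
# JVM-backed models (SPARK-5063).
predictions = model.predict(events)
print(predictions)  # -> [1, 0, 1]
```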

Regarding python - Combining Spark Streaming + MLlib, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36838024/
