
python - Spark raises OutOfMemoryError


When I run my Spark Python code as follows:

import pyspark

conf = (pyspark.SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "512m"))
sc = pyspark.SparkContext(conf=conf)  # start the SparkContext with this conf
data = sc.textFile('/Users/tsangbosco/Downloads/transactions')
data = data.flatMap(lambda x: x.split()).take(all)

The file is about 20 GB and my machine has 8 GB of RAM. When I run the program in standalone mode, it raises an OutOfMemoryError:

Exception in thread "Local computation of job 12" java.lang.OutOfMemoryError: Java heap space
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:131)
at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:119)
at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:112)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.api.python.PythonRDD$$anon$1.foreach(PythonRDD.scala:112)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at org.apache.spark.api.python.PythonRDD$$anon$1.to(PythonRDD.scala:112)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at org.apache.spark.api.python.PythonRDD$$anon$1.toBuffer(PythonRDD.scala:112)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at org.apache.spark.api.python.PythonRDD$$anon$1.toArray(PythonRDD.scala:112)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$1.apply(JavaRDDLike.scala:259)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$1.apply(JavaRDDLike.scala:259)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.scheduler.DAGScheduler.runLocallyWithinThread(DAGScheduler.scala:681)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:666)

Is Spark unable to handle files larger than my RAM? Could you tell me how to fix this?

Best Answer

Spark can handle this case, but you are using take, which forces Spark to fetch all of the data into a single array in the driver's memory. In such a case you should write the results out to files instead, for example with saveAsTextFile.
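A minimal sketch of what that could look like, reusing the same pipeline from the question; the output directory transactions_words is just a placeholder, not a path from the original post:

import pyspark

conf = (pyspark.SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "512m"))
sc = pyspark.SparkContext(conf=conf)

data = sc.textFile('/Users/tsangbosco/Downloads/transactions')
words = data.flatMap(lambda x: x.split())
# saveAsTextFile writes each partition to disk as it is computed,
# so the full result never has to fit in the driver's memory
words.saveAsTextFile('/Users/tsangbosco/Downloads/transactions_words')  # placeholder output dir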

If you are only interested in looking at some of the data, you can use sample or takeSample.
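For example, assuming the words RDD from the sketch above (the fraction, sample size, and seed are arbitrary values chosen for illustration):

# keep roughly 1% of the elements as a new RDD, without collecting everything
subset = words.sample(withReplacement=False, fraction=0.01, seed=42)
print(subset.take(10))

# or pull a fixed-size random sample back to the driver
print(words.takeSample(False, 20, 42))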

Regarding "python - Spark raises OutOfMemoryError", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/23601607/
