
apache-spark - Spark master goes down with an out-of-memory exception


I have set up 1 Spark master node and 2 slave nodes on AWS, each with 8 GB of memory. I have a Spark job scheduled to run every hour: each hour it reads records from a Cassandra database and processes them in Spark, roughly 5000 records per hour. On one of these runs my Spark master crashed with:

"15/12/20 11:04:45 ERROR ActorSystemImpl: Uncaught fatal error from thread [sparkMaster-akka.actor.default-dispatcher-4436] shutting down ActorSystem [sparkMaster]
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.math.BigInt$.apply(BigInt.scala:82)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:16)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3066)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2161)
at org.json4s.jackson.JsonMethods$class.parse(JsonMethods.scala:19)
at org.json4s.jackson.JsonMethods$.parse(JsonMethods.scala:44)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:58)
at org.apache.spark.deploy.master.Master.rebuildSparkUI(Master.scala:793)
at org.apache.spark.deploy.master.Master.removeApplication(Master.scala:734)
at org.apache.spark.deploy.master.Master.org$apache$spark$deploy$master$Master$$finishApplication(Master.scala:712)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1.applyOrElse(Master.scala:445)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.deploy.master.Master.aroundReceive(Master.scala:52)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
"

Could you please tell me why the Spark master crashes with an out-of-memory error? My Spark settings are _executorMemory = 6G and _driverMemory = 6G, and my code creates 8 partitions. Why does the master go down with out of memory?
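For reference, settings like _executorMemory = 6G and _driverMemory = 6G are typically fed into a SparkConf before the context is created. A minimal sketch of what that presumably looks like (the property keys are standard Spark settings; the app name and variable mapping are assumptions, since that part of the asker's code is not shown):

import org.apache.spark.SparkConf

//sketch: how the 6 GB settings are presumably applied (app name is hypothetical)
val _conf = new SparkConf()
  .setAppName("hourly-cassandra-job")   //hypothetical app name
  .set("spark.executor.memory", "6g")   //corresponds to _executorMemory
  .set("spark.driver.memory", "6g")     //corresponds to _driverMemory

Note that spark.driver.memory only takes effect this way if it is set before the driver JVM starts; it is often passed to spark-submit instead.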

Here is the code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

//create the Spark context and SQL context
_sparkContext = new SparkContext(_conf)
_sqlContext = new SQLContext(_sparkContext)

//load the cassandra table
val tabledf = _sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "events", "keyspace" -> "sams"))
  .load

//restrict to the current one-hour window
val whereQuery = "addedtime >= '" + _from + "' AND addedtime < '" + _to + "'"
helpers.printnextLine("Where query to run on Cassandra : " + whereQuery)
val rdd = tabledf.filter(whereQuery)

rdd.registerTempTable("rdd")

val selectQuery = "lower(brandname) as brandname, lower(appname) as appname, lower(packname) as packname, lower(assetname) as assetname, eventtime, lower(eventname) as eventname, lower(client.OSName) as platform, lower(eventorigin) as eventorigin, meta.price as price"
val modefiedDF = _sqlContext.sql("select " + selectQuery + " from rdd")

//cache the DataFrame
modefiedDF.cache

//perform the group-by operation (the original snippet referenced an
//undefined `filterrdd`; `modefiedDF` is what is in scope here)
val grprdd = modefiedDF.groupBy("brandname", "appname", "packname", "eventname", "platform", "eventorigin", "price").count()

//write every row to a SQL Server table; `con` (a java.sql.Connection)
//and `insertQuery` are built elsewhere in the original code
grprdd.foreachPartition { iter =>
  try {
    iter.foreach { element =>
      val statement = con.createStatement()
      statement.executeUpdate(insertQuery)
    }
  } finally {
    if (con != null)
      con.close
  }
}

//clear the cache
_sqlContext.clearCache()
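As an aside on the write path: creating a new Statement and issuing a separate INSERT for every row is expensive. A more common pattern is one connection and one parameterized PreparedStatement per partition, with batching. Below is a sketch under stated assumptions: the JDBC URL, credentials, and the target table event_counts with its column names are all hypothetical, and the column types are assumed to mirror the groupBy keys plus the appended count:

grprdd.foreachPartition { iter =>
  //one connection and one prepared statement per partition (URL and credentials hypothetical)
  val con = java.sql.DriverManager.getConnection(
    "jdbc:sqlserver://dbhost:1433;databaseName=sams", "user", "password")
  val ps = con.prepareStatement(
    "INSERT INTO event_counts (brandname, appname, packname, eventname, platform, eventorigin, price, cnt) " +
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)")
  try {
    iter.foreach { row =>
      ps.setString(1, row.getString(0))   //brandname
      ps.setString(2, row.getString(1))   //appname
      ps.setString(3, row.getString(2))   //packname
      ps.setString(4, row.getString(3))   //eventname
      ps.setString(5, row.getString(4))   //platform
      ps.setString(6, row.getString(5))   //eventorigin
      ps.setDouble(7, row.getDouble(6))   //price (numeric type assumed)
      ps.setLong(8, row.getLong(7))       //count appended by groupBy().count()
      ps.addBatch()
    }
    ps.executeBatch()
  } finally {
    con.close()
  }
}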

Best Answer

The problem is probably that you are asking the Spark master to use 6 GB and the Spark executor to use another 6 GB (12 GB in total), while the system only has 8 GB of RAM available.

Out of those 8 GB you should also leave some memory for operating system processes (say 1 GB). So the total RAM available to Spark (master and worker combined) is only about 7 GB.
Set executorMemory and driverMemory accordingly.
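Concretely, on 8 GB machines the split suggested above might look like this (the numbers are illustrative, not prescriptive):

import org.apache.spark.SparkConf

//leave roughly 1 GB per machine for the OS; split the rest between driver and executors
val _conf = new SparkConf()
  .set("spark.driver.memory", "2g")    //was 6g
  .set("spark.executor.memory", "4g")  //was 6g

The same values can be passed on the command line with spark-submit --driver-memory 2g --executor-memory 4g.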

Regarding apache-spark - Spark master goes down with an out-of-memory exception, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/34387732/
