scala - System memory 259522560 must be at least 4.718592E8. Please use a larger heap size

I get this error when I run my Spark script with Spark 1.6. The same script works fine with Spark 1.5.

  • Java version: 1.8
  • Scala version: 2.11.7

I have tried changing the system environment variable JAVA_OPTS=-Xms128m -Xmx512m several times, with different Xms and Xmx values, but it changed nothing...

I also tried modifying IntelliJ's memory settings:

  • Help / Change Memory Settings...
  • File / Settings / Scala Compiler...

Nothing had any effect.
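
A quick way to check whether any of these settings actually reach the driver JVM is to print the heap it was granted. The following is a minimal diagnostic sketch, not from the original question; Spark 1.6's UnifiedMemoryManager reads Runtime.getRuntime.maxMemory, so a result near 259522560 would mean that none of the -Xmx settings took effect for this process:

// Minimal diagnostic sketch: print the max heap granted to the current JVM.
// Spark 1.6's memory check reads this same value, so a result near
// 259522560 bytes (~247 MB) means the -Xmx flag never reached this process.
object HeapCheck {
  def main(args: Array[String]): Unit = {
    val maxHeap = Runtime.getRuntime.maxMemory
    println(s"Max heap: $maxHeap bytes (~${maxHeap / (1024 * 1024)} MB)")
  }
}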

There are several user accounts on this machine; Java is installed at the root of the machine, while IntelliJ is installed in one user's folder. Could that make a difference?

The error log is as follows:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/30 17:06:54 INFO SparkContext: Running Spark version 1.6.0
20/04/30 17:06:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/30 17:06:55 INFO SecurityManager: Changing view acls to:
20/04/30 17:06:55 INFO SecurityManager: Changing modify acls to:
20/04/30 17:06:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(); users with modify permissions: Set()
20/04/30 17:06:56 INFO Utils: Successfully started service 'sparkDriver' on port 57698.
20/04/30 17:06:57 INFO Slf4jLogger: Slf4jLogger started
20/04/30 17:06:57 INFO Remoting: Starting remoting
20/04/30 17:06:57 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.1.5.175:57711]
20/04/30 17:06:57 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57711.
20/04/30 17:06:57 INFO SparkEnv: Registering MapOutputTracker
20/04/30 17:06:57 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 259522560 must be at least 4.718592E8. Please use a larger heap size.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:193)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:175)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at batch.BatchJob$.main(BatchJob.scala:23)
at batch.BatchJob.main(BatchJob.scala)
20/04/30 17:06:57 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: System memory 259522560 must be at least 4.718592E8. Please use a larger heap size.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:193)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:175)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:354)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
at batch.BatchJob$.main(BatchJob.scala:23)
at batch.BatchJob.main(BatchJob.scala)

And the beginning of the code:

package batch

import java.lang.management.ManagementFactory
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.{SaveMode, SQLContext}


object BatchJob {
  def main(args: Array[String]): Unit = {

    // get spark configuration
    val conf = new SparkConf()
      .setAppName("Lambda with Spark")

    // Check if running from IDE
    if (ManagementFactory.getRuntimeMXBean.getInputArguments.toString.contains("IntelliJ IDEA")) {
      System.setProperty("hadoop.home.dir", "C:\\Libraries\\WinUtils") // required for winutils
      conf.setMaster("local[*]")
    }

    // setup spark context (BatchJob.scala:23 in the stack trace points here)
    val sc = new SparkContext(conf)
    implicit val sqlContext = new SQLContext(sc)

    ...

Best Answer

I finally found the solution:

Add -Xms2g -Xmx4g to the VM options directly in the IntelliJ Scala Console.

It was the only thing that worked for me.
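
For context, 4.718592E8 bytes is 450 MB: Spark 1.6 reserves 300 MB of system memory, and UnifiedMemoryManager.getMaxMemory requires the heap to be at least 1.5 times that, while 259522560 bytes is only about 247 MB. If the IDE's VM options cannot be changed, a commonly suggested alternative is to override the value Spark reads via the spark.testing.memory setting, which it consults before falling back to Runtime.getRuntime.maxMemory. The following is a sketch under that assumption, not part of the accepted answer; the object name HeapSizeWorkaround is hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: raise the value Spark 1.6's memory check reads by setting
// spark.testing.memory. It must be at least 471859200 bytes (450 MB);
// 512 MB is used here.
object HeapSizeWorkaround { // hypothetical name, for illustration only
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Lambda with Spark")
      .setMaster("local[*]")
      .set("spark.testing.memory", "536870912") // 512 MB > 4.718592E8 bytes

    val sc = new SparkContext(conf)
    // ... job logic as in BatchJob above ...
    sc.stop()
  }
}

Note that spark.testing.memory is an internal testing knob rather than a real heap increase, so raising -Xmx as in the accepted answer remains the cleaner fix.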

Regarding "scala - System memory 259522560 must be at least 4.718592E8. Please use a larger heap size", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61526825/
