
scala - Hadoop FileAlreadyExistsException when creating a new SparkContext


Even though the Spark program had run successfully dozens of times before, after the latest sbt package the latest run fails with a FileAlreadyExistsException when starting the SparkContext:

Note: I had run sbt clean package assembly first.

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/FileAlreadyExistsException
at org.apache.hadoop.mapreduce.util.ConfigUtil.addDeprecatedKeys(ConfigUtil.java:54)
at org.apache.hadoop.mapreduce.util.ConfigUtil.loadResources(ConfigUtil.java:42)
at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:119)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1844)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:91)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:257)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:234)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:749)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:734)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2154)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2154)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2154)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:301)
at mllib.perf.TestRunner$.main(TestRunner.scala:27)
where TestRunner.scala:27 is:
val sc = new SparkContext(new SparkConf().setAppName("TestRunner: " + testName))

Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.FileAlreadyExistsException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)

Creation of the Hadoop temporary directories and the input/output directories is entirely under Spark's control, not performed by user code. So how can this be fixed?
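As a quick sanity check (a diagnostic sketch, not from the original post), one can try to load the class the trace reports as missing and print which jar, if any, it resolves from:

// Diagnostic sketch: attempt to load the Hadoop class named in the
// NoClassDefFoundError and report where it was loaded from.
object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    try {
      val cls = Class.forName("org.apache.hadoop.mapred.FileAlreadyExistsException")
      // getCodeSource can be null for bootstrap classes, so guard with Option.
      val src = Option(cls.getProtectionDomain.getCodeSource)
        .map(_.getLocation.toString).getOrElse("unknown source")
      println(s"Found ${cls.getName} in $src")
    } catch {
      case e: ClassNotFoundException =>
        println(s"Not on classpath: ${e.getMessage}")
    }
  }
}

Running this against the same classpath as the failing job (for example, java -cp with the assembly jar) would show whether the Hadoop client classes made it into the build at all.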

Best Answer

Looking at the error, I'd guess you need a clean build, package, and assembly, since it cannot even find the FileAlreadyExistsException class.
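Concretely, that means rerunning sbt clean package assembly as in the question. Since the missing class, org.apache.hadoop.mapred.FileAlreadyExistsException, ships with the Hadoop client libraries, it is also worth confirming those libraries end up on the classpath. A minimal build.sbt sketch along those lines (the project name and version numbers are assumptions, not taken from the original post):

// Minimal build.sbt sketch; versions are illustrative assumptions.
name := "mllib-perf"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // Spark core; "provided" assumes the cluster supplies Spark at runtime.
  "org.apache.spark" %% "spark-core" % "1.3.1" % "provided",
  // hadoop-client transitively includes hadoop-mapreduce-client-core,
  // which contains org.apache.hadoop.mapred.FileAlreadyExistsException.
  "org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided"
)

If the dependencies are marked "provided", the Hadoop classes must come from the Spark distribution on the cluster; a stale assembly built before a dependency change is exactly the kind of state a clean rebuild resolves.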

Regarding scala - Hadoop FileAlreadyExistsException when creating a new SparkContext, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30401761/
