
hadoop - Spark dataframe insert into Hive table fails because some staging part files are created with username mapr


I am inserting into a Hive table from a Spark dataframe. Even though the application is submitted as user "myuser", some of the Hive staging part files are created with the username "mapr". As a result, the final write to the Hive table fails with an access-denied error while renaming the temporary files.
Command:

resultDf.write.mode("append").insertInto(insTable)



Error:

Exception in thread "main" org.apache.hadoop.security.AccessControlException: User myuser(user id 2547) has been denied access to rename /ded /data/db/da_mydb.db/managed/da_primary/.hive-staging_hive_2017-12-27_13-25-22_586_3120774356819313410-1/-ext-10000/_temporary/0/task_201712271325_0080_m_000000/part-00000 to /ded /data/db/da_mydb.db/managed/da_primary/.hive-staging_hive_2017-12-27_13-25-22_586_3120774356819313410-1/-ext-10000/part-00000
    at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1112)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:461)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:475)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:392)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:364)
    at org.apache.hadoop.mapred.FileOutputCommitter.commitJob(FileOutputCommitter.java:136)
    at org.apache.spark.sql.hive.SparkHiveWriterContainer.commitJob(hiveWriterContainers.scala:108)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.saveAsHiveFile(InsertIntoHiveTable.scala:85)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:201)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:276)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189)
    at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:166)
    at com.iri.suppChain.RunKeying$.execXForm(RunKeying.scala:74)
    at com.iri.suppChain.RunKeying$$anonfun$1.apply(RunKeying.scala:36)
    at com.iri.suppChain.RunKeying$$anonfun$1.apply(RunKeying.scala:36)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at com.iri.suppChain.RunKeying$delayedInit$body.apply(RunKeying.scala:36)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
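A quick way to confirm the ownership mismatch (my own diagnostic sketch, not from the original post) is to list the owners of the entries under the table directory named in the error, for example with the Hadoop FileSystem API:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Path copied from the error message above; adjust to your table location.
    val staging = new Path("/ded /data/db/da_mydb.db/managed/da_primary")
    val fs = FileSystem.get(new Configuration())
    // Print owner and path for each entry, so files created as "mapr" stand out.
    fs.listStatus(staging).foreach(st => println(s"${st.getOwner}  ${st.getPath}"))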



Here are the environment details:
  • Spark 1.6.1
  • Distribution: MapR

Best Answer

    Please try the following and report back:

    // insTable holds the target table name, as in the question.
    resultDf.registerTempTable("results_tbl")
    sqlContext.sql(s"INSERT INTO TABLE $insTable SELECT * FROM results_tbl")
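    If routing the insert through a SQL statement does not change the behavior, a common workaround for staging-permission problems (a hedged suggestion of mine, not part of the accepted answer) is to relocate Hive's staging directory to a path owned by the submitting user, so the final rename only touches files that user owns:

    // Assumes sqlContext is a HiveContext (Spark 1.6) and that
    // /user/myuser/tmp is writable by "myuser" (an assumed path).
    sqlContext.setConf("hive.exec.stagingdir", "/user/myuser/tmp/.hive-staging")
    resultDf.write.mode("append").insertInto(insTable)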

    Regarding "hadoop - Spark dataframe insert into Hive table fails because some staging part files are created with username mapr", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48021186/
