
shell - Why does running a SparkR job through Oozie fail with a permission denied error?

Reposted · Author: 可可西里 · Updated: 2023-11-01 14:58:05

I am running SparkR through a shell script with Oozie. When I run the job, I hit a permission error:

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
ERROR Utils: Uncaught exception in thread delete Spark local dirs
java.lang.NullPointerException
Exception in thread "delete Spark local dirs" java.lang.NullPointerException

Full log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mntc/yarn/nm/filecache/2452/sparkr-assembly-0.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/09/08 01:39:11 INFO SparkContext: Running Spark version 1.3.0
15/09/08 01:39:13 INFO SecurityManager: Changing view acls to: yarn
15/09/08 01:39:13 INFO SecurityManager: Changing modify acls to: yarn
15/09/08 01:39:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn); users with modify permissions: Set(yarn)
15/09/08 01:39:13 INFO Slf4jLogger: Slf4jLogger started
15/09/08 01:39:13 INFO Remoting: Starting remoting
15/09/08 01:39:14 INFO Remoting: Remoting started; listening on addresses :
15/09/08 01:39:14 INFO DiskBlockManager: Created local directory at /mnt/yarn/nm/usercache/karun/appcache/application_1437539731669_0786/blockmgr-1760ec19-b1de-4bcc-9100-b2c1364b54c8
15/09/08 01:39:14 INFO DiskBlockManager: Created local directory at /mntc/yarn/nm/usercache/karun/appcache/application_1437539731669_0786/blockmgr-f57c89eb-4a4b-4fd5-9796-ca3c3a7f2c6f
15/09/08 01:39:14 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/09/08 01:39:15 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/09/08 01:39:16 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6610 MB per container)
15/09/08 01:39:16 INFO Client: Preparing resources for our AM container
createSparkContext on edu.berkeley.cs.amplab.sparkr.RRDD failed with java.lang.reflect.InvocationTargetException
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
Exception in thread "delete Spark local dirs" java.lang.NullPointerException
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

I don't know how to resolve this. Any help would be appreciated.

Best Answer

The problem is most likely that no user named "yarn" exists in HDFS, so it has no writable home directory under /user. There are two possible solutions.

  1. Create such a user in HDFS and grant it access to the resources it needs.
  2. Simpler: run the job as the hdfs user (or any user you already have on HDFS) by setting user.name=hdfs in the Oozie properties file. See the Oozie docs.
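Both fixes above can be sketched as shell commands. This is a minimal sketch under assumptions: /user/yarn is the conventional HDFS home directory layout, "hdfs" is assumed to be your cluster's HDFS superuser, and job.properties is a hypothetical name for your Oozie job's properties file; adjust all of these to your cluster.

```shell
# Option 1 (assumption: 'hdfs' is the HDFS superuser on this cluster):
# give the 'yarn' user a home directory in HDFS so it can write under /user
sudo -u hdfs hdfs dfs -mkdir -p /user/yarn
sudo -u hdfs hdfs dfs -chown yarn:yarn /user/yarn

# Option 2: run the Oozie job as a user that already exists in HDFS,
# by adding user.name to the job's properties file (hypothetical file name)
echo "user.name=hdfs" >> job.properties
oozie job -oozie http://localhost:11000/oozie -config job.properties -run
```

With option 2, Oozie submits the launcher as the named user, so the Spark local-dir cleanup and the /user writes both happen under an account that has an HDFS home directory, which avoids the AccessControlException above.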

Regarding "shell - Why does running a SparkR job through Oozie fail with a permission denied error?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/32450208/
