I want to change the user that is used for HDFS access, as opposed to the user the JVM runs as, because I get this error:
Stream spark: org.apache.hadoop.security.AccessControlException: Permission denied: user=www, access=WRITE, inode="/user/www/.sparkStaging/application_1460635834146_0012":hdfs:hdfs:drwxr-xr-x
I want to change the user "www" to another user, "joe", who has write permission. (There is no "/user/www" folder, but there is a "/user/joe". The mode drwxr-xr-x owned by hdfs:hdfs means only the hdfs user can write to that directory.)
Here is my Java code:
LOGGER.debug("start submitSparkJob");
Process spark;
SparkLauncher sl;
try {
    sl = new SparkLauncher()
            .setAppName(argsMap.get(SparkParametersEnum.NAME))
            .setSparkHome(argsMap.get(SparkParametersEnum.SPARK_HOME))
            .setAppResource(argsMap.get(SparkParametersEnum.JAR))
            .setMainClass(argsMap.get(SparkParametersEnum.CLASS))
            .addAppArgs(argsMap.get(SparkParametersEnum.ARG))
            .setMaster(argsMap.get(SparkParametersEnum.MASTER))
            .setDeployMode(argsMap.get(SparkParametersEnum.DEPLOY_MODE))
            .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
            .setVerbose(true);
    if (argsMap.containsKey(SparkParametersEnum.STAGING_DIR)) {
        sl.setConf("spark.yarn.stagingDir", argsMap.get(SparkParametersEnum.STAGING_DIR));
    }
    if (argsMap.containsKey(SparkParametersEnum.ACCESS_NAMENODES)) {
        sl.setConf("spark.yarn.access.namenodes", argsMap.get(SparkParametersEnum.ACCESS_NAMENODES));
    }
    if (argsMap.containsKey(SparkParametersEnum.PRINCIPAL)) {
        sl.setConf("spark.yarn.principal", argsMap.get(SparkParametersEnum.PRINCIPAL));
    }
    if (argsMap.containsKey(SparkParametersEnum.DIST_JAR)) {
        sl.setConf("spark.yarn.dist.jars", argsMap.get(SparkParametersEnum.DIST_JAR));
    }
    LOGGER.debug("SparkLauncher set");
    spark = sl.launch();
    LOGGER.debug("SparkLauncher launched");
} catch (IOException e) {
    // launch() declares IOException
    LOGGER.error("submitSparkJob failed", e);
}
I have tried a few things, but none of them worked.
Here is the stack trace:
15 Feb 2017 15:36:22,794 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Parsed arguments:
15 Feb 2017 15:36:22,794 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: master yarn//*****
15 Feb 2017 15:36:22,795 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: deployMode cluster
15 Feb 2017 15:36:22,795 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: executorMemory null
15 Feb 2017 15:36:22,795 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: executorCores null
15 Feb 2017 15:36:22,795 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: totalExecutorCores null
15 Feb 2017 15:36:22,795 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: propertiesFile /usr/hdp/2.3.0.0-2557/spark/conf/spark-defaults.conf
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: driverMemory 2g
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: driverCores null
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: driverExtraClassPath null
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: driverExtraLibraryPath null
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: driverExtraJavaOptions null
15 Feb 2017 15:36:22,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: supervise false
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: queue null
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: numExecutors null
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: files null
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: pyFiles null
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: archives null
15 Feb 2017 15:36:22,797 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: mainClass **********.ExtractLauncher
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: primaryResource file:/usr/*****/MyJar.jar
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: name mySparkApp
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: childArgs [application-context.xml -s "2017-02-08" -e "2017-02-08" -t "******" -te "*****"]
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: jars null
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: packages null
15 Feb 2017 15:36:22,798 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: repositories null
15 Feb 2017 15:36:22,799 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: verbose true
15 Feb 2017 15:36:22,799 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:22,799 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Spark properties used, including those specified through
15 Feb 2017 15:36:22,800 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --conf and those from the properties file /usr/hdp/2.3.0.0-2557/spark/conf/spark-defaults.conf:
15 Feb 2017 15:36:22,800 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.queue -> default
15 Feb 2017 15:36:22,801 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.local.dir -> /hadoop/spark
15 Feb 2017 15:36:22,801 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.kerberos.principal -> none
15 Feb 2017 15:36:22,802 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.driver.memory -> 2g
15 Feb 2017 15:36:22,802 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.max.executor.failures -> 3
15 Feb 2017 15:36:22,802 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.historyServer.address -> ********:*****
15 Feb 2017 15:36:22,803 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.services -> org.apache.spark.deploy.yarn.history.YarnHistoryService
15 Feb 2017 15:36:22,803 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.ui.port -> *****
15 Feb 2017 15:36:22,804 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.provider -> org.apache.spark.deploy.yarn.history.YarnHistoryProvider
15 Feb 2017 15:36:22,804 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.scheduler.heartbeat.interval-ms -> 5000
15 Feb 2017 15:36:22,805 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.submit.file.replication -> 3
15 Feb 2017 15:36:22,805 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.executor.memoryOverhead -> 384
15 Feb 2017 15:36:22,805 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.containerLauncherMaxThreads -> 25
15 Feb 2017 15:36:22,806 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.driver.memoryOverhead -> 384
15 Feb 2017 15:36:22,806 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.kerberos.keytab -> none
15 Feb 2017 15:36:22,807 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.preserve.staging.files -> false
15 Feb 2017 15:36:22,807 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:22,808 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:22,814 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Main class:
15 Feb 2017 15:36:22,814 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: org.apache.spark.deploy.yarn.Client
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Arguments:
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --name
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: mySparkApp
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --driver-memory
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 2g
15 Feb 2017 15:36:22,815 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --jar
15 Feb 2017 15:36:22,816 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: file:/usr/***/MyJar.jar
15 Feb 2017 15:36:22,816 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --class
15 Feb 2017 15:36:22,816 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: **********.ExtractLauncher
15 Feb 2017 15:36:22,816 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: --arg
15 Feb 2017 15:36:22,816 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: application-context.xml -s "2017-02-08" -e "2017-02-08" -t "******" -te "******"
15 Feb 2017 15:36:22,817 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: System properties:
15 Feb 2017 15:36:22,817 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.queue -> default
15 Feb 2017 15:36:22,817 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.local.dir -> /hadoop/spark
15 Feb 2017 15:36:22,817 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.kerberos.principal -> none
15 Feb 2017 15:36:22,817 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.driver.memory -> 2g
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.max.executor.failures -> 3
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.historyServer.address -> ******:*****
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.services -> org.apache.spark.deploy.yarn.history.YarnHistoryService
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.ui.port -> *****
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: SPARK_SUBMIT -> true
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.provider -> org.apache.spark.deploy.yarn.history.YarnHistoryProvider
15 Feb 2017 15:36:22,818 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.app.name -> mySparkApp
15 Feb 2017 15:36:22,819 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.executor.memoryOverhead -> 384
15 Feb 2017 15:36:22,819 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.submit.file.replication -> 3
15 Feb 2017 15:36:22,819 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.scheduler.heartbeat.interval-ms -> 5000
15 Feb 2017 15:36:22,819 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.driver.memoryOverhead -> 384
15 Feb 2017 15:36:22,819 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.containerLauncherMaxThreads -> 25
15 Feb 2017 15:36:22,820 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.history.kerberos.keytab -> none
15 Feb 2017 15:36:22,820 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.yarn.preserve.staging.files -> false
15 Feb 2017 15:36:22,821 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: spark.master -> yarn-cluster
15 Feb 2017 15:36:22,821 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Classpath elements:
15 Feb 2017 15:36:22,821 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:22,821 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:22,821 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:23,275 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15 Feb 2017 15:36:23,796 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:23 INFO RMProxy: Connecting to ResourceManager at *********:*******
15 Feb 2017 15:36:24,030 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 INFO Client: Requesting a new application from cluster with 1 NodeManagers
15 Feb 2017 15:36:24,043 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
15 Feb 2017 15:36:24,044 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
15 Feb 2017 15:36:24,045 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 INFO Client: Setting up container launch context for our AM
15 Feb 2017 15:36:24,046 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 INFO Client: Preparing resources for our AM container
15 Feb 2017 15:36:24,364 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: 17/02/15 15:36:24 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
15 Feb 2017 15:36:24,402 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Error: application failed with exception
15 Feb 2017 15:36:24,402 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: org.apache.hadoop.security.AccessControlException: Permission denied: user=www, access=WRITE, inode="/user/www/.sparkStaging/application_1460635834146_0012":hdfs:hdfs:drwxr-xr-x
15 Feb 2017 15:36:24,402 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
15 Feb 2017 15:36:24,402 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
15 Feb 2017 15:36:24,403 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3895)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
15 Feb 2017 15:36:24,404 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2081)
15 Feb 2017 15:36:24,405 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2077)
15 Feb 2017 15:36:24,410 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at java.security.AccessController.doPrivileged(Native Method)
15 Feb 2017 15:36:24,410 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at javax.security.auth.Subject.doAs(Subject.java:422)
15 Feb 2017 15:36:24,410 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
15 Feb 2017 15:36:24,411 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2075)
15 Feb 2017 15:36:24,411 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark:
15 Feb 2017 15:36:24,414 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
15 Feb 2017 15:36:24,414 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
15 Feb 2017 15:36:24,414 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
15 Feb 2017 15:36:24,414 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
15 Feb 2017 15:36:24,414 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3010)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2978)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
15 Feb 2017 15:36:24,415 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:598)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:224)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
15 Feb 2017 15:36:24,416 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client.run(Client.scala:619)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client$.main(Client.scala:647)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.yarn.Client.main(Client.scala)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
15 Feb 2017 15:36:24,417 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at java.lang.reflect.Method.invoke(Method.java:497)
15 Feb 2017 15:36:24,421 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
15 Feb 2017 15:36:24,421 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
15 Feb 2017 15:36:24,421 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=www, access=WRITE, inode="/user/www/.sparkStaging/application_1460635834146_0012":hdfs:hdfs:drwxr-xr-x
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
15 Feb 2017 15:36:24,422 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
15 Feb 2017 15:36:24,423 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
15 Feb 2017 15:36:24,432 [DEBUG] (InputStreamReaderRunnable.java:run:32): Stream spark: ... 33 more
If anyone has an idea, thanks! :)
Best answer
You can set the following environment variable, which will be picked up automatically:
export HADOOP_USER_NAME=<your hdfs user>
As mentioned here:
HADOOP_USER_NAME
This is the Hadoop environment variable which propagates the identity of a user in an insecure cluster.
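If exporting the variable in the parent shell is not convenient, SparkLauncher also has a constructor that accepts a child-process environment map, so HADOOP_USER_NAME can be set just for the launched spark-submit process. A minimal sketch, assuming the user name "joe" from the question; the commented-out SparkLauncher call assumes spark-launcher is on the classpath:

```java
import java.util.HashMap;
import java.util.Map;

public class LauncherEnv {
    // Builds the environment for the spark-submit child process.
    // HDFS will authenticate as this user on an insecure cluster.
    static Map<String, String> hdfsUserEnv(String hdfsUser) {
        Map<String, String> env = new HashMap<>();
        env.put("HADOOP_USER_NAME", hdfsUser);
        return env;
    }

    public static void main(String[] args) {
        Map<String, String> env = hdfsUserEnv("joe");
        // Pass the map to the SparkLauncher(Map<String, String> env)
        // constructor instead of the no-arg one:
        //   SparkLauncher sl = new SparkLauncher(env)
        //           .setAppName(...)
        //           ...;
        System.out.println(env.get("HADOOP_USER_NAME"));
    }
}
```

With this, only the launched spark-submit process sees the overridden user; the JVM that calls SparkLauncher keeps its own identity.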
Regarding "java - How to change the HDFS user when submitting Spark jobs from Java", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42253750/