java - Hadoop FS API file-not-found issue on MapR

Reprinted. Author: 可可西里. Updated: 2023-11-01 16:30:30

I am running into a problem when using the Hadoop FS API to delete a directory. Even though I believe I have the proper configuration, the program throws an exception. I need help resolving this.

I am using the following Maven dependencies:

  • hadoop-common 2.4.1-mapr-1408
  • hadoop-core 2.4.1-mapr-1408
  • hadoop-client 2.7.1

Repository: http://repository.mapr.com/maven/
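As a pom.xml fragment, the list above would look roughly like this (the artifact IDs and versions mirror the list; the `org.apache.hadoop` groupId is the conventional one and is an assumption here, so treat this as a sketch):

```xml
<!-- MapR's Maven repository, as given in the question -->
<repositories>
  <repository>
    <id>mapr-releases</id>
    <url>http://repository.mapr.com/maven/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.4.1-mapr-1408</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>2.4.1-mapr-1408</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
  </dependency>
</dependencies>
```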

package com.cisco.installbase.hiveconnector;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.log4j.Logger;

public class ShellUtilities {

    private static final Logger LOGGER = Logger.getLogger(ShellUtilities.class);

    String target_dir = ReadProperties.getInstance().getProperty("target_dir");
    String tablename = "";

    Configuration conf = new Configuration();
    FileSystem fs = null;
    String dir = " ";

    public void DeleteDirectory(String tablename) {
        String fullpath = target_dir + tablename;
        try {
            LOGGER.info("Deleting the HDFS directory " + fullpath);
            fs = FileSystem.get(conf);
        } catch (IOException e) {
            LOGGER.error(e.getMessage());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }

        Path directory = new Path(fullpath);

        try {
            if (fs.exists(directory)) {
                fs.delete(directory, true);
            }
        } catch (IOException e) {
            LOGGER.error(e.getMessage());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}

Stack trace:

16/02/19 23:04:33 ERROR cldbutils.CLDBRpcCommonUtils: File is not found: /opt/mapr/conf/mapr-clusters.conf
java.io.FileNotFoundException: \opt\mapr\conf\mapr-clusters.conf (The system cannot find the path specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at java.io.FileReader.<init>(FileReader.java:58)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.init(CLDBRpcCommonUtils.java:144)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<init>(CLDBRpcCommonUtils.java:72)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<clinit>(CLDBRpcCommonUtils.java:63)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:68)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1847)
at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2062)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2272)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2224)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2141)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1081)
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:177)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at com.cisco.installbase.hiveconnector.ShellUtilities.DeleteDirectory(ShellUtilities.java:24)
at com.cisco.installbase.hiveconnector.MainApp.importTables(MainApp.java:66)
at com.cisco.installbase.hiveconnector.MainApp.startTimeLogger(MainApp.java:51)
at com.cisco.installbase.hiveconnector.MainApp.main(MainApp.java:40)
16/02/19 23:04:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/02/19 23:04:35 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:279)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:185)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:621)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:606)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:519)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2590)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2582)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2448)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at com.cisco.installbase.hiveconnector.ShellUtilities.DeleteDirectory(ShellUtilities.java:24)
at com.cisco.installbase.hiveconnector.MainApp.importTables(MainApp.java:66)
at com.cisco.installbase.hiveconnector.MainApp.startTimeLogger(MainApp.java:51)
at com.cisco.installbase.hiveconnector.MainApp.main(MainApp.java:40)
16/02/19 23:04:36 WARN fs.MapRFileSystem: Could not find any cluster, defaulting to localhost
Some error on socket 1472
2016-02-19 23:04:37,3423 ERROR Client fs/client/fileclient/cc/client.cc:394 Thread: 4332 Failed to initialize client for cluster 127.0.0.1:7222, error Cannot send after transport endpoint shutdown(108)
16/02/19 23:04:37 ERROR hiveconnector.ShellUtilities: Could not create FileClient
16/02/19 23:04:37 INFO hiveconnector.ShellUtilities: Deleting the HDFS directory /app/dev/SmartAnalytics/sqoop_temp/XXCCS_DS_CVDPRDLINE_DETAIL
Some error on socket 1488
2016-02-19 23:04:38,3434 ERROR Client fs/client/fileclient/cc/client.cc:394 Thread: 4332 Failed to initialize client for cluster 127.0.0.1:7222, error Cannot send after transport endpoint shutdown(108)
16/02/19 23:04:38 ERROR hiveconnector.ShellUtilities: Could not create FileClient
16/02/19 23:04:38 INFO hiveconnector.ShellUtilities: Deleting the HDFS directory /app/dev/SmartAnalytics/sqoop_temp/XXCCS_DS_INSTANCE_DETAIL
Some error on socket 1488
2016-02-19 23:04:39,3424 ERROR Client fs/client/fileclient/cc/client.cc:394 Thread: 4332 Failed to initialize client for cluster 127.0.0.1:7222, error Cannot send after transport endpoint shutdown(108)
16/02/19 23:04:39 ERROR hiveconnector.ShellUtilities: Could not create FileClient
16/02/19 23:04:39 INFO hiveconnector.ShellUtilities: Deleting the HDFS directory /app/dev/SmartAnalytics/sqoop_temp/XXCCS_DS_CVDPRDLINE_DETAIL
Some error on socket 1488
2016-02-19 23:04:40,3445 ERROR Client fs/client/fileclient/cc/client.cc:394 Thread: 4332 Failed to initialize client for cluster 127.0.0.1:7222, error Cannot send after transport endpoint shutdown(108)
16/02/19 23:04:40 ERROR hiveconnector.ShellUtilities: Could not create FileClient
16/02/19 23:04:40 INFO hiveconnector.ShellUtilities: Deleting the HDFS directory /app/dev/SmartAnalytics/sqoop_temp/XXCCS_DS_SAHDR_CORE
Some error on socket 1488
2016-02-19 23:04:41,3525 ERROR Client fs/client/fileclient/cc/client.cc:394 Thread: 4332 Failed to initialize client for cluster 127.0.0.1:7222, error Cannot send after transport endpoint shutdown(108)
16/02/19 23:04:41 ERROR hiveconnector.ShellUtilities: Could not create FileClient

Best Answer

When you use MapR from a remote machine, you must install the "MapR Client", which contains all the dependencies needed at runtime.

This is in addition to the Maven dependencies, which only allow you to compile. The MapR Client is a set of Java libraries, along with native libraries, used to access the MapR cluster in the most efficient way.

I invite you to follow the steps documented here:

This will:

  • install the client libraries for you
  • configure the cluster "target" you want to use
  • create the /opt/mapr/.. folder
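As a minimal pre-flight check matching the first error in the stack trace, the application could verify that the client configuration file exists before calling FileSystem.get. This is only a sketch (the class name and helper are hypothetical; the path is the default location shown in the stack trace):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class MaprClientCheck {

    // Default location created by the MapR Client installation (see the
    // FileNotFoundException in the stack trace above).
    static final String CLUSTERS_CONF = "/opt/mapr/conf/mapr-clusters.conf";

    /** Returns true when the given mapr-clusters.conf exists and is readable. */
    static boolean clientConfigured(String confPath) {
        return Files.isReadable(Paths.get(confPath));
    }

    public static void main(String[] args) {
        if (!clientConfigured(CLUSTERS_CONF)) {
            System.err.println("MapR client config not found at " + CLUSTERS_CONF
                    + "; install the MapR Client and run its configuration step first.");
        }
    }
}
```

Failing fast with a message like this is friendlier than the repeated "Could not create FileClient" errors in the log.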

Regarding "java - Hadoop FS API file-not-found issue on MapR", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35521488/
