
java - Hadoop HDFS DistributedFileSystem could not be instantiated

Reposted. Author: 行者123. Updated: 2023-11-30 06:18:15

I have set up a Hadoop HDFS cluster, and since I'm new to Hadoop, I've been trying to follow a simple example of reading/writing to HDFS from a Java driver program written on my local machine. The example I'm trying to test is the following:

public static void main(String[] args) throws IOException {

    args = new String[3];
    args[0] = "add";
    args[1] = "./files/jaildata.csv";
    args[2] = "hdfs://<Namenode-Host>:<Port>/dir1/dir2/";
    if (args.length < 1) {
        System.out.println("Usage: hdfsclient add/read/delete/mkdir [<local_path> <hdfs_path>]");
        System.exit(1);
    }

    FileSystemOperations client = new FileSystemOperations();
    String hdfsPath = "hdfs://<Namenode-Host>:<Port>";

    Configuration conf = new Configuration();
    conf.addResource(new Path("file:///user/local/hadoop/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("file:///user/local/hadoop/etc/hadoop/hdfs-site.xml"));

    if (args[0].equals("add")) {
        if (args.length < 3) {
            System.out.println("Usage: hdfsclient add <local_path> <hdfs_path>");
            System.exit(1);
        }
        client.addFile(args[1], args[2], conf);

    } else {
        System.out.println("Usage: hdfsclient add/read/delete/mkdir [<local_path> <hdfs_path>]");
        System.exit(1);
    }
    System.out.println("Done!");
}

where the addFile function is as follows:

public void addFile(String source, String dest, Configuration conf) throws IOException {

    FileSystem fileSystem = FileSystem.get(conf);

    // Get the filename out of the file path
    String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

    // Create the destination path including the filename.
    if (dest.charAt(dest.length() - 1) != '/') {
        dest = dest + "/" + filename;
    } else {
        dest = dest + filename;
    }
    Path path = new Path(dest);
    if (fileSystem.exists(path)) {
        System.out.println("File " + dest + " already exists");
        return;
    }

    // Create a new file and write data to it.
    FSDataOutputStream out = fileSystem.create(path);
    InputStream in = new BufferedInputStream(new FileInputStream(new File(source)));

    byte[] b = new byte[1024];
    int numBytes = 0;
    while ((numBytes = in.read(b)) > 0) {
        out.write(b, 0, numBytes);
    }

    // Close all the file descriptors
    in.close();
    out.close();
    fileSystem.close();
}
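One detail in the copy loop above: `in.read(b) > 0` happens to work for file streams, but `InputStream.read` is only guaranteed to signal end of stream with -1, so `!= -1` is the more robust test. A minimal, self-contained sketch of the same buffered-copy pattern using only `java.io` and try-with-resources (no Hadoop types, so it runs anywhere; the Hadoop streams would be plugged in the same way):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyStreams {

    // Copies all bytes from in to out with a fixed-size buffer.
    // Note the != -1 test: InputStream.read signals end-of-stream
    // with -1, while a return of 0 is a legal (empty) read that
    // would end a "> 0" loop prematurely on some stream types.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello hdfs".getBytes();
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (InputStream in = new BufferedInputStream(new ByteArrayInputStream(data))) {
            copy(in, sink);
        }
        System.out.println(sink); // prints "hello hdfs"
    }
}
```

With Hadoop on the classpath, the same `copy` method would accept the `FSDataOutputStream` returned by `fileSystem.create(path)`, since it is an `OutputStream`, and try-with-resources would replace the manual `close()` calls.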

The project is a Maven project, with hadoop-common 2.6.5, hadoop-hdfs 2.9.0, and hadoop-hdfs-client 2.9.0 added as dependencies, and it is configured to build a jar containing all dependencies.

My problem is that no matter which demo example I try, the following exception is thrown when the FileSystem is created at FileSystem fileSystem = FileSystem.get(conf);:

Exception in thread "main" java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
    at java.util.ServiceLoader.fail(ServiceLoader.java:232)
    at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2565)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2576)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataOutputStreamBuilder
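The Caused by line is the real clue: the ServiceLoader found the DistributedFileSystem provider class, but a class it references (FSDataOutputStreamBuilder) is missing from the runtime classpath. A small, hypothetical diagnostic (plain JDK reflection; the class names are copied from the stack trace) can confirm which classes the running JVM can actually see:

```java
public class ClasspathCheck {

    // Returns true if the named class can be loaded from the
    // current classpath, false on ClassNotFoundException.
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] suspects = {
            "org.apache.hadoop.hdfs.DistributedFileSystem",
            "org.apache.hadoop.fs.FSDataOutputStreamBuilder"
        };
        for (String name : suspects) {
            System.out.println(name + " -> " + (isPresent(name) ? "found" : "MISSING"));
        }
    }
}
```

Running this with the same classpath (or fat jar) as the failing driver should show that DistributedFileSystem resolves while FSDataOutputStreamBuilder does not, which matches the NoClassDefFoundError.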

I'm not sure how to proceed, and I've already tried several of the solutions I found online, so I'd appreciate any advice on this...

Thanks.

Best Answer

The org.apache.hadoop.fs.FSDataOutputStreamBuilder class is not in hadoop-common-2.6.5; it is in hadoop-common-2.9.0.

As far as I can see, you are already using version 2.9.0 of hdfs-client. Keep the other Hadoop packages aligned on 2.9.0 to avoid similar problems.

Reference version 2.9.0 of hadoop-common in your build to resolve this issue.
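Assuming a standard Maven pom.xml, one way to keep the Hadoop artifacts in lockstep is a single version property (the property name hadoop.version is my choice; the artifact ids are the ones listed in the question):

```xml
<properties>
    <hadoop.version>2.9.0</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
```

With the property in place, bumping Hadoop later means changing one line instead of three, which rules out this class of version mismatch.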

Regarding "java - Hadoop HDFS DistributedFileSystem could not be instantiated", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48767756/
