
java - loadFileSystems error when calling a program that uses libhdfs


The code below is the libhdfs test code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include "hdfs.h"

int main(int argc, char **argv)
{
    /* Connect to the HDFS NameNode. */
    hdfsFS fs = hdfsConnect("hdfs://labossrv14", 9000);
    const char* writePath = "/libhdfs_test.txt";
    /* Open (and create) the file for writing. */
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile)
    {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char* buffer = "Hello, libhdfs!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile))
    {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    return 0;
}
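For context, compiling and linking a libhdfs program typically looks something like the sketch below. The include and library paths are assumptions based on the Hadoop 2.5.2 and JDK 8 install paths that appear later in this post; adjust them to your deployment.

HADOOP_HOME=/home/junzhao/hadoop/hadoop-2.5.2   # assumed install path
JAVA_HOME=/usr/lib/jvm/java-8-oracle            # assumed JDK path
gcc libhdfs_test.c -o libhdfs_test \
    -I"$HADOOP_HOME/include" \
    -L"$HADOOP_HOME/lib/native" -lhdfs \
    -L"$JAVA_HOME/jre/lib/amd64/server" -ljvm
# At runtime the dynamic loader must also find libhdfs.so and libjvm.so, e.g.:
# export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_HOME/jre/lib/amd64/server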

It took me quite a bit of effort to get this code to compile, but running the program fails. The error messages are as follows.

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=labossrv14, port=9000, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/libhdfs_test.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /libhdfs_test.txt for writing!

I was following the official document to set this up, and I found that the problem is probably an incorrect CLASSPATH. Below is my CLASSPATH, composed of the classpath generated by "hadoop classpath --glob" together with the JDK and JRE lib paths.

export CLASSPATH=/home/junzhao/hadoop/hadoop-2.5.2/etc/hadoop:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/common/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/hdfs/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/yarn/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/*:/home/junzhao/hadoop/hadoop-2.5.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/usr/lib/jvm/java-8-oracle/lib:/usr/lib/jvm/java-8-oracle/jre/lib:$CLASSPATH

Does anyone have a good solution? Thanks!

Best Answer

I went through the materials in the tutorial and some previously asked questions once more. It turned out that the problem is that JNI does not expand the wildcards in the CLASSPATH. So I simply put all the jars into the CLASSPATH explicitly, and the problem was solved. Since the command "hadoop classpath --glob" also generates wildcards, this explains why the official document says:

It is not valid to use wildcard syntax for specifying multiple jars. It may be useful to run hadoop classpath --glob or hadoop classpath --jar to generate the correct classpath for your deployment.

I had misunderstood this paragraph yesterday.
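In other words, every jar has to appear in the CLASSPATH by its full path. A minimal sketch of one way to build such a classpath in bash, assuming the Hadoop 2.5.2 install path used above:

# Expand every jar under the Hadoop install into an explicit CLASSPATH
# entry, because JNI does not expand '*' wildcards itself.
HADOOP_HOME=/home/junzhao/hadoop/hadoop-2.5.2   # assumed install path
CLASSPATH="$HADOOP_HOME/etc/hadoop"             # config directory first
for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
    CLASSPATH="$CLASSPATH:$jar"
done
export CLASSPATH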

See also Hadoop C++ HDFS test running Exception and Can JNI be made to honour wildcard expansion in the classpath?

Regarding "java - loadFileSystems error when calling a program that uses libhdfs", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31189476/
