
java - Hadoop: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException


I'm new to Hadoop and only started using it today. I want to write a file to the HDFS of a Hadoop server running version 1.2.1. When I run the jps command in the CLI, I can see the nodes running:

31895 Jps
29419 SecondaryNameNode
29745 TaskTracker
29257 DataNode

Here is my sample client code that writes a file to the HDFS system:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public static void main(String[] args) {
    try {
        // 1. Get an instance of Configuration and point it at the cluster config files
        Configuration configuration = new Configuration();
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
        // 2. Create an InputStream to read the data from the local file
        InputStream inputStream = new BufferedInputStream(
                new FileInputStream("/home/local/PAYODA/hariprasanth.l/Desktop/ProjectionTest"));
        // 3. Get the HDFS instance
        FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
        // 4. Open an OutputStream to write the data; this is obtained from the FileSystem
        OutputStream outputStream = hdfs.create(new Path("hdfs://localhost:54310/user/hadoop/Hadoop_File.txt"),
                new Progressable() {
                    @Override
                    public void progress() {
                        System.out.println("....");
                    }
                });
        try {
            IOUtils.copyBytes(inputStream, outputStream, 4096, false);
        } finally {
            IOUtils.closeStream(inputStream);
            IOUtils.closeStream(outputStream);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
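(Note that FileSystem.get(...) is given the NameNode URI explicitly, so it must match the fs.default.name in core-site.xml, hdfs://localhost:54310 here; and whichever Hadoop jars are on the classpath when this runs determine the RPC version the client speaks.)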

The exception I get when I run the code:

java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
at org.apache.hadoop.ipc.Client.call(Client.java:1031)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at com.sun.proxy.$Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:163)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:283)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:247)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:109)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1792)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:76)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1826)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1808)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:265)
at com.test.hadoop.writefiles.FileWriter.main(FileWriter.java:27)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)

When I debugged it, the error occurred at the line where I try to connect to the local HDFS server:

  FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
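An EOFException at this point usually means the NameNode accepted the TCP connection but closed it while the client was still reading the RPC response, typically because the two sides speak incompatible IPC wire versions.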

From what I've found by searching, this indicates a version mismatch.

The Hadoop server version is 1.2.1, while the client jars I'm using are:

hadoop-common-0.22.0.jar
hadoop-hdfs-0.22.0.jar
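One quick way to confirm which version the client jars actually report is Hadoop's VersionInfo utility (a minimal sketch; the class and method exist in these Hadoop releases, while the ClientVersionCheck name is just for illustration):

import org.apache.hadoop.util.VersionInfo;

public class ClientVersionCheck {
    public static void main(String[] args) {
        // Prints the version baked into the Hadoop jars on the classpath
        // (expected to show 0.22.0 here, while the server runs 1.2.1).
        System.out.println("Client Hadoop version: " + VersionInfo.getVersion());
    }
}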

Please tell me what the problem is as soon as possible.

If possible, also point me to where I can find the Hadoop client jars, and name the jars... please...

Regards, Hari

Best Answer

It was because the same classes were represented in different jars (i.e., hadoop-common and hadoop-core contain the same classes). I was actually confused about which jars to use.

In the end, I just used the Apache hadoop-core jar. It worked like a charm.
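For a 1.2.1 server, the matching client artifact would be the hadoop-core 1.2.1 jar (Maven coordinates org.apache.hadoop:hadoop-core:1.2.1, if you fetch it that way), with the 0.22.0 hadoop-common/hadoop-hdfs jars removed from the classpath so the duplicate classes are not picked up again.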

Regarding java - Hadoop: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25130799/
