
hadoop - Datanode not showing up when running the jps command

Reposted · Author: 行者123 · Updated: 2023-12-02 21:37:34

I am new to Hadoop. I have set up a multi-node cluster, but when I run the jps command on the master node it shows only the namenode, not the datanode, and when I open the URL 'Master:50070' it shows no live nodes. As a result I cannot copy data from my local system into HDFS; it throws this error:

hduser@oodles-Latitude-3540:~$ hadoop fs -copyFromLocal /home/oodles/input/test /tmp
15/06/28 16:27:56 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
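
A quick way to confirm whether the namenode currently sees any live datanodes (a minimal check, assuming a Hadoop 2.x install with the hdfs command on the PATH):

# Print cluster status as the namenode sees it; given the error above,
# "Live datanodes" should report 0 here.
hdfs dfsadmin -report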

After starting the Hadoop cluster with the command start-dfs.sh, my namenode starts successfully, but the datanode does not. When I check the datanode log, it shows this:
ToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:53,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:54,498 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:55,499 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-06-28 04:01:56,500 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Master/192.168.0.126:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
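
Those retries mean the datanode on the slave cannot reach the namenode RPC port at Master:9000 at all. A minimal sketch for narrowing this down, assuming standard Linux networking tools:

# On the master: is anything listening on 9000, and on which address?
# A bind to 127.0.0.1 or 127.0.1.1 instead of 192.168.0.126 (or 0.0.0.0)
# would explain the refused connections from the slave.
netstat -tlnp | grep 9000        # or: ss -tlnp | grep 9000

# On the master: which namenode URI does the configuration resolve to?
hdfs getconf -confKey fs.defaultFS    # fs.default.name on older releases

# On the slave: is the port reachable over the network?
nc -zv Master 9000               # or: telnet Master 9000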

I googled it but did not find a solution.

When I run the jps command on the slave node, it shows only the datanode.

One more thing: when I open 'Master:50070' in the browser and try to browse the file system, it shows me this error:
HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

Can't browse the DFS since there are no live nodes available to redirect to.
Caused by:

java.io.IOException: Can't browse the DFS since there are no live nodes available to redirect to.
at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:666)
at org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)

My Hadoop cluster configuration is as follows:

1) /etc/hosts file on the master

2) /etc/hosts file on the slave (a typical layout is sketched below)
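
For comparison, a hosts file that typically works for a two-node setup like this looks roughly like the sketch below (the Master address 192.168.0.126 is taken from the datanode log above; the Slave1 address is a placeholder). On Ubuntu, a default 127.0.1.1 line for the local hostname is a frequent cause of exactly this symptom, because the namenode then binds to loopback and remote datanodes get connection refused:

# /etc/hosts on both master and slave (sketch; the Slave1 IP is hypothetical)
127.0.0.1      localhost
# 127.0.1.1    oodles-Latitude-3540    <- remove or comment out such a line
192.168.0.126  Master
192.168.0.150  Slave1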

I have edited the masters and slaves files in the hadoop conf folder: in the masters file I added 'master', and in the slaves file I added 'Slave1' (reference files are sketched below).
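
Under those assumptions the relevant files would look something like this (paths assume a Hadoop 2.x layout under $HADOOP_HOME/etc/hadoop; 1.x keeps them in conf/ and uses fs.default.name instead of fs.defaultFS):

$ cat $HADOOP_HOME/etc/hadoop/masters
master

$ cat $HADOOP_HOME/etc/hadoop/slaves
Slave1

# core-site.xml must point at the cluster hostname, not localhost,
# or remote datanodes will never find the namenode:
$ grep -A1 fs.default $HADOOP_HOME/etc/hadoop/core-site.xml
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>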

Can anyone help me solve these problems?

[Two screenshots of the datanode logs]

Best Answer

Did you configure SSH? Try logging into the other node over ssh to check the SSH connection.
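
A minimal way to verify that, assuming passwordless SSH was set up for hduser (which start-dfs.sh relies on to start remote daemons):

# From the master: should log in without a password prompt and list
# the slave's Java processes (a DataNode entry is expected).
ssh hduser@Slave1 jps

# If a password is requested, (re)install the public key first:
ssh-copy-id hduser@Slave1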

Regarding "hadoop - Datanode not showing up when running the jps command", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31099133/
