
hadoop - Unable to access a directory created in HDFS after stopping all Hadoop daemons and restarting them

I am new to Hadoop and I have a few questions, but I cannot find any solution. My problem is as follows:

I created a directory on HDFS using the command below:

    bin/hadoop fs -mkdir /user/abhijit/apple_poc

I checked that the directory had been created:

    bin/hadoop fs -ls

Output:

    drwxr-xr-x   - abhijit supergroup          0 2013-07-11 11:09 /user/abhijit/apple_poc

I stopped all Hadoop daemons:

    bin/stop-all.sh

I restarted all the daemons again:

    bin/start-all.sh

I checked again whether the directory created on HDFS above was still present:

    bin/hadoop fs -ls

Output:

    2013-07-11 11:37:57.304 java[3457:1903] Unable to load realm info from SCDynamicStore
    13/07/11 11:37:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:37:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    13/07/11 11:38:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused

Please clarify:

  1. I am really not sure what I am doing wrong. Is there something I need to change in the properties files?

  2. The default HDFS storage directory is /user/<username>/. Should I change this default directory to resolve my issue?

  3. Every time I have to format the namenode to get past this, but after formatting, the directory I created above is lost.

Please let me know what the problem behind this is. Any help is much appreciated.

Thanks, Abhijit

Best Answer

This error can occur for several reasons. I have been playing with Hadoop and have run into this problem many times, for different reasons:

  1. The master node is not running -> check the logs (see the diagnostic sketch after this list).
  2. The hosts file does not list the proper IP [after setting a hostname, add its IP to the hosts file so that other nodes can reach it].
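
A quick way to work through point 1 is to check whether the NameNode process is actually running and listening on the port shown in the errors above (9000). Below is a minimal diagnostic sketch, assuming a Hadoop 1.x single-node setup started with bin/start-all.sh; the log file path is an assumption, and the exact file name varies with your user name and host name:

    # List the running Hadoop JVMs. A healthy single-node cluster started
    # with start-all.sh should show NameNode, DataNode, SecondaryNameNode,
    # JobTracker and TaskTracker.
    jps

    # If NameNode is missing, its log usually says why it failed to start.
    # (The path and file name pattern are assumptions; adjust to your install.)
    tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

    # Confirm that something is listening on the HDFS port from the error above.
    netstat -an | grep 9000

    # For point 2: check that the hostname used in your configuration resolves.
    cat /etc/hosts

If jps shows no NameNode, restarting the daemons will not help until whatever the log complains about is fixed.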

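On question 3 (having to re-format the namenode and losing the directory): one commonly reported cause of exactly this symptom is that hadoop.tmp.dir, which dfs.name.dir and dfs.data.dir default to, points under /tmp, and the operating system may clean /tmp; the NameNode then cannot start until it is re-formatted, which wipes HDFS. Here is a minimal core-site.xml sketch that moves it to a persistent location (the path below is an assumption; any directory owned by the Hadoop user will do):

    <configuration>
      <!-- Assumption: /Users/abhijit/hadoop-data is only an example of a
           persistent directory. dfs.name.dir and dfs.data.dir default to
           subdirectories of hadoop.tmp.dir, so moving it out of /tmp keeps
           HDFS metadata and blocks across restarts. -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/Users/abhijit/hadoop-data</value>
      </property>
      <!-- Must match the address the client retries in the log above. -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

After changing hadoop.tmp.dir, one final bin/hadoop namenode -format is needed; after that, stop-all.sh/start-all.sh cycles should no longer lose the directory.
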
Regarding "hadoop - Unable to access a directory created in HDFS after stopping all Hadoop daemons and restarting them", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17586426/
