
hadoop - Hadoop-2.2.0 NameNode startup problem

Reposted. Author: 行者123. Updated: 2023-12-02 21:51:05

I am new to Hadoop, and I ran into the problem below when starting the NameNode with the ./hadoop-daemon.sh start namenode command.

The steps I followed:

1. Downloaded an Ubuntu 13 VM and installed Java 1.6 and hadoop-2.2.0
2. Updated the configuration files
3. Ran hadoop namenode –format
4. Ran ./hadoop-daemon.sh start namenode from the sbin directory
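As a quick sanity check between steps 3 and 4, you can verify that the NameNode storage directory exists and is writable by the current user (a minimal sketch; set NAME_DIR to your own dfs.name.dir, e.g. the /home/user/hadoop2_data/hdfs/namenode path seen in the log, while the /tmp fallback below is only a demo value):

```shell
# Set NAME_DIR to your dfs.name.dir; a /tmp demo path is the fallback here.
NAME_DIR="${NAME_DIR:-/tmp/hadoop-demo/dfs/name}"

# Create the directory if missing, then check that this user can write to it.
mkdir -p "$NAME_DIR"
if [ -d "$NAME_DIR" ] && [ -w "$NAME_DIR" ]; then
    echo "ok: $NAME_DIR is writable"
else
    echo "error: cannot write to $NAME_DIR" >&2
fi
```

If the check prints an error, hadoop namenode -format cannot persist the formatted image there, which leads to the failure shown below.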

The error is:
2014-01-04 06:55:48,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 888.9 MB
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2014-01-04 06:55:48,603 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2014-01-04 06:55:48,616 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = user (auth:SIMPLE)
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-01-04 06:55:48,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 888.9 MB
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2014-01-04 06:55:48,732 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 888.9 MB
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2014-01-04 06:55:48,768 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/user/hadoop2_data/hdfs/namenode/in_use.lock acquired by nodename 12574@ubuntuvm
2014-01-04 06:55:48,785 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2014-01-04 06:55:48,789 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-01-04 06:55:48,793 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2014-01-04 06:55:48,798 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-01-04 06:55:48,803 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntuvm/127.0.1.1
************************************************************/

Can someone help me resolve this? I tried googling but still could not find a solution.

Best Answer

It looks like your hadoop namenode -format had no effect (and I assume you have already tried running that command again and it still does not work). When you invoke hadoop namenode -format, the user you run it as must have write permission to the directories in dfs.data.dir and dfs.name.dir.

By default, they are set to

${hadoop.tmp.dir}/dfs/data

and

${hadoop.tmp.dir}/dfs/name

where hadoop.tmp.dir is another configuration property, which defaults to /tmp/hadoop-${username}.
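For illustration, the default paths expand for the current user roughly as follows (a sketch; whoami stands in here for Hadoop's ${user.name} substitution):

```shell
# Expand the default Hadoop storage paths for the current user.
user="$(whoami)"
hadoop_tmp_dir="/tmp/hadoop-${user}"

echo "hadoop.tmp.dir -> ${hadoop_tmp_dir}"
echo "dfs.name.dir   -> ${hadoop_tmp_dir}/dfs/name"
echo "dfs.data.dir   -> ${hadoop_tmp_dir}/dfs/data"
```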

So by default, Hadoop's data files are kept under the /tmp directory, which is not ideal, especially if you have scripts that periodically clean those directories out.

Make sure dfs.data.dir and dfs.name.dir (in Hadoop 2.x these are dfs.datanode.data.dir and dfs.namenode.name.dir, configured in hdfs-site.xml rather than core-site.xml) are set to directories writable by the user who runs the Hadoop admin commands and starts the Hadoop daemons. Then reformat HDFS and try again.
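A minimal sketch of such a configuration, assuming the /home/user/hadoop2_data layout that appears in the log above (the exact paths are only an example; substitute directories your user can write to):

```xml
<!-- hdfs-site.xml: point NameNode and DataNode storage at user-writable dirs -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/user/hadoop2_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/user/hadoop2_data/hdfs/datanode</value>
  </property>
</configuration>
```

After saving the file, run bin/hadoop namenode -format (note: this wipes any existing HDFS metadata) and then sbin/hadoop-daemon.sh start namenode again.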

Regarding "hadoop - Hadoop-2.2.0 NameNode startup problem", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20920861/
