
hadoop - HDFS not formatted, but no errors

Reposted. Author: 行者123. Updated: 2023-12-02 20:49:39

I am setting up a Hadoop cluster on 4 nodes (3 slaves), each a separate EC2 instance inside a VPC. I roughly followed these steps (but installed Hadoop 2.8.1): http://arturmkrtchyan.com/how-to-setup-multi-node-hadoop-2-yarn-cluster
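In a multi-node setup like this, every node's core-site.xml must point at the NameNode so the DataNodes know where to register. A minimal sketch, assuming the master's hostname from the logs below and the conventional port 9000 (written to a temp directory here so it runs as-is; the real file lives in $HADOOP_HOME/etc/hadoop):

```shell
# Sketch of the core-site.xml needed on every node; hostname/port are assumptions.
CONF_DIR=$(mktemp -d)   # stand-in for $HADOOP_HOME/etc/hadoop
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com:9000</value>
  </property>
</configuration>
EOF
grep -c fs.defaultFS "$CONF_DIR/core-site.xml"   # → 1
```

The same file must be identical on master and slaves; a DataNode with a missing or wrong fs.defaultFS starts cleanly but never registers.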

I formatted the namenode, and it produced the following output:

$ hdfs namenode -format
17/09/26 07:05:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hduser
STARTUP_MSG: host = ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com/10.0.0.190
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.8.1
STARTUP_MSG: classpath = /usr/...

STARTUP_MSG: build = Unknown -r Unknown; compiled by 'hduser' on 2017-09-22T14:53Z
STARTUP_MSG: java = 1.8.0_144
************************************************************/
17/09/26 07:07:33 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/26 07:07:33 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-15524170-7dfa-481b-add9-4c2542a55ca5
17/09/26 07:07:33 INFO namenode.FSEditLog: Edit logging is async:false
17/09/26 07:07:33 INFO namenode.FSNamesystem: KeyProvider: null
17/09/26 07:07:33 INFO namenode.FSNamesystem: fsLock is fair: true
17/09/26 07:07:33 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/09/26 07:07:33 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/09/26 07:07:33 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=false
17/09/26 07:07:33 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/09/26 07:07:33 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Sep 26 07:07:33
17/09/26 07:07:33 INFO util.GSet: Computing capacity for map BlocksMap
17/09/26 07:07:33 INFO util.GSet: VM type = 64-bit
17/09/26 07:07:33 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/09/26 07:07:33 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/09/26 07:07:33 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/09/26 07:07:33 INFO blockmanagement.BlockManager: defaultReplication = 3
17/09/26 07:07:33 INFO blockmanagement.BlockManager: maxReplication = 512
17/09/26 07:07:33 INFO blockmanagement.BlockManager: minReplication = 1
17/09/26 07:07:33 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/09/26 07:07:33 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/09/26 07:07:33 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/09/26 07:07:33 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/09/26 07:07:33 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
17/09/26 07:07:33 INFO namenode.FSNamesystem: supergroup = supergroup
17/09/26 07:07:33 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/09/26 07:07:33 INFO namenode.FSNamesystem: HA Enabled: false
17/09/26 07:07:33 INFO namenode.FSNamesystem: Append Enabled: true
17/09/26 07:07:34 INFO util.GSet: Computing capacity for map INodeMap
17/09/26 07:07:34 INFO util.GSet: VM type = 64-bit
17/09/26 07:07:34 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/09/26 07:07:34 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/09/26 07:07:34 INFO namenode.FSDirectory: ACLs enabled? false
17/09/26 07:07:34 INFO namenode.FSDirectory: XAttrs enabled? true
17/09/26 07:07:34 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/09/26 07:07:34 INFO util.GSet: Computing capacity for map cachedBlocks
17/09/26 07:07:34 INFO util.GSet: VM type = 64-bit
17/09/26 07:07:34 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/09/26 07:07:34 INFO util.GSet: capacity = 2^18 = 262144 entries
17/09/26 07:07:34 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/09/26 07:07:34 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/09/26 07:07:34 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/09/26 07:07:34 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/09/26 07:07:34 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/09/26 07:07:34 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/09/26 07:07:34 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/09/26 07:07:34 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/09/26 07:07:34 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/09/26 07:07:34 INFO util.GSet: VM type = 64-bit
17/09/26 07:07:34 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/09/26 07:07:34 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoop/data/namenode ? (Y or N)
$ Y
17/09/26 07:09:21 INFO namenode.FSImage: Allocated new BlockPoolId: BP-793961451-10.0.0.190-1506409761821
17/09/26 07:09:21 INFO common.Storage: Storage directory /usr/local/hadoop/data/namenode has been successfully formatted.
17/09/26 07:09:21 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
17/09/26 07:09:21 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/09/26 07:09:21 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/26 07:09:21 INFO util.ExitUtil: Exiting with status 0
17/09/26 07:09:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com/10.0.0.190
************************************************************/

When I start DFS and YARN, everything appears to start correctly:
$ start-dfs.sh
Starting namenodes on [ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com]
ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com: starting namenode, logging to ...
10.0.0.185: starting datanode, logging to ...
10.0.0.244: starting datanode, logging to ...
10.0.0.83: starting datanode, logging to ...
Starting secondary namenodes [ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com]
ec2-xx-xx-xx-01.eu-central-1.compute.amazonaws.com: starting secondarynamenode, logging to ...


$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to ...
10.0.0.185: starting nodemanager, logging to ...
10.0.0.83: starting nodemanager, logging to ...
10.0.0.244: starting nodemanager, logging to ...

$ jps
14326 NameNode
14998 Jps
14552 SecondaryNameNode
14729 ResourceManager

And on the other nodes, for example:
15880 Jps
15563 DataNode
15693 NodeManager

However, when I try to write data to HDFS, it tells me that no nodes are actually available. The error is very generic, and I cannot pinpoint where the problem lies.
$ hdfs dfs -put pg1661.txt /samples/input
WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /samples/input/pg1661.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
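One detail worth noting in the format log above: the NameNode asked to *re*-format an existing storage directory, and answering Y issues a fresh clusterID. A DataNode that still holds the previous clusterID in its VERSION file will refuse to register with the NameNode, which produces exactly this "There are 0 datanode(s) running" symptom. A sketch of the check, simulated here with temp files so it runs anywhere (the real files are /usr/local/hadoop/data/namenode/current/VERSION on the master and the corresponding datanode VERSION file on each slave; the stale ID below is hypothetical):

```shell
# Compare the clusterID the NameNode issued with the one a DataNode has stored.
NN_VERSION=$(mktemp)   # stand-in for the namenode's current/VERSION file
DN_VERSION=$(mktemp)   # stand-in for a slave datanode's current/VERSION file
echo 'clusterID=CID-15524170-7dfa-481b-add9-4c2542a55ca5' > "$NN_VERSION"  # ID from the format log
echo 'clusterID=CID-some-older-stale-id' > "$DN_VERSION"                   # hypothetical stale ID
if ! diff -q "$NN_VERSION" "$DN_VERSION" >/dev/null; then
  echo "clusterID mismatch: stop HDFS, clear the datanode data dir on the slaves, restart"
fi
```

If the IDs differ, wiping the datanode data directory on each slave (it holds nothing needed right after a fresh format) and rerunning start-dfs.sh lets the DataNodes re-register under the new clusterID. The mismatch, if present, is also logged in each slave's datanode log.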

And when I check the status, it does not appear to be working:
$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

I checked the log files; apart from the failed upload attempt, none of them report any (FATAL) errors.

Since none of the above produces any errors at startup, and the error message itself is very generic, I am having a hard time locating the problem.

Best Answer

In the output of the "hdfs dfsadmin -report" command, the capacity shows as 0, as if you had forgotten to format the namenode. You need to run the following command before starting HDFS:

hdfs namenode -format
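For reference, the storage directory the format log reports (/usr/local/hadoop/data/namenode) comes from hdfs-site.xml, so it is worth confirming that file is consistent across nodes. A minimal sketch (the datanode path and replication value are assumptions; written to a temp directory here so it runs as-is, while the real file lives in $HADOOP_HOME/etc/hadoop):

```shell
# Sketch of an hdfs-site.xml matching the paths seen in the format log.
CONF_DIR=$(mktemp -d)   # stand-in for $HADOOP_HOME/etc/hadoop
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/data/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
EOF
grep -c '<property>' "$CONF_DIR/hdfs-site.xml"   # → 3
```

The datanode path must exist and be writable by the hadoop user on every slave, or the DataNode process starts but contributes no capacity.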

The "hdfs dfsadmin -report" output should then look something like this:
Configured Capacity: 32195477504 (29.98 GB)
Present Capacity: 29190479872 (27.19 GB)
DFS Remaining: 29190471680 (27.19 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
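The difference between the two reports can also be checked mechanically: a nonzero Configured Capacity means at least one DataNode has registered. A small sketch, fed the sample line above instead of a live cluster (in practice you would pipe `hdfs dfsadmin -report` in):

```shell
# Extract the raw byte count from the report line; 0 means no DataNodes registered.
REPORT='Configured Capacity: 32195477504 (29.98 GB)'
CAP=$(echo "$REPORT" | awk '/Configured Capacity/ {print $3}')
if [ "$CAP" -gt 0 ]; then
  echo "datanodes registered"                      # the healthy report above
else
  echo "no capacity: datanodes did not register"   # the asker's report
fi
```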

I have a video tutorial for a single-node setup at the link below; I hope it helps. It is for Hadoop version 2.8.1:

http://hadooptutorials.info/2017/09/14/hadoop-installation-on-signle-node-cluster/

Regarding "hadoop - HDFS not formatted, but no errors", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46407291/
