
hadoop - datanode fails with org.apache.hadoop.util.DiskChecker$DiskErrorException

Reposted · Author: 行者123 · Updated: 2023-12-02 20:32:15

I recently installed Hadoop and formatted the namenode. The namenode starts fine, but the datanode fails to start. Here is the datanode error log:

  STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2018-05-11T07:53Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
2018-10-17 15:08:42,769 INFO datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-10-17 15:08:43,665 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data/
2018-10-17 15:08:43,682 ERROR datanode.DataNode (DataNode.java:secureMain(2692)) - Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid value configured for dfs.datanode.failed.volumes.tolerated - 1. Value configured is >= to the number of configured volumes (1).
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:174)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2493)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2540)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2685)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2709)
2018-10-17 15:08:43,688 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-10-17 15:08:43,696 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hdp2.com/192.168.100.12
What does "dfs.datanode.failed.volumes.tolerated - 1" mean, and what is causing this error?

Best Answer

Check hdfs-site.xml. This property must be set to a value that is at least 0 and strictly less than the number of configured data volumes — since your log shows only one volume (/hadoop/hdfs/data), the value must be 0:

dfs.datanode.failed.volumes.tolerated

The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown.
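A minimal sketch of the corrected hdfs-site.xml entry for this setup (assuming a single data directory, as the log above indicates; with more dfs.datanode.data.dir entries, the value may be raised but must stay below their count):

```xml
<!-- hdfs-site.xml (sketch): tolerate zero failed volumes,
     required when only one data directory is configured -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- must be >= 0 and < the number of dfs.datanode.data.dir entries -->
  <value>0</value>
</property>
```

After changing the value, restart the datanode for the setting to take effect.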

Regarding "hadoop - datanode fails with org.apache.hadoop.util.DiskChecker$DiskErrorException", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/52849795/
