
java - Datanode + Error occurred during initialization of VM, Too small initial heap


We restarted the DataNodes on our cluster.

We have 15 DataNode machines in our Ambari cluster, and each DataNode machine has 128G of RAM.

Versions: HDP 2.6.4 and Ambari 2.6.1.

But the DataNodes fail to start with the following error:

Error occurred during initialization of VM
Too small initial heap

This is strange, because dtnode_heapsize is 8G (DataNode maximum Java heap size = 8G). From the log we can also see:

InitialHeapSize=8192 -XX:MaxHeapSize=8192

So we don't understand what is going on here.

Is the initial heap size related to the DataNode maximum Java heap size?

Log from one of the DataNode machines:

Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 197804180k(12923340k free), swap 16777212k(16613164k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:GCLogFileSize=1024000 -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:NumberOfGCLogFiles=5 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker01.sys242.com.out <==
Error occurred during initialization of VM
Too small initial heap
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 772550
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Another log example:

resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.4.0-91/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.4.0-91/hadoop/conf start datanode'' returned 1. starting datanode, logging to 
Error occurred during initialization of VM
Too small initial heap

Best answer

The values you supplied are interpreted as bytes. They should be -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m.
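
This is easy to reproduce outside of Hadoop: a JVM memory-size option given without a unit suffix (k/m/g) is read as a raw byte count, and 8192 bytes is far below the smallest heap the JVM can start with. A minimal sketch, assuming nothing more than a Java 8 JRE on the PATH:

java -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -version
# Error occurred during initialization of VM
# Too small initial heap

java -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m -version
# prints the JVM version and exits normally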

See https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
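
In an Ambari-managed HDP cluster the DataNode JVM options come from the hadoop-env template, so the fix is to make sure the heap value ends up with a unit suffix there rather than editing the command line by hand. A minimal sketch of what the effective line in hadoop-env.sh should expand to (the exact template wording and any {{dtnode_heapsize}} placeholder depend on your Ambari configuration and are assumptions here):

export HADOOP_DATANODE_OPTS="-Xms8192m -Xmx8192m ${HADOOP_DATANODE_OPTS}"

After correcting the value in the HDFS configuration in Ambari and restarting the DataNodes, the GC log's "CommandLine flags:" line should report the heap as 8589934592 bytes (8 GB) instead of 8192.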

Regarding "java - Datanode + Error occurred during initialization of VM, Too small initial heap", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/53924644/
