java - Hadoop - Insufficient memory for the Java Runtime Environment when starting YARN services

I set up a cluster (1 master and 2 slaves: slave1, slave2) following the tutorial at http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup. The first time I ran the HDFS and YARN services, they started without any problems. But after stopping them and starting them again, I got the following output when starting the YARN services from the master (start-yarn.sh):

# starting yarn daemons
# starting resourcemanager, logging to /local/hadoop/logs/yarn-dev-resourcemanager-login200.out
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
# An error report file with more information is saved as: /local/hadoop/hs_err_pid21428.log

# Compiler replay data is saved as: /local/hadoop/replay_pid21428.log
slave1: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login198.out
slave2: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login199.out
slave2: #
slave2: # There is insufficient memory for the Java Runtime Environment to continue.
slave2: # Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
slave2: # An error report file with more information is saved as:
slave2: # /local/hadoop/hs_err_pid27199.log
slave2: #
slave2: # Compiler replay data is saved as:
slave2: # /local/hadoop/replay_pid27199.log

Based on the suggestions in out of Memory Error in Hadoop and "Java Heap space Out Of Memory Error" while running a mapreduce program, I changed the heap memory size limits to 256, 512, 1024, and 2048 in all three files (~/.bashrc, hadoop-env.sh, and mapred-site.sh), but nothing worked.
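For reference, a minimal sketch of where daemon heap limits are usually set in a stock Hadoop 2.x install; the 256 MB value is just one of the sizes mentioned above, and this is an assumption about the configuration rather than the asker's exact edits:

# etc/hadoop/hadoop-env.sh -- heap for the HDFS daemons, value in MB
export HADOOP_HEAPSIZE=256

# etc/hadoop/yarn-env.sh -- heap for the YARN daemons, values in MB
export YARN_HEAPSIZE=256
export YARN_RESOURCEMANAGER_HEAPSIZE=256
export YARN_NODEMANAGER_HEAPSIZE=256

Note that values exported only in ~/.bashrc take effect in new login shells, so the daemons need to be restarted from a fresh session for those changes to apply.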

Note: I am not an expert on Linux or the JVM.

Contents of the log file from one of the nodes:

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32784 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:390), pid=16375, tid=0x00007f39a352c700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /local/hadoop/core or core.16375 (max size 1 kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java again

CPU:total 1 (1 cores per cpu, 1 threads per core) family 6 model 45 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, aes, clmul, tsc, tscinvbit, tscinv

Memory: 4k page, physical 2051532k(254660k free), swap 1051644k(1051324k free)

Best Answer

It is not clear from your post how much memory the VM itself has, but it looks like the VM has only 2 GB of physical RAM and 1 GB of swap. If that is the case, you really do need to increase the VM's memory. Give it no less than 4 GB of physical RAM, or you will be lucky just to get the Hadoop stack running while keeping the operating system happy. Ideally, give each VM around 8 GB of RAM so that a few GB are left over for MapReduce jobs.
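To sanity-check those numbers, the hs_err report above already shows roughly 2 GB of physical RAM and 1 GB of swap. A hedged sketch of how to confirm this on the node and, if the VM cannot be given more RAM right away, add a temporary swap file as a stopgap (the /swapfile path and 4G size are illustrative, not from the original answer):

free -m                          # shows physical and swap memory in MB
sudo fallocate -l 4G /swapfile   # create a swap file; pick a size the disk allows
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Extra swap only keeps the JVMs from aborting at startup; the real fix, as the answer says, is more physical RAM.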

Regarding java - Hadoop - insufficient memory for the Java Runtime Environment when starting YARN services, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39771269/
