
java - Wrong rlimit information in the hs_err_pidXXX.log file when the JVM crashes


I have a Java application running on JDK 1.8.0_102. When the application crashes (out of memory, see below), a file hs_err_pidXXX is generated containing information about the JVM, the system, memory, threads, and so on...

    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (mmap) failed to map 65536 bytes for committing reserved memory.
    # Possible reasons:
    #   The system is out of physical RAM or swap space
    #   In 32 bit mode, the process size limit was hit
    # Possible solutions:
    #   Reduce memory load on the system
    #   Increase physical memory or swap space
    #   Check if swap backing store is full
    #   Use 64 bit Java on a 64 bit OS
    #   Decrease Java heap size (-Xmx/-Xms)
    #   Decrease number of Java threads
    #   Decrease Java thread stack sizes (-Xss)
    #   Set larger code cache with -XX:ReservedCodeCacheSize=
    # This output file may be truncated or incomplete.
    #
    #  Out of Memory Error (os_linux.cpp:2627), pid=1094, tid=0x00007fac4041e700
    #
    # JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
    # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
    #

...

    ---------------  S Y S T E M  ---------------
    OS:DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
    uname:Linux 3.13.0-101-generic #148-Ubuntu SMP Thu Oct 20 22:08:32 UTC 2016 x86_64
    libc:glibc 2.19 NPTL 2.19
    rlimit: STACK 8192k, CORE 0k, NPROC 30034, NOFILE 4096, AS infinity
    load average:10.38 6.50 2.88

In the information above, I can see the following line:

    rlimit: STACK 8192k, CORE 0k, NPROC 30034, NOFILE 4096, AS infinity

My question is the following: why does NOFILE not correspond to the value I have set on my system (in the limits.conf file it should be 20000)? When I run ulimit -n as the same user that runs the JVM, I get a different value. Note that the STACK value displayed is the one I set on my system, not the default. The JVM runs on an AWS c3.large on-demand instance.

Here is the result of the ulimit -a command:

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 30034
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 20000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 30034
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
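
On Linux, the limits a running process actually has can be read from /proc/&lt;pid&gt;/limits, which makes it possible to compare the JVM's real limits with the ulimit output above. A minimal C sketch (cat /proc/&lt;pid&gt;/limits from a shell does the same):

    /* Print the resource limits of a running process as the kernel sees
     * them. Linux-specific (relies on /proc).
     * Usage: ./limits <pid>   -- e.g. the pid of the Java process */
    #include <stdio.h>

    int main(int argc, char **argv) {
        char path[64];
        snprintf(path, sizeof(path), "/proc/%s/limits",
                 argc > 1 ? argv[1] : "self");

        FILE *f = fopen(path, "r");
        if (f == NULL) {
            perror(path);
            return 1;
        }
        int c;
        while ((c = fgetc(f)) != EOF)
            putchar(c);
        fclose(f);
        return 0;
    }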

Best Answer

I checked the source code¹. It turns out that the rlimit values printed in the dump are obtained by calling getrlimit. That is a POSIX library function that gets the information directly from the operating system. The JVM does not reduce its NOFILE rlimit, but in some circumstances it will increase the soft rlimit to the hard rlimit value.
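
As an illustration of that mechanism (a minimal C sketch, not the HotSpot source itself), this is what reading and bumping the NOFILE limit looks like:

    /* Sketch of the mechanism described above -- not HotSpot's actual code.
     * getrlimit() reads the soft/hard NOFILE pair straight from the kernel;
     * setrlimit() can raise the soft limit up to the hard limit without
     * any special privilege. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("NOFILE soft=%llu hard=%llu\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);

        rl.rlim_cur = rl.rlim_max;   /* raise soft limit to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }

If I read the HotSpot source correctly, the bump is guarded by the MaxFDLimit flag, which is on by default.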

So I suspect that the rlimit values reported in the dump must have been set in the environment in which the JVM running the application was launched. That cannot be the same environment as the shell in which you ran ulimit.
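
The mechanics behind that suspicion: rlimits are inherited across fork()/exec(), so the JVM ends up with whatever its launcher had. In particular, a process started from an init script or supervisor never passes through pam_limits, so values from /etc/security/limits.conf (which is applied to PAM login sessions) do not reach it. A minimal sketch of the inheritance:

    /* Demonstrates rlimit inheritance: the child (standing in for the JVM)
     * sees the limit its launcher set, not what an interactive login shell
     * would report. Linux/POSIX. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit rl;
        getrlimit(RLIMIT_NOFILE, &rl);
        rl.rlim_cur = 1024;            /* the "launcher" lowers its soft limit */
        setrlimit(RLIMIT_NOFILE, &rl);

        if (fork() == 0) {             /* child stands in for the JVM */
            getrlimit(RLIMIT_NOFILE, &rl);
            printf("child NOFILE soft=%llu\n",
                   (unsigned long long) rl.rlim_cur);
            _exit(0);                  /* prints 1024, not the shell's 20000 */
        }
        wait(NULL);
        return 0;
    }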


¹ - ... using find and grep, so I may have missed something.

Regarding java - Wrong rlimit information in the hs_err_pidXXX.log file when the JVM crashes, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40716989/
