
java - OutOfMemoryError: GC overhead limit exceeded while acquiring locks on log4j objects

Reposted. Author: 行者123. Updated: 2023-12-02 14:46:17

Can anyone help me pinpoint exactly where the problem lies? Is it the JVM, Log4j, or something else in our application?

We are running a multi-threaded application on a Solaris 10 (SUNW, Sun-Fire-V240) server with JDK 1.6.0_24. It uses RMI calls to communicate with clients.

Our application hangs. I saw the following OutOfMemoryError in the thread dump. As I understand it, this happens because GC is able to reclaim only 2% of the heap.

    # java.lang.OutOfMemoryError: GC overhead limit exceeded
    Heap
     PSYoungGen      total 304704K, used 154560K [0xe0400000, 0xfbc00000, 0xfbc00000)
      eden space 154560K, 100% used [0xe0400000,0xe9af0000,0xe9af0000)
      from space 150144K, 0% used [0xf2960000,0xf2960000,0xfbc00000)
      to   space 145856K, 0% used [0xe9af0000,0xe9af0000,0xf2960000)
     PSOldGen        total 897024K, used 897023K [0xa9800000, 0xe0400000, 0xe0400000)
      object space 897024K, 99% used [0xa9800000,0xe03ffff0,0xe0400000)
     PSPermGen       total 28672K, used 27225K [0xa3c00000, 0xa5800000, 0xa9800000)
      object space 28672K, 94% used [0xa3c00000,0xa5696580,0xa5800000)

In my case this appears to be because GC cannot reclaim memory while a large number of threads are waiting. Looking at the thread dump, most threads are waiting to acquire a lock on an org.apache.log4j.Logger. We are using log4j-1.2.15.

In the first thread's trace below, it holds locks on two objects, and the other threads (around 50) are waiting to acquire those locks. Nearly identical traces were seen over a 20-minute period.

Here is the thread dump:

    "pool-3-thread-51" prio=3 tid=0x00a38000 nid=0xa4 runnable [0xa0d5f000]
       java.lang.Thread.State: RUNNABLE
        at java.text.DateFormat.format(DateFormat.java:316)
        at org.apache.log4j.helpers.PatternParser$DatePatternConverter.convert(PatternParser.java:443)
        at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:65)
        at org.apache.log4j.PatternLayout.format(PatternLayout.java:506)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
        at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        - locked (a org.apache.log4j.RollingFileAppender)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        - locked (a org.apache.log4j.Logger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:296)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)

    "Timer-3" prio=3 tid=0x0099a800 nid=0x53 waiting for monitor entry [0xa1caf000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
        - waiting to lock (a org.apache.log4j.RollingFileAppender)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        - locked (a org.apache.log4j.spi.RootLogger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.controlapp.export.AbstractOMDataCollector.run(AbstractOMDataCollector.java:100)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)

    "TrapHandlerThreadPool:Thread-10" prio=3 tid=0x014dac00 nid=0x4f waiting for monitor entry [0xa1d6f000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
        - waiting to lock (a org.apache.log4j.RollingFileAppender)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        - locked (a org.apache.log4j.Logger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.faultServer.db.ConnectionPool.printDataSourceStats(ConnectionPool.java:146)
        at com.airvana.faultServer.db.SQLUtil.freeConnection(SQLUtil.java:267)
        at com.airvana.faultServer.db.DbAPI.addEventOrAlarmOptimized(DbAPI.java:904)
        at com.airvana.faultServer.eventProcessing.EventProcessor.processEvent(EventProcessor.java:24)
        at com.airvana.faultServer.filters.BasicTrapFilter.processTrap(BasicTrapFilter.java:80)
        at com.airvana.faultServer.eventEngine.EventEngine.notifyTrapProcessors(EventEngine.java:314)
        at com.airvana.faultServer.eventEngine.NodewiseTrapQueue.run(NodewiseTrapQueue.java:94)
        at com.airvana.common.utils.ThreadPool$PoolThread.run(ThreadPool.java:356)

    "RMI TCP Connection(27927)-10.193.3.41" daemon prio=3 tid=0x0186c800 nid=0x1d53 waiting for monitor entry [0x9f84e000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:231)
        - waiting to lock (a org.apache.log4j.RollingFileAppender)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        - locked (a org.apache.log4j.Logger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.faultServer.processCommunications.ConfigAppCommReceiver.sendEvent(ConfigAppCommReceiver.java:178)
        at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
        at sun.rmi.transport.Transport$1.run(Transport.java:159)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

    "pool-3-thread-49" prio=3 tid=0x01257800 nid=0xa1 waiting for monitor entry [0xa0def000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.Category.callAppenders(Category.java:204)
        - waiting to lock (a org.apache.log4j.Logger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.processSeqNumber(NioNotificationHandler.java:548)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:301)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:223)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:149)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipe

    "pool-3-thread-44" prio=3 tid=0x00927800 nid=0x9b waiting for monitor entry [0xa0f0f000]
       java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.Category.callAppenders(Category.java:204)
        - waiting to lock (a org.apache.log4j.Logger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.info(Category.java:666)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData(NioNotificationHandler.java:296)
        at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived(NioNotificationHandler.java:145)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:105)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:223)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:149)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:385)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:324)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:306)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:221)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:87)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:567)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:803)
        at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:76)
        at org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor$ChildExecutor.run(OrderedMemoryAwareThreadPoolExecutor.java:314)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

Best Answer

An OutOfMemoryError due to the GC overhead limit occurs when the JVM decides that the proportion of time being spent running the garbage collector is too high. It is a classic sign that the heap is nearly full.

When the heap is too full, the JVM spends more and more time garbage collecting while reclaiming less and less memory each time, and the percentage of time left for useful work shrinks accordingly.
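For HotSpot's parallel collector (which your heap summary, with PSYoungGen/PSOldGen, indicates you are using), the overhead limit is governed by a pair of -XX flags: roughly, the error is thrown once about 98% of total time goes to GC while less than 2% of the heap is recovered. A sketch of how these can be inspected or changed follows; the jar name is a placeholder, and exact defaults can vary by JVM build:

```shell
# Disable the overhead-limit check entirely (the JVM will then keep
# collecting instead of throwing the error -- rarely a good idea):
java -XX:-UseGCOverheadLimit -jar app.jar

# Or tune the thresholds; the defaults are approximately 98% of time
# in GC (GCTimeLimit) and less than 2% of heap freed (GCHeapFreeLimit):
java -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar app.jar
```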

My hypothesis is that your loggers are backing up because there is not enough time between GC runs to keep up with the logging rate. So the large number of blocked threads is a secondary symptom, not the root cause of the problem.


Assuming the above is correct, the short-term fix is to restart the application with JVM options that give it a larger heap. You could also change the GC overhead threshold so that your application dies sooner. (That may sound strange, but it is probably better for your application to die quickly than to limp along for minutes or hours before grinding to a halt.)
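As an illustration only: the heap summary above shows an old generation of roughly 876 MB that is 99% full, so a restart would need -Xmx raised well beyond the current total. The class name and sizes below are placeholders to be tuned to your machine's RAM:

```shell
# Hypothetical restart with a larger fixed-size heap and perm gen;
# com.example.YourMainClass and the sizes are placeholders.
java -Xms1536m -Xmx1536m \
     -XX:MaxPermSize=128m \
     com.example.YourMainClass
```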

The real fix is to figure out why you are running out of heap space. You need to enable GC logging and watch the memory utilization trends while the application runs... over hours, days, weeks. If you notice long-term growth in memory usage, chances are you have a memory leak of some kind, and you will need to track it down with a memory profiler.
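On JDK 6 HotSpot, GC logging can be turned on with the standard print flags; a typical invocation (the log path and class name are assumptions) looks like:

```shell
# Write a timestamped record of every collection to gc.log so the
# long-term heap-usage trend can be analyzed after the fact.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -Xloggc:/var/log/myapp/gc.log \
     com.example.YourMainClass
```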

Regarding "java - OutOfMemoryError: GC overhead limit exceeded while acquiring locks on log4j objects", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/6198885/
