java - dag-scheduler-event-loop java.lang.OutOfMemoryError: unable to create new native thread


After running for 5-6 hours, I get the following error from the spark-driver program. I am using Ubuntu 16.04 LTS and open-jdk-8.

Exception in thread "ForkJoinPool-50-worker-11" Exception in thread "dag-scheduler-event-loop" Exception in thread "ForkJoinPool-50-worker-13" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
at scala.concurrent.forkjoin.ForkJoinPool.deregisterWorker(ForkJoinPool.java:1795)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:117)
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
at scala.concurrent.forkjoin.ForkJoinPool.signalWork(ForkJoinPool.java:1966)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.push(ForkJoinPool.java:1072)
at scala.concurrent.forkjoin.ForkJoinTask.fork(ForkJoinTask.java:654)
at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.start(Tasks.scala:377)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.start(Tasks.scala:443)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$$anonfun$spawnSubtasks$1.apply(Tasks.scala:189)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$$anonfun$spawnSubtasks$1.apply(Tasks.scala:186)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.spawnSubtasks(Tasks.scala:186)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.spawnSubtasks(Tasks.scala:443)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.internal(Tasks.scala:157)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.internal(Tasks.scala:443)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:149)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:958)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:953)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
at scala.concurrent.forkjoin.ForkJoinPool.deregisterWorker(ForkJoinPool.java:1795)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:117)

This error is produced by the Spark driver program, which runs in client mode by default, so some people suggest simply increasing the heap size by passing the --driver-memory 3g flag or similar. However, the message "unable to create new native thread" actually means that the JVM asked the operating system to create a new thread and the OS could no longer allocate one. How many threads a JVM can obtain from the OS is platform-dependent, but it is typically around 32K threads on a 64-bit OS and JVM.
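For context, this failure mode is easy to reproduce outside Spark. The following is a minimal, hypothetical sketch (not from the original program): it keeps starting threads that block forever, and once the OS refuses to allocate another native thread, Thread.start() throws the same error seen above.

// Hypothetical repro: deliberately exhaust native threads.
// Each thread parks forever, holding one native thread, until
// Thread.start() eventually throws
// java.lang.OutOfMemoryError: unable to create new native thread
public class NativeThreadExhaustion {
    public static void main(String[] args) {
        long started = 0;
        while (true) {
            new Thread(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {
                }
            }).start();
            started++;
            if (started % 1000 == 0) {
                System.out.println("threads started: " + started);
            }
        }
    }
}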

When I run ulimit -a, I get the following:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 120242
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 120242
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

cat /proc/sys/kernel/pid_max

32768

cat /proc/sys/kernel/threads-max

240484

"unable to create new native thread" clearly has nothing to do with the heap, so I think this is more of an operating-system problem.
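One way to tell whether the driver is leaking threads (rather than legitimately needing that many) is to sample the JVM's own thread counters over time. Below is a minimal sketch using the standard JDK ThreadMXBean; the monitor itself is hypothetical and not part of the original setup.

// Log the JVM's live thread counters once a minute. A count that
// climbs steadily toward the OS limits above points to a thread leak.
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountMonitor {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            System.out.printf("live=%d peak=%d totalStarted=%d%n",
                    threads.getThreadCount(),
                    threads.getPeakThreadCount(),
                    threads.getTotalStartedThreadCount());
            Thread.sleep(60_000L);
        }
    }
}

The same signal is also available from outside the JVM, e.g. via ps -o nlwp <driver-pid> or a jstack thread dump.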

Best Answer

There appears to be a bug in Spark 2.0.0 where the ForkJoinPool creates too many threads, specifically in UnionRDD.scala, which is used when calling window operations on a DStream.
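A simplified, hypothetical sketch of that failure mode (not the actual UnionRDD.scala code): a fresh ForkJoinPool is created for every operation and never shut down, so each pool's workers linger until their idle timeout, and under a steady stream of window operations the native thread count grows faster than threads are reclaimed.

import java.util.concurrent.ForkJoinPool;

public class PoolPerOperationLeak {
    public static void main(String[] args) {
        for (int batch = 0; batch < 100_000; batch++) {
            // BUG: a brand-new pool per operation instead of one shared pool.
            ForkJoinPool pool = new ForkJoinPool(8);
            pool.submit(() -> { /* evaluate partitions in parallel */ }).join();
            // pool.shutdown() is never called; the workers only die after an
            // idle timeout, so short-lived pools pile up native threads.
        }
    }
}

The usual remedy for this pattern is to share one bounded pool, or to shut each pool down explicitly when its work completes.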

https://issues.apache.org/jira/browse/SPARK-17396

Based on this ticket, I upgraded to 2.0.1 and it solved the problem.

Regarding "java - dag-scheduler-event-loop java.lang.OutOfMemoryError: unable to create new native thread", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40315589/
