
java - Huge time gaps between Spark jobs


I created and persisted a df1, and then I perform the following operations on it:

df1.persist()                          // the Storage tab in the Spark UI says it is 3 GB

df2 = df1.groupBy(col1).pivot(col2)    // a df with 4,827 columns and 40,107 rows
df2.collect()
df3 = df1.groupBy(col2).pivot(col1)    // a df with 40,107 columns and 4,827 rows

----- it hangs here for almost 2 hours -----

df4 = ...   // Imputer or na.fill on df3
df5 = ...   // VectorAssembler on df4
...         // PCA on df5
df1.unpersist()

I have a cluster of 16 nodes (each node has 1 worker and 1 executor with 4 cores and 24 GB of RAM) and one master (with 15 GB of RAM). spark.shuffle.partitions is also 192. It hangs for 2 hours and nothing happens; there is no activity in the Spark UI. Why does it hang for so long? Is it the DagScheduler? How can I check it? Please let me know if you need more information.
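
One way to see whether anything is actually running during the apparent hang is Spark's status tracker; a minimal sketch, assuming an existing SparkSession named spark (the shuffle setting is shown under its full name, spark.sql.shuffle.partitions):

// Assumes an existing SparkSession called `spark`.
// DataFrame shuffle parallelism is controlled by spark.sql.shuffle.partitions:
println(spark.conf.get("spark.sql.shuffle.partitions"))    // "192" in the setup described above

// The status tracker reports whether any job or stage is active even when the UI looks idle:
val tracker = spark.sparkContext.statusTracker
println(tracker.getActiveJobIds().mkString(", "))          // empty while the driver is busy outside a job
println(tracker.getActiveStageIds().mkString(", "))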

---- EDIT 1 ----

After waiting almost two hours it continues, and then eventually fails. Below are the Stages and Executors tabs from the Spark UI: (screenshots: Stages, Executors)

Also, the stderr file on a worker node says:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000003fe900000, 6434586624, 0) failed; error='Cannot allocate memory' (errno=12)

In addition, a file named "hs_err_pid11877" seems to have been generated in the same folder as stderr and stdout, containing the following:

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 6434586624 bytes for committing reserved memory.
Possible reasons:
  The system is out of physical RAM or swap space
  The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
Possible solutions:
  Reduce memory load on the system
  Increase physical memory or swap space
  Check if swap backing store is full
  Decrease Java heap size (-Xmx/-Xms)
  Decrease number of Java threads
  Decrease Java thread stack sizes (-Xss)
  Set larger code cache with -XX:ReservedCodeCacheSize=
JVM is running with Zero Based Compressed Oops mode in which the Java heap is placed in the first 32GB address space. The Java Heap base address is the maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress to set the Java Heap base and to place the Java Heap above 32GB virtual address.
This output file may be truncated or incomplete.
Out of Memory Error (os_linux.cpp:2792), pid=11877, tid=0x00007f237c1f8700
JRE version: OpenJDK Runtime Environment (8.0_265-b01) (build 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01)
Java VM: OpenJDK 64-Bit Server VM (25.265-b01 mixed mode linux-amd64 compressed oops)
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

...as well as further information about the task that failed, GC details, etc.
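
The JVM's suggestion to decrease the heap maps onto Spark's executor memory settings; a hedged sketch of how that could be expressed before the SparkContext starts (the 18g and 4g figures are assumptions for illustration, not values from the post):

import org.apache.spark.sql.SparkSession

// Assumed values for illustration only: a smaller executor heap plus explicit
// off-heap headroom, so committing the heap fits inside a 24 GB worker node.
val spark = SparkSession.builder()
  .appName("pivot-job")                                // placeholder application name
  .config("spark.executor.memory", "18g")              // smaller heap (the -Xmx equivalent for executors)
  .config("spark.executor.memoryOverhead", "4g")       // native / off-heap headroom
  .getOrCreate()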

---- EDIT 2 ----

Here is the Tasks section of the last pivot (the stage with ID 16 in the stages picture), right before the hang. It seems that all 192 partitions hold a fairly even amount of data, from 15 to 20 MB.

(screenshot: tasksSection)

Best Answer

In Spark, pivot generates an extra stage to fetch the pivot values. This happens under the hood, can take some time, and depends on how your resources are allocated, etc.
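
One way to avoid that extra stage is to pass the pivot values explicitly; a minimal sketch, assuming df1, col1 and col2 from the question and a placeholder aggregation (the post does not show which aggregation is actually used):

import org.apache.spark.sql.functions.first

// Collect the distinct pivot values once, explicitly, instead of letting
// pivot() trigger its own hidden job to discover them.
val pivotValues = df1.select("col2").distinct().collect().map(_.get(0)).toSeq

val df2 = df1
  .groupBy("col1")
  .pivot("col2", pivotValues)        // explicit values: no extra stage to find them
  .agg(first("someValue"))           // "someValue" is a placeholder aggregation column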

Regarding "java - Huge time gaps between Spark jobs", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/64403440/
