I have an EMR cluster with a single "c3.8xlarge" machine. After reading several resources, I understood that I have to allow a decent amount of off-heap memory because I am using pyspark, so I configured the cluster as follows:

One executor:

When I cache() the DataFrame, it takes about 3.6GB of memory.

Now, when I call collect() or toPandas() on the DataFrame, the process crashes.
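For context, a minimal sketch of the pattern described above (the session setup, data path, and DataFrame name are placeholders, not the actual job):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("collect-repro-sketch").getOrCreate()

# Hypothetical source; the real DataFrame comes from the cluster's own data.
df = spark.read.parquet("s3://example-bucket/example-table/")

df.cache()
df.count()            # materializes the cache; the storage tab then reports ~3.6GB

rows = df.collect()   # pulls every Row into the driver process
# pdf = df.toPandas() # same failure mode: collects Rows, then builds a local pandas DataFrame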
When I call collect() or toPandas(), I get this error:
Py4JJavaError: An error occurred while calling o181.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 6.0 failed 4 times, most recent failure: Lost task 5.3 in stage 6.0 (TID 110, ip-10-0-47-207.prod.eu-west-1.hs.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Container marked as failed: container_1511879540686_0005_01_000016 on host: ip-10-0-47-207.prod.eu-west-1.hs.internal. Exit status: 137. Diagnostics: Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1690)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1678)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1677)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1677)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:855)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:855)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:855)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1905)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1849)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:671)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:2803)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2800)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2823)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:2800)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 2.0 failed 4 times, most recent failure: Lost task 7.3 in stage 2.0 (TID 41, ip-10-0-33-57.prod.eu-west-1.hs.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 13.5 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1690)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1678)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1677)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1677)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:855)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:855)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:855)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1905)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1849)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:671)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:458)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
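The YARN message in the second trace explicitly suggests raising spark.yarn.executor.memoryOverhead. A hedged sketch of setting it from PySpark before the context is created (the values are placeholders; on EMR these settings are often supplied through the cluster's Spark configuration or spark-submit instead):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("memory-overhead-sketch")
    .config("spark.executor.memory", "6g")                 # placeholder value
    .config("spark.yarn.executor.memoryOverhead", "8192")  # in MB; placeholder value
    .getOrCreate()
)

Note that these only take effect if set before the SparkContext exists; driver memory in particular usually has to be configured outside the application (spark-defaults or spark-submit).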
Best Answer
TL;DR I believe you are seriously underestimating the memory requirements.

Even assuming the data is fully cached, the storage info shows only a fraction of the peak memory needed to bring the data back to the driver.

The collected data is stored as a List[Row]. The latter also stores the column names, which further increases memory usage.

The toPandas implementation collects the Rows first, then creates the Pandas DataFrame locally. This further increases (possibly doubles) memory usage. Fortunately, this part has already been addressed on master (Spark 2.3), where the more direct approach uses Arrow serialization (SPARK-13534 - Implement Apache Arrow serializer for Spark DataFrame for use in DataFrame.toPandas).

Regarding "pandas - collect() or toPandas() on a large DataFrame in pyspark/EMR", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/47536123/
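As a hedged illustration of the Arrow path the answer mentions (assumes Spark 2.3+ with pyarrow installed on the driver; the config key is the Spark 2.3 name and df is a placeholder DataFrame):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("arrow-topandas-sketch")
    .config("spark.sql.execution.arrow.enabled", "true")  # Spark 2.3 name for Arrow-backed toPandas
    .getOrCreate()
)

df = spark.range(1_000_000).selectExpr("id", "id * 2 AS doubled")  # placeholder data

# With Arrow enabled, toPandas() transfers columnar batches instead of first
# materializing a List[Row], which trims the driver-side memory spike.
pdf = df.toPandas()
print(pdf.shape)

Even with Arrow, collecting a DataFrame that is large relative to spark.driver.memory can still fail; sampling, aggregating, or writing out to storage first remains the safer route.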