I have a Dockerfile as follows:
FROM python:3.7
RUN apt-get update
RUN apt-get install default-jdk -y
COPY requirements.txt ./
RUN pip install -r requirements.txt
I haven't changed my requirements.txt file, so could this be because default-jdk has changed? The tests now fail with this traceback:
/usr/local/lib/python3.7/site-packages/pyspark/rdd.py:824: in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
/usr/local/lib/python3.7/site-packages/py4j/java_gateway.py:1160: in __call__
answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1291'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x7f6490c2a350>
target_id = 'z:org.apache.spark.api.python.PythonRDD', name = 'collectAndServe'
def get_return_value(answer, gateway_client, target_id=None, name=None):
"""Converts an answer received from the Java gateway into a Python object.
For example, string representation of integers are converted to Python
integer, string representation of objects are converted to JavaObject
instances, etc.
:param answer: the string returned by the Java gateway
:param gateway_client: the gateway client used to communicate with the Java
Gateway. Only necessary if the answer is a reference (e.g., object,
list, map)
:param target_id: the name of the object from which the answer comes from
(e.g., *object1* in `object1.hello()`). Optional.
:param name: the name of the member from which the answer comes from
(e.g., *hello* in `object1.hello()`). Optional.
"""
if is_error(answer)[0]:
if len(answer) > 1:
type = answer[1]
value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
if answer[1] == REFERENCE_TYPE:
raise Py4JJavaError(
"An error occurred while calling {0}{1}{2}.\n".
> format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
E : java.lang.IllegalArgumentException
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
E at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
E at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
E at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
E at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
E at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
E at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
E at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
E at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
E at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
E at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
E at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
E at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
E at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
E at scala.collection.immutable.List.foreach(List.scala:381)
E at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
E at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
E at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
E at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
E at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
E at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
E at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
E at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
E at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
E at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
E at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:153)
E at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:282)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
/usr/local/lib/python3.7/site-packages/py4j/protocol.py:320: Py4JJavaError
Best answer
Changing the base image to python:3.7-stretch worked for me.
The likely cause: the python:3.7 tag moved from Debian Stretch to Buster, where default-jdk now installs OpenJDK 11, while Spark 2.x only supports Java 8. The java.base/jdk.internal frames and the IllegalArgumentException thrown by the ASM5 ClassReader in the traceback are the classic symptom of Spark's closure cleaner encountering class files newer than it can parse. Pinning the -stretch variant keeps default-jdk on OpenJDK 8.
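For reference, a minimal sketch of the adjusted Dockerfile, assuming everything else stays as posted (only the FROM line changes):
# python:3.7-stretch pins Debian Stretch, where default-jdk still resolves to OpenJDK 8
FROM python:3.7-stretch
RUN apt-get update
RUN apt-get install default-jdk -y
COPY requirements.txt ./
RUN pip install -r requirements.txt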
On docker - installing pyspark in a Dockerfile, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58283180/
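As a quick sanity check after rebuilding the image (a hypothetical snippet, not part of the original post): a trivial collect() exercises the same PythonRDD.collectAndServe path that failed above, so it should pass on OpenJDK 8 and reproduce the Py4JJavaError on OpenJDK 11.
from pyspark.sql import SparkSession

# Start a local Spark session inside the container.
spark = SparkSession.builder.master("local[1]").appName("smoke-test").getOrCreate()

# collect() goes through PythonRDD.collectAndServe, the exact call that
# raised Py4JJavaError in the traceback above.
assert spark.sparkContext.parallelize(range(5)).collect() == [0, 1, 2, 3, 4]

spark.stop()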