I'm trying to convert a CSV file to Parquet, using 64-bit Python 3.6 and Spark 2.3.1. I can't find a solution for the traceback below.
I have this CSV:
Corp,Vathanya Beck
Corp,Mario Bazile
Open,Hasom Bennitt-traflet
Open,Jonathon Berry
Corp,Ayinde Amezquita
Corp,Carol Airiofolo
Corp,Wilfredo Brozo
I can turn the CSV into Parquet with the pandas to_parquet function, using the pyarrow engine for the conversion, but somehow Spark does not work.
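For reference, a minimal sketch of the pandas route that works for me, assuming pandas and pyarrow are installed (the output name r_pandas.parquet is just an example):
import pandas as pd

# The CSV has no header row, so name the two columns explicitly.
df = pd.read_csv('cv_transactions.csv', header=None, names=['type', 'name1'])
df.to_parquet('r_pandas.parquet', engine='pyarrow')
I'm using the following Spark code to convert the CSV to Parquet: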
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.session import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
spark = SparkSession.builder.config(conf=conf).appName("BLEH").getOrCreate()

# Two nullable string columns: the account type and the person's name.
schema = StructType([StructField('type', StringType(), True),
                     StructField('name1', StringType(), True)])
df = sqlContext.read.csv('cv_transactions.csv', schema)
df.show()
Here is the output after reading the CSV into a Spark dataframe:
+----+--------------------+
|type| name1|
+----+--------------------+
|Corp| Vathanya Beck|
|Corp| Mario Bazile|
|Open|Hasom Bennitt-tra...|
|Open| Jonathon Berry|
|Corp| Ayinde Amezquita|
|Corp| Carol Airiofolo|
|Corp| Wilfredo Brozo|
+----+--------------------+
But when I try to convert to Parquet with the following code:
df.write.parquet('r.parquet')
it gives me this error:
Py4JJavaError: An error occurred while calling o347.parquet.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:547)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 25.0 failed 1 times, most recent failure: Lost task 0.0 in stage 25.0 (TID 25, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: (null) entry in command string: null chmod 0644 C:\Users\rohan_pawar\Documents\parquet\r\_temporary\0\_temporary\attempt_20181214173824_0025_m_000000_0\part-00000-e076c220-6226-4617-abf9-14e7f3a2ce81-c000.snappy.parquet
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:241)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:342)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:302)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:367)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:378)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
... 8 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
... 31 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.io.IOException: (null) entry in command string: null chmod 0644 C:\Users\rohan_pawar\Documents\parquet\r\_temporary\0\_temporary\attempt_20181214173824_0025_m_000000_0\part-00000-e076c220-6226-4617-abf9-14e7f3a2ce81-c000.snappy.parquet
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:241)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:342)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:302)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:367)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:378)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
... 8 more
Best answer
I was able to run your code just fine. My Spark version is below:
$ pyspark --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
      /_/
Using Scala version 2.11.8, OpenJDK 64-Bit Server VM, 1.8.0_191
See Py4JJavaError: An error occurred while calling o26.parquet. (Reading Parquet file) and check whether you have the same problem. Also check which pyspark version you are running; if the problem persists, update the question with the full stack trace.
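That linked question points at what the Caused by: java.io.IOException: (null) entry in command string: null chmod 0644 ... line usually means on Windows: Hadoop's native helper winutils.exe is not available, so Hadoop cannot shell out to set permissions on the local part files Spark writes. A minimal sketch of the common workaround, assuming you download a winutils.exe built for the Hadoop version your Spark distribution bundles and place it in C:\hadoop\bin (the path is an example):
import os
import pyspark

print(pyspark.__version__)  # confirm which pyspark you are actually running

# Example path: HADOOP_HOME must contain bin\winutils.exe, and both
# variables must be set before the SparkContext/SparkSession is created.
os.environ['HADOOP_HOME'] = r'C:\hadoop'
os.environ['PATH'] = r'C:\hadoop\bin;' + os.environ['PATH']
With HADOOP_HOME pointing at a matching winutils.exe, the chmod call issued by Hadoop's RawLocalFileSystem can succeed and df.write.parquet('r.parquet') should complete.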
About python - converting a CSV file to a Parquet file with pyspark: Py4JJavaError: An error occurred while calling o347.parquet, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53779327/