I'm trying to read in some JSON, infer a schema, and then write it back out again to S3 (s3a) as Parquet. For some reason, about a third of the way through the write portion of the run, Spark always fails with the error below. I can't find any obvious cause: it isn't running out of memory; there are no long GC pauses. There don't seem to be any other error messages in the logs of the individual executors.

The script runs fine on another set of data I have, which has a very similar structure but is several orders of magnitude smaller.

I am running spark 2.0.1-hadoop-2.7 and am using the FileOutputCommitter. The algorithm version doesn't seem to matter.

Edit: This does not appear to be a problem with malformed JSON or corrupted files. I have unzipped and read in each file individually with no errors.
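A minimal sketch of that per-file check, in case it is useful, assuming json4s is on the classpath and the files have been decompressed locally (localPath is a placeholder, not the real path):

import scala.io.Source
import org.json4s._
import org.json4s.jackson.JsonMethods.parseOpt

// count the lines in one decompressed file that json4s refuses to parse;
// a count of 0 for every file suggests the input json itself is fine
def countBadLines(localPath: String): Int =
  Source.fromFile(localPath).getLines().count(line => parseOpt(line).isEmpty)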
Here is a simplified version of the script:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType
import org.json4s._
import org.json4s.jackson.JsonMethods.parseOpt

object Foo {
  def parseJson(json: String): Option[Map[String, Any]] = {
    if (json == null)
      Some(Map())
    else
      parseOpt(json).map((j: JValue) => j.values.asInstanceOf[Map[String, Any]])
  }
}
// read in as text and keep only the rows that parse as json (via json4s);
// filtering, rather than mapping each row to its parsed Option, keeps this
// an RDD[String], which the steps below expect
val jsonRDD: RDD[String] = sc.textFile(inputPath)
  .filter(row => Foo.parseJson(row).isDefined)
// infer a schema that will encapsulate the most rows in a sample of size sampleRowNum
val schema: StructType = Infer.getMostCommonSchema(sc, jsonRDD, sampleRowNum)

// check each document's compatibility with the inferred schema
val jsonWithCompatibilityRDD: RDD[(String, Boolean)] = jsonRDD
  .map(js => (js, Infer.getSchemaCompatibility(schema, Infer.inferSchema(js)).toBoolean))
  .repartition(partitions)

val jsonCompatibleRDD: RDD[String] = jsonWithCompatibilityRDD
  .filter { case (js: String, compatible: Boolean) => compatible }
  .map { case (js: String, _: Boolean) => js }

// create a dataframe from documents with a compatible schema
val dataFrame: DataFrame = spark.read.schema(schema).json(jsonCompatibleRDD)
It completes the earlier schema-inference steps successfully. The error itself occurs on the last line, though I suppose that could implicate at least the preceding statements, if not something earlier:
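For completeness: the write itself is elided from the simplified script above; it is just a plain Parquet write of the DataFrame to an s3a path, roughly as follows, with outputPath standing in for the real s3a:// URI:

// write the DataFrame back out to s3 via the s3a filesystem as parquet
dataFrame.write.parquet(outputPath)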
org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1345)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Suppressed: java.lang.NullPointerException
at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1354)
... 9 more
Caused by: com.amazonaws.AmazonClientException: Unable to unmarshall response (Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler). Response Code: 200, Response Text: OK
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:399)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:604)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:962)
at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:1147)
at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:1136)
at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:142)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:400)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:117)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
... 13 more
Caused by: com.amazonaws.AmazonClientException: Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:150)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListBucketObjectsResponse(XmlResponsesSaxParser.java:279)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:75)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:72)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:712)
... 29 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 2; XML document structures must start and end within the same entity.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLEntityManager.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.skipChar(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:141)
... 35 more
Here is my session configuration:
spark.executor.extraJavaOptions -XX:+UseG1GC -XX:MaxPermSize=1G -XX:+HeapDumpOnOutOfMemoryError
spark.executor.memory 16G
spark.executor.uri https://s3.amazonaws.com/foo/spark-2.0.1-bin-hadoop2.7.tgz
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.buffer.dir /raid0/spark
spark.hadoop.fs.s3n.buffer.dir /raid0/spark
spark.hadoop.fs.s3a.connection.timeout 500000
spark.hadoop.fs.s3n.multipart.uploads.enabled true
spark.hadoop.parquet.block.size 2147483648
spark.hadoop.parquet.enable.summary-metadata false
spark.jars.packages com.databricks:spark-avro_2.11:3.0.1
spark.local.dir /raid0/spark
spark.mesos.coarse false
spark.mesos.constraints priority:1
spark.network.timeout 600
spark.rpc.message.maxSize 500
spark.speculation false
spark.sql.parquet.mergeSchema false
spark.sql.planner.externalSort true
spark.submit.deployMode client
spark.task.cpus 1
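For reference, a few of these S3A/Parquet settings in their equivalent programmatic form when building the session (just a sketch, not the actual launch code):

import org.apache.spark.sql.SparkSession

// programmatic equivalent of a few of the settings listed above
val spark = SparkSession.builder()
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.connection.timeout", "500000")
  .config("spark.hadoop.parquet.enable.summary-metadata", "false")
  .config("spark.sql.parquet.mergeSchema", "false")
  .getOrCreate()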
Best Answer
I can think of three possible causes for this problem.
Original question: "scala - Spark: SAXParseException while writing to parquet on s3" on Stack Overflow: https://stackoverflow.com/questions/40270486/