
scala - What is the delimiter equivalent to ^G when reading a CSV with Spark?

Repost. Author: 行者123. Updated: 2023-12-01 03:08:36

So, I really need help with something silly, but apparently I can't manage it on my own.

I have a set of rows in a file with this format (read with less on OSX):

XXXXXXXX^GT^XXXXXXXX^G\N^G0^GDL^G\N^G2018-09-14 13:57:00.0^G2018-09-16 00:00:00.0^GCompleted^G\N^G\N^G1^G2018-09-16 21:41:02.267^G1^G2018-09-16 21:41:02.267^GXXXXXXX^G\N
YYYYYYYY^GS^XXXXXXXX^G\N^G0^GDL^G\N^G2018-08-29 00:00:00.0^G2018-08-29 23:00:00.0^GCompleted^G\N^G\N^G1^G2018-09-16 21:41:03.797^G1^G2018-09-16 21:41:03.81^GXXXXXXX^G\N
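These rows can be sanity-checked outside Spark first. A minimal Python sketch (the row below is abbreviated from the sample above), assuming the ^G that less displays is the ASCII BEL control character, 0x07:

```python
import csv
import io

# Rebuild an abbreviated sample row with BEL (0x07) separators, which is
# the character `less` renders as ^G.
row = "\a".join(["XXXXXXXX", "T", "XXXXXXXX", "\\N", "0", "DL"])
fields = next(csv.reader(io.StringIO(row), delimiter="\a"))
print(fields)  # ['XXXXXXXX', 'T', 'XXXXXXXX', '\\N', '0', 'DL']
```

If the real file splits the same way on 0x07, the problem is purely in how the delimiter is being passed to Spark, not in the data.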

So the delimiter is the BEL character (^G), and I am loading the CSV this way:
val df = sqlContext.read.format("csv")
.option("header", "false")
.option("inferSchema", "true")
.option("delimiter", "\u2407")
.option("nullValue", "\\N")
.load("part0000")
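One likely cause here: "\u2407" is not BEL. U+2407 is the printable glyph "␇" (SYMBOL FOR BELL), which merely depicts the control character; the byte in the file is 0x07. A quick check (Python used only for illustration; the same code points apply in Scala):

```python
# U+2407 is a printable display symbol; U+0007 is the actual BEL / ^G byte.
assert ord("\u2407") == 0x2407   # 9223, never present in the file
assert ord("\u0007") == 7        # the separator the data actually uses
print(hex(ord("\u2407")), hex(ord("\u0007")))
```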

But when I read it back, each row comes out as a single column, like this:
XXXXXXXXCXXXXXXXX\N0DL\N2018-09-15 00:00:00.02018-09-16 00:00:00.0Completed\N\N12018-09-16 21:41:03.25712018-09-16 21:41:03.263XXXXXXXX\N
XXXXXXXXSXXXXXXXX\N0DL\N2018-09-15 00:00:00.02018-09-15 23:00:00.0Completed\N\N12018-09-16 21:41:03.3712018-09-16 21:41:03.373XXXXXXXX\N

It is as if there were an unknown character (you see nothing there only because of how I formatted it on Stack Overflow) in place of each ^G.

Update:
Could this be a Scala-specific limitation in Spark?
If I run the code with Scala like this:
val df = sqlContext.read.format("csv")
.option("header", "false")
.option("inferSchema", "true")
.option("delimiter", "\\a")
.option("nullValue", "\\N")
.load("part-m-00000")

display(df)

I get a big fat:
java.lang.IllegalArgumentException: Unsupported special character for delimiter: \a

Whereas if I run it with Python:
df = sqlContext.read.format('csv').options(header='false', inferSchema='true', delimiter = "\a", nullValue = '\\N').load('part-m-00000')

display(df)

Everything works fine! (In Python, the "\a" in the source literal is resolved by Python itself into the BEL character before it ever reaches Spark, so Spark receives a one-character delimiter.)

Best Answer

This looks like a limitation in these spark-scala versions. These are the CSV delimiters supported in the code:

apache/spark/sql/catalyst/csv/CSVOptions.scala

val delimiter = CSVExprUtils.toChar(
  parameters.getOrElse("sep", parameters.getOrElse("delimiter", ",")))

--- CSVExprUtils.toChar
apache/spark/sql/catalyst/csv/CSVExprUtils.scala
  def toChar(str: String): Char = {
    (str: Seq[Char]) match {
      case Seq() => throw new IllegalArgumentException("Delimiter cannot be empty string")
      case Seq('\\') => throw new IllegalArgumentException("Single backslash is prohibited." +
        " It has special meaning as beginning of an escape sequence." +
        " To get the backslash character, pass a string with two backslashes as the delimiter.")
      case Seq(c) => c
      case Seq('\\', 't') => '\t'
      case Seq('\\', 'r') => '\r'
      case Seq('\\', 'b') => '\b'
      case Seq('\\', 'f') => '\f'
      // In case user changes quote char and uses \" as delimiter in options
      case Seq('\\', '\"') => '\"'
      case Seq('\\', '\'') => '\''
      case Seq('\\', '\\') => '\\'
      case _ if str == """\u0000""" => '\u0000'
      case Seq('\\', _) =>
        throw new IllegalArgumentException(s"Unsupported special character for delimiter: $str")
      case _ =>
        throw new IllegalArgumentException(s"Delimiter cannot be more than one character: $str")
    }
  }
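The branching above can be re-sketched in Python to see why the Scala attempt fails (the function name and error strings below only mirror the Scala source; this is not a real Spark API). A two-character escape like "\a", i.e. backslash plus 'a', falls into the unsupported-special-character branch, while a literal one-character BEL is accepted by the single-character case:

```python
def to_char(s: str) -> str:
    """Rough sketch of CSVExprUtils.toChar's branching; not a real Spark API."""
    if s == "":
        raise ValueError("Delimiter cannot be empty string")
    if s == "\\":
        raise ValueError("Single backslash is prohibited")
    if len(s) == 1:
        return s  # a literal BEL character lands here and is accepted
    escapes = {"\\t": "\t", "\\r": "\r", "\\b": "\b", "\\f": "\f",
               "\\\"": "\"", "\\'": "'", "\\\\": "\\"}
    if s in escapes:
        return escapes[s]
    if s == "\\u0000":
        return "\u0000"
    if len(s) == 2 and s[0] == "\\":
        # "\\a" (backslash + 'a'), as passed in the Scala snippet, ends up here
        raise ValueError(f"Unsupported special character for delimiter: {s}")
    raise ValueError(f"Delimiter cannot be more than one character: {s}")

assert to_char("\u0007") == "\a"  # a literal BEL passes through unchanged
```

Given this, a likely workaround on the Scala side is to pass the control character itself, e.g. .option("delimiter", "\u0007"): the Scala compiler resolves the \uXXXX escape into a one-character string before Spark ever inspects it, so it hits the single-character case rather than the escape-sequence handling.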

Regarding "scala - What is the delimiter equivalent to ^G when reading a CSV with Spark?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/54158996/
