r - Sparklyr ignoring the line separator


I am trying to read a ~2 GB .csv (about 5 million rows) with sparklyr:

bigcsvspark <- spark_read_csv(sc, "bigtxt", "path",
                              delimiter = "!",
                              infer_schema = FALSE,
                              memory = TRUE,
                              overwrite = TRUE,
                              columns = list(
                                SUPRESSED COLUMNS AS = 'character'))

and I get the following error:

Job aborted due to stage failure: Task 9 in stage 15.0 failed 4 times, most recent failure: Lost task 9.3 in stage 15.0 (TID 3963,
10.1.4.16): com.univocity.parsers.common.TextParsingException: Length of parsed input (1000001) exceeds the maximum number of characters defined in your parser settings (1000000). Identified line separator characters in the parsed content. This may be the cause of the error. The line separator in your parser settings is set to '\n'. Parsed content: ---lines of my csv---[\n]
---begin of a splited line --- Parser Configuration: CsvParserSettings: ... default settings ...

and:

CsvFormat:
Comment character=\0
Field delimiter=!
Line separator (normalized)=\n
Line separator sequence=\n
Quote character="
Quote escape character=\
Quote escape escape character=null Internal state when error was thrown:
line=10599,
column=6,
record=8221,
charIndex=4430464,
headers=[---SUPRESSED HEADER---],
content parsed=---more lines without the delimiter.---

As shown above, at some point the line separator starts being ignored. In plain R the file can be read without any problem, simply by passing the path and the separator to read.csv.
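
For reference, a minimal sketch of the plain-R read described above, assuming the same placeholder path and the '!' field separator (stringsAsFactors is only a convenience here, not something from the question):

# Plain base-R read of the same file, using '!' as the field separator.
df <- read.csv("path", sep = "!", stringsAsFactors = FALSE)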

Best answer

It looks like the file is not really a valid CSV, and I wonder whether spark_read_text() would work better in this situation. You should be able to get all of the lines into Spark and then split them into fields in memory; that last part will be the trickiest. A rough sketch of this idea follows below.
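
A minimal sketch of that approach, assuming the same connection sc and placeholder path as in the question; the table name "bigtxt_raw" and the column names passed to into are hypothetical:

library(sparklyr)
library(dplyr)

# Read the file as plain lines; spark_read_text() yields a single
# string column named "line".
raw_lines <- spark_read_text(sc, "bigtxt_raw", "path")

# Split each line on '!' inside Spark (split() is translated to Spark
# SQL's split()), then expand the resulting array column into separate
# string columns with sdf_separate_column().
fields_tbl <- raw_lines %>%
  mutate(fields = split(line, "!")) %>%
  sdf_separate_column("fields", into = c("col_a", "col_b", "col_c"))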

Regarding r - Sparklyr ignoring the line separator, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/46736470/
