apache-spark - Apache Spark: SparkFiles.get(fileName.txt) - unable to retrieve file contents from SparkContext

I used SparkContext.addFile("hdfs://host:54310/spark/fileName.txt") to add a file to the SparkContext. I verified that it was added using org.apache.spark.SparkFiles.get(fileName.txt), which returned an absolute path of the form /tmp/spark-xxxx/userFiles-xxxx/fileName.txt.

Now I want to read that file from the absolute path given above. I tried sc.textFile(org.apache.spark.SparkFiles.get("fileName.txt")).collect().foreach(println), but it treats the path returned by SparkFiles.get() as an HDFS path, which is incorrect.

I have searched extensively for anything useful on this, but without luck.

Is there anything wrong with my approach? Any help is much appreciated.

Here is the code and the result:
scala> sc.addFile("hdfs://localhost:54310/spark/fileName.txt")

scala> org.apache.spark.SparkFiles.get("fileName.txt")
res23: String = /tmp/spark-3646b5fe-0a67-4a16-bd25-015cc73533cd/userFiles-a7d54640-fab2-4dfa-a94f-7de6f74a0764/fileName.txt

scala> sc.textFile(org.apache.spark.SparkFiles.get("fileName.txt")).collect().foreach(println)
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:54310/tmp/spark-3646b5fe-0a67-4a16-bd25-015cc73533cd/userFiles-a7d54640-fab2-4dfa-a94f-7de6f74a0764/fileName.txt
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
... 49 elided

Best Answer

Refer to the local file using the "file://" scheme. Without an explicit scheme, sc.textFile resolves the path against the cluster's default filesystem (HDFS here), which is why the local path is looked up on HDFS and not found.

sc.textFile("file://" + org.apache.spark.SparkFiles.get("fileName.txt"))
.collect()
.foreach(println)
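
A minimal end-to-end sketch (assuming the same local spark-shell session and the hdfs://localhost:54310/spark/fileName.txt path from the question). Since SparkFiles.get() returns a path on the local filesystem, the explicit file:// prefix keeps textFile from resolving it against HDFS:

// Copy the file from HDFS into Spark's local temp directory on each node.
sc.addFile("hdfs://localhost:54310/spark/fileName.txt")

// Local filesystem path of the downloaded copy, e.g. /tmp/spark-.../userFiles-.../fileName.txt
val localPath = org.apache.spark.SparkFiles.get("fileName.txt")

// Prefix the scheme so the path is read from the local filesystem, not the default FS (HDFS).
sc.textFile("file://" + localPath).collect().foreach(println)

This works as shown in a local spark-shell; in cluster mode the driver-side path returned by SparkFiles.get() may not exist on the executors, in which case reading the original HDFS path directly with sc.textFile is simpler.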

For this question, apache-spark - Apache Spark: SparkFiles.get(fileName.txt) - unable to retrieve file contents from SparkContext, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/51115914/
