
Hadoop, Apache Spark


I have installed Spark on Windows and am trying to load a text file from the D: drive. The RDD is created fine, but as soon as I run any action on it I get an error. I have tried every combination of slashes with no success.

scala> val file = sc.textFile("D:\\file\\file1.txt")
15/12/16 07:53:51 INFO MemoryStore: ensureFreeSpace(175321) called with curMem=401474, maxMem=280248975
15/12/16 07:53:51 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 171.2 KB, free 266.7 MB)
15/12/16 07:53:51 INFO MemoryStore: ensureFreeSpace(25432) called with curMem=576795, maxMem=280248975
15/12/16 07:53:51 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 24.8 KB, free 266.7 MB)
15/12/16 07:53:51 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:51963 (size: 24.8 KB, free: 267.2 MB)
15/12/16 07:53:51 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
15/12/16 07:53:51 INFO SparkContext: Created broadcast 2 from textFile at <console>:21
file: org.apache.spark.rdd.RDD[String] = D:\file\file1.txt MapPartitionsRDD[5] at textFile at <console>:21

The RDD is created fine, but when I try to run any action on it I get the following error:

scala> file.count()
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/D:/file/file1.txt
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1512)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1006)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:24)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
        at $iwC$$iwC$$iwC.<init>(<console>:37)
        at $iwC$$iwC.<init>(<console>:39)
        at $iwC.<init>(<console>:41)
        at <init>(<console>:43)
        at .<init>(<console>:47)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:84)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:56)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:16)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


scala>

Best Answer

You have to use sc.textFile("file:///...")

By default, Spark may resolve the path against HDFS. Using the file:// scheme tells it to read from the local file system instead.

On Windows, this command works for me:

sc.textFile("file:\\C:\\Users\\data.txt").count()

In your case, try sc.textFile("file:\\D:\\file\\file1.txt"). Also check that you have permission to read D:/file/file1.txt: in File Explorer you can inspect the permissions on the directory and on file1.txt itself.
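Below is a minimal sketch of the same idea for the Spark shell, written with a file:/// URI and forward slashes (which Hadoop also accepts on Windows). The path D:/file/file1.txt is taken from the question and is only an assumption about where the file actually lives:

// Spark shell sketch: sc is the SparkContext the shell already provides.
// file:/// forces the local file system instead of the default (often HDFS).
// Forward slashes avoid having to escape backslashes in the string literal.
val localFile = sc.textFile("file:///D:/file/file1.txt")

// textFile is lazy; the file is only opened when an action such as count() runs,
// so a bad scheme, path, or permission problem surfaces here.
println(localFile.count())

If count() still throws InvalidInputException with this URI, the remaining suspects are the path itself or the file permissions rather than the scheme.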

Regarding Hadoop and Apache Spark, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34303140/
