
scala - <console>:22: error: not found: value sc


I am new to Spark and still learning it. While practicing, I ran into the problems below; it's a multi-step and rather long story. I am using spark-shell in a UNIX environment, and I got the following errors.

Step 1:

<pre>
$ spark-shell
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
2016-04-22 07:44:31,5095 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:1473 Thread: 20535 mkdirs failed for /user/cni/.sparkStaging/application_1459074732364_1192326, error 13
org.apache.hadoop.security.AccessControlException: User cni(user id 5689)  has been denied access to create application_1459074732364_1192326
        at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1100)
        at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1120)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1851)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:631)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:224)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:58)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
        at $iwC$$iwC.<init>(<console>:9)
        at $iwC.<init>(<console>:18)
        at <init>(<console>:20)
        at .<init>(<console>:24)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.NullPointerException
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
        at $iwC$$iwC.<init>(<console>:9)
        at $iwC.<init>(<console>:18)
        at <init>(<console>:20)
        at .<init>(<console>:24)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^
</pre>
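
Looking back at this trace, the root cause is visible right at the top: mkdirs failed for /user/cni/.sparkStaging/... with error 13 (errno 13 is EACCES, permission denied). Because the SparkContext constructor throws, sc is never bound, and the NullPointerException and the "not found: value sqlContext" errors are just fallout from that first failure. A quick way to check, assuming the hadoop CLI is available on the node (a hedged sketch; the exact directory layout depends on the cluster):

<pre>
# Does user cni have a writable home directory on the cluster filesystem?
$ hadoop fs -ls /user/cni

# If not, an admin (or any user with write access) would need something like:
$ hadoop fs -mkdir -p /user/cni
$ hadoop fs -chown -R cni /user/cni
</pre>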

Step 2:

I just ignored the warnings/errors above and moved on with my code. I had read that sc gets created automatically when you use spark-shell, so I coded as below.

<pre>
scala> val textFile = sc.textFile("README.md")
<console>:13: error: not found: value sc
val textFile = sc.textFile("README.md")
</pre>
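
For context, sc is only pre-bound when spark-shell manages to construct the SparkContext at startup, which is exactly what failed in Step 1. A healthy spark-shell 1.x session would look roughly like this (a sketch; the object hash and RDD id are illustrative):

<pre>
scala> sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@6bd9f937

scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:13
</pre>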

Step 3: Since it said sc was not found, I tried to create it.

scala> import org.apache.spark._
import org.apache.spark._

scala> import org.apache.spark.streaming._
import org.apache.spark.streaming._

scala> import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.StreamingContext._

scala> val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount").set("spark.ui.port", "44040" ).set("spark.driver.allowMultipleContexts", "true")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@1a58697d

scala> val ssc = new StreamingContext(conf, Seconds(2) )
16/04/22 08:19:18 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
$line3.$read$$iwC$$iwC.<init>(<console>:9)
$line3.$read$$iwC.<init>(<console>:18)
$line3.$read.<init>(<console>:20)
$line3.$read$.<init>(<console>:24)
$line3.$read$.<clinit>(<console>)
$line3.$eval$.<init>(<console>:7)
$line3.$eval$.<clinit>(<console>)
$line3.$eval.$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
ssc: org.apache.spark.streaming.StreamingContext = org.apache.spark.streaming.StreamingContext@15492914

Since Spark told me it was a warning (though it also said it may indicate an error), I ignored it and moved on to create an RDD. Again, I am not sure: is this an error or a warning???
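
For reference, what this step actually needed was a plain SparkContext rather than a StreamingContext, since textFile (used in Step 4 below) is a batch API. A minimal sketch, assuming the permission failure from Step 1 is fixed first (otherwise this constructor fails the same way):

<pre>
import org.apache.spark.{SparkConf, SparkContext}

// A plain batch context; SparkContext is the type that defines textFile.
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
val sc = new SparkContext(conf)
</pre>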

Step 4:

I created an RDD as below.

<pre>

scala> var fil = ssc.textFile("/mapr/datalake/01.Call_ID.txt")
<console>:21: error: value textFile is not a member of org.apache.spark.streaming.StreamingContext
var fil = ssc.textFile("/mapr/datalake/01.Call_ID.txt")
^

</pre>

It is saying that textFile is not a member of StreamingContext. All of this is driving me crazy. Also, I work for a company and run these scripts on a company laptop (JFYI).
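
The error itself is accurate: textFile is defined on SparkContext, not on StreamingContext. A StreamingContext wraps a SparkContext and exposes it as ssc.sparkContext, so with the ssc from Step 3 the batch read would look like this (a sketch using the path from Step 4):

<pre>
// textFile lives on SparkContext; reach the underlying batch context
// through the StreamingContext.
val sc = ssc.sparkContext
val fil = sc.textFile("/mapr/datalake/01.Call_ID.txt")
</pre>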

Best Answer

I think this is all caused by a lack of permissions. Assuming you have the proper access rights to use the cluster, you can type

HADOOP_USER_NAME=hdfs spark-shell

This should override the permissions of your account.
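
Note that HADOOP_USER_NAME is only honored on clusters using simple authentication; with Kerberos enabled it is ignored. The launch would then look roughly like this (a sketch; the object hash is illustrative):

<pre>
$ HADOOP_USER_NAME=hdfs spark-shell
...
scala> sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@1f2e3d4c
</pre>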

Regarding scala - <console>:22: error: not found: value sc, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36796894/
