I am new to Spark and still learning it. While practicing, I ran into the several problems below; it's a multi-step and rather long story. I am using spark-shell in a UNIX environment, and I got the errors shown below.
Step 1:
<pre>
$ spark-shell
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_25)
Type in expressions to have them evaluated.
Type :help for more information.
2016-04-22 07:44:31,5095 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:1473 Thread: 20535 mkdirs failed for /user/cni/.sparkStaging/application_1459074732364_1192326, error 13
org.apache.hadoop.security.AccessControlException: User cni(user id 5689) has been denied access to create application_1459074732364_1192326
    at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1100)
    at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1851)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:631)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:224)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:58)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
    at $iwC$$iwC.<init>(<console>:9)
    at $iwC.<init>(<console>:18)
    at <init>(<console>:20)
    at .<init>(<console>:24)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.NullPointerException
    at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
    at $iwC$$iwC.<init>(<console>:9)
    at $iwC.<init>(<console>:18)
    at <init>(<console>:20)
    at .<init>(<console>:24)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^
</pre>
Step 2:
I simply ignored the warnings/errors above and moved on with my code. I had read that sc is created automatically when you use spark-shell, so I wrote the code below.
<pre>
scala> val textFile = sc.textFile("README.md")
<console>:13: error: not found: value sc
val textFile = sc.textFile("README.md")
</pre>
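This failure follows from Step 1 rather than from the code: spark-shell binds sc only after it constructs a SparkContext successfully, and that construction was aborted by the AccessControlException above (error 13 is EACCES on /user/cni/.sparkStaging). The earlier "not found: value sqlContext" messages are the same failure cascading. A minimal sketch of what a healthy session looks like, assuming the permission problem is fixed and README.md exists in the working directory:

<pre>
scala> // In a clean startup, spark-shell pre-binds sc; no manual setup is needed.
scala> val textFile = sc.textFile("README.md")
scala> textFile.count()
</pre>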
Step 3: Since it said sc was not found, I tried to create it myself.
<pre>
scala> import org.apache.spark._
import org.apache.spark._
scala> import org.apache.spark.streaming._
import org.apache.spark.streaming._
scala> import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.StreamingContext._
scala> val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount").set("spark.ui.port", "44040" ).set("spark.driver.allowMultipleContexts", "true")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@1a58697d
scala> val ssc = new StreamingContext(conf, Seconds(2) )
16/04/22 08:19:18 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
$line3.$read$$iwC$$iwC.<init>(<console>:9)
$line3.$read$$iwC.<init>(<console>:18)
$line3.$read.<init>(<console>:20)
$line3.$read$.<init>(<console>:24)
$line3.$read$.<clinit>(<console>)
$line3.$eval$.<init>(<console>:7)
$line3.$eval$.<clinit>(<console>)
$line3.$eval.$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
ssc: org.apache.spark.streaming.StreamingContext = org.apache.spark.streaming.StreamingContext@15492914
</pre>
Since Spark told me this was a warning (though it did also say it may indicate an error), I ignored it and moved on to creating an RDD. Again, I'm not sure: is this an error or a warning?
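It is logged as a warning, but it points at a real constraint: only one SparkContext may run per JVM (SPARK-2243). The REPL had already attempted to create one at startup, so constructing a second context only went through here because spark.driver.allowMultipleContexts was set. In a shell where sc is bound, a more idiomatic sketch builds the StreamingContext on top of the existing context instead of a fresh SparkConf:

<pre>
scala> import org.apache.spark.streaming.{Seconds, StreamingContext}
scala> // Reuse the SparkContext the shell already owns instead of creating another.
scala> val ssc = new StreamingContext(sc, Seconds(2))
</pre>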
Step 4:
I created an RDD as follows.
<pre>
scala> var fil = ssc.textFile("/mapr/datalake/01.Call_ID.txt")
<console>:21: error: value textFile is not a member of org.apache.spark.streaming.StreamingContext
var fil = ssc.textFile("/mapr/datalake/01.Call_ID.txt")
^
</pre>
It is telling me that textFile is not a member of StreamingContext. All of this is driving me crazy. Also, I work for a company and I run these scripts on a company laptop (JFYI).
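The compiler is right here independently of the permission problem: textFile is defined on SparkContext, not on StreamingContext. For a one-off batch read, go through the SparkContext that the StreamingContext wraps; for a streaming source, StreamingContext provides textFileStream, which watches a directory for new files. A sketch (the incoming-directory path is made up for illustration):

<pre>
scala> // Batch read via the underlying SparkContext:
scala> val fil = ssc.sparkContext.textFile("/mapr/datalake/01.Call_ID.txt")
scala> // Streaming alternative: a DStream of lines from files appearing in a directory.
scala> val lines = ssc.textFileStream("/mapr/datalake/incoming")
</pre>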
Best Answer
I think all of this is caused by a lack of permissions. Provided you have the right access to use the cluster, you can type

<pre>
HADOOP_USER_NAME=hdfs spark-shell
</pre>

This should override your account's permissions.
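If that works, the shell should start without the mkdirs failure and bind sc and sqlContext on its own; a quick smoke test, assuming a clean startup:

<pre>
scala> // Should return 10 if the context is alive and the cluster accepts the job.
scala> sc.parallelize(1 to 10).count()
</pre>

Running as hdfs sidesteps the permission problem rather than fixing it, so the longer-term fix is to have a cluster administrator grant your own user write access to its /user directory.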
Regarding scala - <console>:22: error: not found: value sc, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36796894/