I want to deploy a Spark application (just a simple Hello World app) to my Hadoop cluster. Using spark-submit on my Windows machine, I run the application in client mode with --master yarn. The connection to the Hadoop cluster succeeds, as the log files on the cluster show. (The Hadoop conf files were downloaded from the cluster and stored on the Windows client machine, and the environment variables are set.)
Using Hadoop 2.7 and Spark 1.6.
This is the spark-submit command used:
>spark-submit --master yarn --class "SimpleApp" ..\..\SimpleApp\target\scala-2.11\simple-project_2.11-1.0.jar --deploy-mode client
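One thing worth noting about the command above: spark-submit stops parsing its own options at the application JAR, so anything placed after the JAR (here `--deploy-mode client`) is passed as an argument to the application's main class rather than interpreted by spark-submit. A sketch of the same command with the option moved before the JAR (same JAR path as in the question):

```shell
REM spark-submit only parses options that appear before the application JAR;
REM flags after the JAR become arguments to SimpleApp's main method.
REM Reordered so --deploy-mode is actually seen by spark-submit:
spark-submit --master yarn --deploy-mode client --class "SimpleApp" ..\..\SimpleApp\target\scala-2.11\simple-project_2.11-1.0.jar
```

Client mode is the default on a local spark-submit anyway, so this reordering alone does not explain the failure, but it avoids passing stray flags into the application.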
This is the error returned:
16/08/10 10:52:45 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: No FileSystem for scheme: C
Full error output:
Warning: Ignoring non-spark config property: fs.hdfs.impl=org.apache.hadoop.hdfs.DistributedFileSystem
16/08/10 10:52:32 INFO SparkContext: Running Spark version 1.6.2
16/08/10 10:52:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/10 10:52:32 INFO SecurityManager: Changing view acls to: sensored,sensored
16/08/10 10:52:32 INFO SecurityManager: Changing modify acls to: sensored,sensored
16/08/10 10:52:32 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sensored,sensored); users with modify permissions: Set(sensored,sensored)
16/08/10 10:52:33 INFO Utils: Successfully started service 'sparkDriver' on port 52340.
16/08/10 10:52:33 INFO Slf4jLogger: Slf4jLogger started
16/08/10 10:52:33 INFO Remoting: Starting remoting
16/08/10 10:52:34 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@sensored,sensored]
16/08/10 10:52:34 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 52353.
16/08/10 10:52:34 INFO SparkEnv: Registering MapOutputTracker
16/08/10 10:52:34 INFO SparkEnv: Registering BlockManagerMaster
16/08/10 10:52:34 INFO DiskBlockManager: Created local directory at C:\Users\sensored,sensored\AppData\Local\Temp\blockmgr-35b65f9a-47a3-4d1e-940f-be8bc9c78b8f
16/08/10 10:52:34 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/08/10 10:52:34 INFO SparkEnv: Registering OutputCommitCoordinator
16/08/10 10:52:34 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/08/10 10:52:34 INFO SparkUI: Started SparkUI at http://10.61.156.198:4040
16/08/10 10:52:34 INFO HttpFileServer: HTTP File server directory is C:\Users\sensored,sensored\AppData\Local\Temp\spark-ec3f2bdc-e11e-48b5-a3f6-7d8ae97b8a5e\httpd-e39c2eda-b9aa-40df-95c9-eae44b53c4ee
16/08/10 10:52:34 INFO HttpServer: Starting HTTP Server
16/08/10 10:52:34 INFO Utils: Successfully started service 'HTTP file server' on port 52356.
16/08/10 10:52:34 INFO SparkContext: Added JAR file:/C:/Users/sensored,sensored/Documents/spark-1.6.2-bin-hadoop2.6/bin/../../SimpleApp/target/scala-2.11/simple-project_2.11-1.0.jar at http://sensored,sensored:52356/jars/simple-project_2.11-1.0.jar with timestamp 1470819154823
16/08/10 10:52:35 INFO TimelineClientImpl: Timeline service address: http://sensored,sensored:8188/ws/v1/timeline/
16/08/10 10:52:35 INFO RMProxy: Connecting to ResourceManager at sensored,sensored/sensored,sensored.188:8050
16/08/10 10:52:36 INFO Client: Requesting a new application from cluster with 2 NodeManagers
16/08/10 10:52:36 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (221184 MB per container)
16/08/10 10:52:36 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/08/10 10:52:36 INFO Client: Setting up container launch context for our AM
16/08/10 10:52:36 INFO Client: Setting up the launch environment for our AM container
16/08/10 10:52:36 INFO Client: Preparing resources for our AM container
16/08/10 10:52:36 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
16/08/10 10:52:36 INFO Client: Uploading resource file:/C:/Users/sensored,sensored/Documents/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://sensored,sensored:8020/user/sensored,sensored/.sparkStaging/application_1468399950865_0375/spark-assembly-1.6.2-hadoop2.6.0.jar
16/08/10 10:52:41 INFO Client: Uploading resource file:/C:/Users/sensored,sensored/AppData/Local/Temp/spark-ec3f2bdc-e11e-48b5-a3f6-7d8ae97b8a5e/__spark_conf__7713144618669536696.zip -> hdfs://sensored,sensored:8020/user/sensored,sensored/.sparkStaging/application_1468399950865_0375/__spark_conf__7713144618669536696.zip
16/08/10 10:52:41 INFO SecurityManager: Changing view acls to: sensored,sensored
16/08/10 10:52:41 INFO SecurityManager: Changing modify acls to: sensored,sensored
16/08/10 10:52:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sensored,sensored); users with modify permissions: Set(sensored,sensored)
16/08/10 10:52:41 INFO Client: Submitting application 375 to ResourceManager
16/08/10 10:52:41 INFO YarnClientImpl: Submitted application application_1468399950865_0375
16/08/10 10:52:42 INFO Client: Application report for application_1468399950865_0375 (state: ACCEPTED)
16/08/10 10:52:42 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1470819161610
final status: UNDEFINED
tracking URL: http://sensored,sensored:8088/proxy/application_1468399950865_0375/
user: sensored,sensored
16/08/10 10:52:43 INFO Client: Application report for application_1468399950865_0375 (state: ACCEPTED)
16/08/10 10:52:44 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/08/10 10:52:44 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sensored,sensored, PROXY_URI_BASES -> http://sensored,sensored:8088/proxy/application_1468399950865_0375), /proxy/application_1468399950865_0375
16/08/10 10:52:44 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/08/10 10:52:44 INFO Client: Application report for application_1468399950865_0375 (state: RUNNING)
16/08/10 10:52:44 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.3.131.188
ApplicationMaster RPC port: 0
queue: default
start time: 1470819161610
final status: UNDEFINED
tracking URL: http://sensored,sensored:8088/proxy/application_1468399950865_0375/
user: sensored,sensored
16/08/10 10:52:44 INFO YarnClientSchedulerBackend: Application application_1468399950865_0375 has started running.
16/08/10 10:52:44 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52383.
16/08/10 10:52:44 INFO NettyBlockTransferService: Server created on 52383
16/08/10 10:52:44 INFO BlockManagerMaster: Trying to register BlockManager
16/08/10 10:52:44 INFO BlockManagerMasterEndpoint: Registering block manager 10.61.156.198:52383 with 511.1 MB RAM, BlockManagerId(driver, sensored,sensored, 52383)
16/08/10 10:52:44 INFO BlockManagerMaster: Registered BlockManager
16/08/10 10:52:45 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: No FileSystem for scheme: C
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1686)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:547)
at SimpleApp$.main(SimpleApp.scala:10)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/08/10 10:52:45 INFO SparkUI: Stopped Spark web UI at http://10.61.156.198:4040
16/08/10 10:52:45 INFO YarnClientSchedulerBackend: Shutting down all executors
16/08/10 10:52:45 INFO YarnClientSchedulerBackend: Interrupting monitor thread
16/08/10 10:52:45 INFO YarnClientSchedulerBackend: Asking each executor to shut down
16/08/10 10:52:45 INFO YarnClientSchedulerBackend: Stopped
16/08/10 10:52:45 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/08/10 10:52:45 INFO MemoryStore: MemoryStore cleared
16/08/10 10:52:45 INFO BlockManager: BlockManager stopped
16/08/10 10:52:45 INFO BlockManagerMaster: BlockManagerMaster stopped
16/08/10 10:52:45 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/08/10 10:52:45 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/08/10 10:52:45 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/08/10 10:52:45 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.io.IOException: No FileSystem for scheme: C
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1686)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:547)
at SimpleApp$.main(SimpleApp.scala:10)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/08/10 10:52:45 INFO ShutdownHookManager: Shutdown hook called
16/08/10 10:52:45 INFO ShutdownHookManager: Deleting directory C:\Users\sensored,sensored\AppData\Local\Temp\spark-ec3f2bdc-e11e-48b5-a3f6-7d8ae97b8a5e\httpd-e39c2eda-b9aa-40df-95c9-eae44b53c4ee
16/08/10 10:52:45 INFO ShutdownHookManager: Deleting directory C:\Users\sensored,sensored\AppData\Local\Temp\spark-ec3f2bdc-e11e-48b5-a3f6-7d8ae97b8a5e
Best answer
After switching to Spark 2.0, the problem no longer occurred!
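For anyone who has to stay on Spark 1.6: the stack trace fails inside EventLoggingListener via Utils.getHadoopFileSystem, which suggests that an event-log path resolved to a Windows path like C:\..., whose drive letter Hadoop then parsed as the (unknown) URI scheme "C". A hedged workaround sketch along those lines, where hdfs://<namenode>:8020/spark-logs is an assumed placeholder path, not taken from the question:

```shell
REM Hedged sketch: point spark.eventLog.dir at an explicit HDFS URI so that a
REM local C:\ path (whose drive letter gets parsed as a URI scheme) is never
REM used for event logging. <namenode> and /spark-logs are assumed placeholders.
spark-submit --master yarn --deploy-mode client ^
  --conf spark.eventLog.enabled=true ^
  --conf spark.eventLog.dir=hdfs://<namenode>:8020/spark-logs ^
  --class "SimpleApp" ..\..\SimpleApp\target\scala-2.11\simple-project_2.11-1.0.jar
```

Alternatively, setting spark.eventLog.enabled=false would skip the EventLoggingListener code path entirely.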
Regarding windows - Executing Spark from a client (Windows) against a YARN cluster (Linux): Error no scheme for Filesystem "C", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38869077/