I'm currently experimenting with Apache Spark. Everything seems to be working well, in that all of the various components are up and running (i.e. HDFS, Spark, Yarn, etc.), and there don't appear to be any errors during the startup of any of them. I'm running this in a Vagrant VM, and Spark/HDFS/Yarn are dockerized.

tl;dr: Submitting a job via Yarn results in There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

Submitting my application:

$ spark-submit --master yarn --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --executor-cores 1 /Users/foobar/Downloads/spark-3.0.0-preview2-bin-hadoop3.2/examples/jars/spark-examples_2.12-3.0.0-preview2.jar 10

Results in the following:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/05/03 17:45:26 INFO SparkContext: Running Spark version 2.4.5
20/05/03 17:45:26 INFO SparkContext: Submitted application: Spark Pi
20/05/03 17:45:26 INFO SecurityManager: Changing view acls to: foobar
20/05/03 17:45:26 INFO SecurityManager: Changing modify acls to: foobar
20/05/03 17:45:26 INFO SecurityManager: Changing view acls groups to:
20/05/03 17:45:26 INFO SecurityManager: Changing modify acls groups to:
20/05/03 17:45:26 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(foobar); groups with view permissions: Set(); users with modify permissions: Set(foobar); groups with modify permissions: Set()
20/05/03 17:45:26 INFO Utils: Successfully started service 'sparkDriver' on port 52142.
20/05/03 17:45:26 INFO SparkEnv: Registering MapOutputTracker
20/05/03 17:45:27 INFO SparkEnv: Registering BlockManagerMaster
20/05/03 17:45:27 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/05/03 17:45:27 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/05/03 17:45:27 INFO DiskBlockManager: Created local directory at /private/var/folders/1x/h0q3vtw17ddbys9bjcf41mtr0000gn/T/blockmgr-1a34b35e-d5c2-4c11-a637-364f86818b1a
20/05/03 17:45:27 INFO MemoryStore: MemoryStore started with capacity 93.3 MB
20/05/03 17:45:27 INFO SparkEnv: Registering OutputCommitCoordinator
20/05/03 17:45:27 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/05/03 17:45:27 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://foobars-mbp.box:4040
20/05/03 17:45:27 INFO SparkContext: Added JAR file:/Users/foobar/Downloads/spark-3.0.0-preview2-bin-hadoop3.2/examples/jars/spark-examples_2.12-3.0.0-preview2.jar at spark://foobars-mbp.box:52142/jars/spark-examples_2.12-3.0.0-preview2.jar with timestamp 1588545927208
20/05/03 17:45:27 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
20/05/03 17:45:27 INFO Client: Requesting a new application from cluster with 1 NodeManagers
20/05/03 17:45:27 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
20/05/03 17:45:27 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
20/05/03 17:45:27 INFO Client: Setting up container launch context for our AM
20/05/03 17:45:27 INFO Client: Setting up the launch environment for our AM container
20/05/03 17:45:27 INFO Client: Preparing resources for our AM container
20/05/03 17:45:27 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
20/05/03 17:45:29 INFO Client: Uploading resource file:/private/var/folders/1x/h0q3vtw17ddbys9bjcf41mtr0000gn/T/spark-5467a437-f3e2-4c23-9a15-9051aa89e222/__spark_libs__8443981124167043301.zip -> hdfs://0.0.0.0:9000/user/foobar/.sparkStaging/application_1588537985407_0007/__spark_libs__8443981124167043301.zip
20/05/03 17:46:29 INFO DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.16.6:9866]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
20/05/03 17:46:29 INFO DFSClient: Abandoning BP-1700972659-172.30.0.2-1588486994156:blk_1073741833_1009
20/05/03 17:46:29 INFO DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.16.6:9866,DS-6d0dcfb4-265a-4a8f-a86c-35fcc6e8ca70,DISK]
20/05/03 17:46:29 WARN DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/foobar/.sparkStaging/application_1588537985407_0007/__spark_libs__8443981124167043301.zip could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
20/05/03 17:46:29 INFO Client: Deleted staging directory hdfs://0.0.0.0:9000/user/foobar/.sparkStaging/application_1588537985407_0007
20/05/03 17:46:29 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/foobar/.sparkStaging/application_1588537985407_0007/__spark_libs__8443981124167043301.zip could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
20/05/03 17:46:29 INFO SparkUI: Stopped Spark web UI at http://foobars-mbp.box:4040
20/05/03 17:46:29 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
20/05/03 17:46:29 INFO YarnClientSchedulerBackend: Stopped
20/05/03 17:46:29 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/05/03 17:46:29 INFO MemoryStore: MemoryStore cleared
20/05/03 17:46:29 INFO BlockManager: BlockManager stopped
20/05/03 17:46:29 INFO BlockManagerMaster: BlockManagerMaster stopped
20/05/03 17:46:29 WARN MetricsSystem: Stopping a MetricsSystem that is not running
20/05/03 17:46:29 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/05/03 17:46:29 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/foobar/.sparkStaging/application_1588537985407_0007/__spark_libs__8443981124167043301.zip could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
20/05/03 17:46:29 INFO ShutdownHookManager: Shutdown hook called
20/05/03 17:46:29 INFO ShutdownHookManager: Deleting directory /private/var/folders/1x/h0q3vtw17ddbys9bjcf41mtr0000gn/T/spark-e77adcce-715f-43d1-a01e-d4141349ed13
20/05/03 17:46:29 INFO ShutdownHookManager: Deleting directory /private/var/folders/1x/h0q3vtw17ddbys9bjcf41mtr0000gn/T/spark-5467a437-f3e2-4c23-9a15-9051aa89e222
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/foobar/.sparkStaging/application_1588537985407_0007/__spark_libs__8443981124167043301.zip could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
When this happens, I can see in HDFS that the Spark files were uploaded:
hdfs@243579e354c0:/app$ hadoop fs -ls /user/foobar/.sparkStaging
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
Found 2 items
drwx------ - foobar hadoop 0 2020-05-03 22:43 /user/foobar/.sparkStaging/application_1588537985407_0006
drwx------ - foobar hadoop 0 2020-05-03 22:45 /user/foobar/.sparkStaging/application_1588537985407_0007
I also checked the stdout logs for the executor. In stderr, however, I see the following:
Spark Executor Command: "/usr/local/openjdk-8/bin/java" "-cp" "/app/config/:/app/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=51462" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@foobars-mbp.box:51462" "--executor-id" "0" "--hostname" "192.168.16.10" "--cores" "8" "--app-id" "app-20200503204833-0000" "--worker-url" "spark://Worker@192.168.16.10:41839"
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/05/03 20:48:34 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 84@ad4c05fe6b8a
20/05/03 20:48:34 INFO SignalUtils: Registered signal handler for TERM
20/05/03 20:48:34 INFO SignalUtils: Registered signal handler for HUP
20/05/03 20:48:34 INFO SignalUtils: Registered signal handler for INT
20/05/03 20:48:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/05/03 20:48:35 INFO SecurityManager: Changing view acls to: hdfs,foobar
20/05/03 20:48:35 INFO SecurityManager: Changing modify acls to: hdfs,foobar
20/05/03 20:48:35 INFO SecurityManager: Changing view acls groups to:
20/05/03 20:48:35 INFO SecurityManager: Changing modify acls groups to:
20/05/03 20:48:35 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs, foobar); groups with view permissions: Set(); users with modify permissions: Set(hdfs, foobar); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:257)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:247)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:227)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$3(CoarseGrainedExecutorBackend.scala:277)
at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$1(CoarseGrainedExecutorBackend.scala:275)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
... 4 more
Caused by: java.io.IOException: Failed to connect to foobars-mbp.box:51462
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: foobars-mbp.box
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getByName(InetAddress.java:1077)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:202)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:48)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:182)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:168)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:985)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:505)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:416)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:475)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Here are my configuration files.

My core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:9000</value>
</property>
</configuration>
My hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/app/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/app/data/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
My mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
</configuration>
My spark-defaults.conf:
spark.master yarn
spark.driver.memory 512m
spark.executor.memory 1g
spark.yarn.archive hdfs:///user/foo/spark-libs.jar
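Note that spark.yarn.archive only takes effect if an archive actually exists at that HDFS path; the WARN earlier in the log ("Neither spark.yarn.jars nor spark.yarn.archive is set") suggests the setting was not in effect for the failing submission. A minimal sketch of publishing such an archive, assuming the path /user/foo/spark-libs.jar from the config above (the commands are standard jar/hdfs usage, not from the original post):

# Bundle all Spark runtime jars into a single uncompressed archive.
jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
# Upload it to the location referenced by spark.yarn.archive.
hdfs dfs -mkdir -p /user/foo
hdfs dfs -put spark-libs.jar /user/foo/spark-libs.jar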
My yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>resource-manager:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>resource-manager:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>resource-manager:8031</value>
</property>
<property>
<name>yarn.acl.enable</name>
<value>0</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>resource-manager</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
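As a quick sanity check that the ResourceManager addresses above are wired up correctly, you can list the NodeManagers registered with YARN (a hedged aside, not from the original post; assumes the yarn CLI is configured against this cluster):

# Should report the single NodeManager mentioned in the log
# ("Requesting a new application from cluster with 1 NodeManagers").
yarn node -list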
Note that submitting the same example from inside the Docker container works fine:

docker exec -it spark-master /bin/bash
spark-submit --master yarn --class org.apache.spark.examples.SparkPi /app/spark/examples/jars/spark-examples_2.12-3.0.0-preview2.jar 10
Pi is roughly 3.141983141983142
Best Answer
It turned out to be a networking problem. If you look closely at what was originally posted in the question, you'll see the following error in the log, one that I had originally missed:
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.16.6:9866]
192.168.16.6 is the Docker container's address, as seen from inside the Docker network. That IP is never going to be routable when the job is submitted from outside the Docker containers.
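One way to confirm this failure mode from the machine running spark-submit is to check which datanode address the namenode advertises and whether it is reachable. A hedged sketch, not from the original post (assumes the hdfs CLI and nc are available on the submitting machine; the IP and port come from the log above):

# Ask the namenode which datanodes it knows about and what addresses it advertises.
hdfs dfsadmin -report
# Probe the advertised datanode transfer port from outside Docker.
nc -vz -w 5 192.168.16.6 9866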
So here is what I did. First, I gave the container a hostname by adding the following to that container's definition in my docker-compose.yml:

hostname: hadoop
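In context, that line sits inside a service definition roughly like this (the service and image names are illustrative, not from the original post; only the hostname line is the actual change):

services:
  datanode:                  # illustrative service name
    image: my-hadoop-image   # illustrative image
    hostname: hadoop         # the line that was added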
Next, I added the following entry to my /etc/hosts file:

10.0.2.2 hadoop
10.0.2.2 is my localhost outside the Docker network, i.e. my actual host. Finally, I told the HDFS client to address datanodes by hostname rather than by IP, by adding the following to hdfs-site.xml:
<configuration>
<property>
<name>dfs.client.use.datanode.hostname</name>
<value>true</value>
</property>
<property>
<name>dfs.datanode.use.datanode.hostname</name>
<value>true</value>
</property>
</configuration>
With these two properties set, the namenode hands clients the datanode's hostname instead of its internal IP, and the client resolves that hostname locally, where the /etc/hosts entry points it at a routable address. spark-submit worked after this.
An even simpler approach is to add

network_mode: host

to the required containers in my docker-compose.yml. Doing this makes everything above completely irrelevant. It is likely only appropriate for non-production environments, which mine is, so it works fine.
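For reference, a minimal sketch of that compose change (service and image names are illustrative):

services:
  namenode:
    image: my-hadoop-image   # illustrative
    network_mode: host       # container shares the host's network stack
  datanode:
    image: my-hadoop-image   # illustrative
    network_mode: host

With host networking there is no separate Docker network, so the addresses the datanode advertises are directly reachable and no hostname mapping is needed. Note that Docker ignores ports: mappings when network_mode: host is set.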
Regarding java - Hadoop + Spark: There are 1 datanode(s) running and 1 node(s) are excluded in this operation, see this similar question on Stack Overflow: https://stackoverflow.com/questions/61432807/