
apache-spark - Kubernetes 1.9 unable to initialize SparkContext

Reposted · Author: 行者123 · Updated: 2023-12-04 20:29:51

I am trying to follow the Spark 2.3 documentation on how to deploy jobs on a Kubernetes 1.9.3 cluster: http://spark.apache.org/docs/latest/running-on-kubernetes.html

The Kubernetes 1.9.3 cluster is running fine on offline bare-metal servers and was installed with kubeadm. The following command is used to submit the job (the SparkPi example job):

/opt/spark/bin/spark-submit \
  --master k8s://https://k8s-master:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=spark:v2.3.0 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

And here is the stack trace we all love:
++ id -u
+ myuid=0
++ id -g
+ mygid=0
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/ash
+ '[' -z root:x:0:0:root:/root:/bin/ash ']'
+ SPARK_K8S_CMD=driver
+ '[' -z driver ']'
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_JAVA_OPTS
+ '[' -n /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar'
+ '[' -n '' ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
+ exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.kubernetes.driver.pod.name=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver -Dspark.driver.port=7078 -Dspark.submit.deployMode=cluster -Dspark.master=k8s://https://k8s-master:6443 -Dspark.kubernetes.executor.podNamePrefix=spark-pi-b6f8a60df70a3b9d869c4e305518f43a -Dspark.driver.blockManager.port=7079 -Dspark.app.id=spark-7077ad8f86114551b0ae04ae63a74d5a -Dspark.driver.host=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc -Dspark.app.name=spark-pi -Dspark.kubernetes.container.image=spark:v2.3.0 -Dspark.jars=/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar,/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar -Dspark.executor.instances=2 -cp ':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.244.1.17 org.apache.spark.examples.SparkPi
2018-03-07 12:39:35 INFO SparkContext:54 - Running Spark version 2.3.0
2018-03-07 12:39:36 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-07 12:39:36 INFO SparkContext:54 - Submitted application: Spark Pi
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls to: root
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls to: root
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-07 12:39:36 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'sparkDriver' on port 7078.
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering MapOutputTracker
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering BlockManagerMaster
2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-03-07 12:39:36 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-7f5370ad-b495-4943-ad75-285b7ead3e5b
2018-03-07 12:39:36 INFO MemoryStore:54 - MemoryStore started with capacity 408.9 MB
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering OutputCommitCoordinator
2018-03-07 12:39:36 INFO log:192 - Logging initialized @1936ms
2018-03-07 12:39:36 INFO Server:346 - jetty-9.3.z-SNAPSHOT
2018-03-07 12:39:36 INFO Server:414 - Started @2019ms
2018-03-07 12:39:36 INFO AbstractConnector:278 - Started ServerConnector@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5b6813df{/jobs,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@495083a0{/jobs/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5fd62371{/jobs/job,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2b62442c{/jobs/job/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@66629f63{/stages,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@841e575{/stages/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27a5328c{/stages/stage,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6b5966e1{/stages/stage/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@65e61854{/stages/pool,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1568159{/stages/pool/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4fcee388{/storage,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6f80fafe{/storage/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3af17be2{/storage/rdd,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@f9879ac{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@37f21974{/environment,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5f4d427e{/environment/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6e521c1e{/executors,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@224b4d61{/executors/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5d5d9e5{/executors/threadDump,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@303e3593{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4ef27d66{/static,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@62dae245{/,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4b6579e8{/api,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3954d008{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f94c4db{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040
2018-03-07 12:39:36 INFO SparkContext:54 - Added JAR /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar at spark://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.3.0.jar with timestamp 1520426376949
2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2018-03-07 12:39:42 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 8 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 12 more
2018-03-07 12:39:42 INFO AbstractConnector:318 - Stopped Spark@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-07 12:39:42 INFO SparkUI:54 - Stopped Spark web UI at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040
2018-03-07 12:39:42 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-07 12:39:42 INFO MemoryStore:54 - MemoryStore cleared
2018-03-07 12:39:42 INFO BlockManager:54 - BlockManager stopped
2018-03-07 12:39:42 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-07 12:39:42 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-07 12:39:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-07 12:39:42 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 8 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 12 more
2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Shutdown hook called
2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-64fe7ad8-669f-4591-a3f6-67440d450a44

So apparently the Kubernetes scheduler backend cannot contact the pod because it cannot resolve kubernetes.default.svc. Hmm... why?

I have also configured RBAC with a spark service account, as mentioned in the documentation, but the same problem occurs. (I also tried a different namespace; same problem.)

Here are the logs from kube-dns:
I0306 16:04:04.170889       1 dns.go:555] Could not find endpoints for service "spark-pi-b9e8b4c66fe83c4d94a8d46abc2ee8f5-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0306 16:04:29.751201 1 dns.go:555] Could not find endpoints for service "spark-pi-0665ad323820371cb215063987a31e05-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0306 16:06:26.414146 1 dns.go:555] Could not find endpoints for service "spark-pi-2bf24282e8033fa9a59098616323e267-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0307 08:16:17.404971 1 dns.go:555] Could not find endpoints for service "spark-pi-3887031e031732108711154b2ec57d28-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0307 08:17:11.682218 1 dns.go:555] Could not find endpoints for service "spark-pi-3d84127226393fc99e2fe035db56bfb5-driver-svc" in namespace "default". DNS records will be created once endpoints show up.

I really don't understand why these errors occur.

Best Answer

Try changing the pod network to one of the options other than Calico, and check whether kube-dns is working correctly.
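A quick way to check whether kube-dns is at fault is to resolve kubernetes.default.svc from inside a throwaway pod; if the lookup fails there too, the problem lies with cluster DNS or the pod network rather than with Spark. This is a sketch, not part of the original answer; the pod name and the busybox:1.28 image tag are illustrative choices:

```yaml
# dns-test.yaml — throwaway pod that tries to resolve the API server's
# in-cluster DNS name; inspect the result with `kubectl logs dns-test`
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28
    command: ["nslookup", "kubernetes.default.svc"]
```

If the lookup fails here as well, fix cluster DNS (or the pod network plugin) before re-submitting the Spark job.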

To create a custom service account, a user can use the kubectl create serviceaccount command. For example, the following command creates a service account named spark:

$ kubectl create serviceaccount spark

To grant a Role or ClusterRole to a service account, a RoleBinding or ClusterRoleBinding is needed. These can be created with the kubectl create rolebinding command (or clusterrolebinding for a ClusterRoleBinding). For example, the following command creates a ClusterRoleBinding in the default namespace that grants the edit ClusterRole to the spark service account created above:
$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
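For clusters managed declaratively, the same service account and binding can be expressed as manifests. This is a sketch equivalent to the two kubectl create commands above:

```yaml
# Service account for the Spark driver, plus a binding granting it the
# built-in "edit" ClusterRole in the default namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
```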

Depending on the version and setup of the deployed Kubernetes cluster, this default service account may or may not have the role that allows driver pods to create pods and services under the default Kubernetes RBAC policies. Sometimes users may need to specify a custom service account that has been granted the right role. Spark on Kubernetes supports specifying a custom service account for the driver pod via the configuration property spark.kubernetes.authenticate.driver.serviceAccountName=<service account name>. For example, to make the driver pod use the spark service account, a user simply adds the following option to the spark-submit command:
spark-submit \
  --master k8s://https://192.168.1.5:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=leeivan/spark:latest \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

On the topic of apache-spark - Kubernetes 1.9 unable to initialize SparkContext, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49152771/
