I'm trying to run Spark on Kubernetes (using minikube with either the VirtualBox or the Docker driver; I've tested both), but I'm now hitting an error I don't know how to solve.
The error is "SparkException: External scheduler cannot be instantiated". I'm new to the Kubernetes world, so I don't know whether this is a beginner mistake, but I haven't been able to solve it on my own.
Please help.
The command and the error follow below.
I use this spark-submit command:
spark-submit --master k8s://https://192.168.99.102:8443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--executor-memory 1024m \
--conf spark.kubernetes.container.image=spark:latest \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar
I get this error in the pod:
20/06/23 15:24:56 INFO SparkContext: Submitted application: Spark Pi
20/06/23 15:24:56 INFO SecurityManager: Changing view acls to: 185,luan
20/06/23 15:24:56 INFO SecurityManager: Changing modify acls to: 185,luan
20/06/23 15:24:56 INFO SecurityManager: Changing view acls groups to:
20/06/23 15:24:56 INFO SecurityManager: Changing modify acls groups to:
20/06/23 15:24:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, luan); groups with view permissions: Set(); users with modify permissions: Set(185, luan); groups with modify permissions: Set()
20/06/23 15:24:57 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
20/06/23 15:24:57 INFO SparkEnv: Registering MapOutputTracker
20/06/23 15:24:57 INFO SparkEnv: Registering BlockManagerMaster
20/06/23 15:24:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/23 15:24:57 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/23 15:24:57 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/06/23 15:24:57 INFO DiskBlockManager: Created local directory at /var/data/spark-4f7b787b-ec75-4ae5-b703-f9f90ef130cb/blockmgr-1ef6d02a-48f6-4bd7-9d7d-fe2518850f5e
20/06/23 15:24:57 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
20/06/23 15:24:57 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/23 15:24:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/23 15:24:57 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-pi-a8278472e1c83236-driver-svc.default.svc:4040
20/06/23 15:24:57 INFO SparkContext: Added JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar with timestamp 1592925897650
20/06/23 15:24:57 WARN SparkContext: The jar local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar has been added already. Overwriting of added jars is not supported in the current version.
20/06/23 15:24:57 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
20/06/23 15:24:58 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2934)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-a8278472e1c83236-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:568)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:505)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:471)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:430)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:395)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:376)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:845)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:214)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:168)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$driverPod$1(ExecutorPodsAllocator.scala:59)
at scala.Option.map(Option.scala:230)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:58)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:113)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2928)
... 19 more
20/06/23 15:24:58 INFO SparkUI: Stopped Spark web UI at http://spark-pi-a8278472e1c83236-driver-svc.default.svc:4040
20/06/23 15:24:58 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/23 15:24:58 INFO MemoryStore: MemoryStore cleared
20/06/23 15:24:58 INFO BlockManager: BlockManager stopped
20/06/23 15:24:58 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/23 15:24:58 WARN MetricsSystem: Stopping a MetricsSystem that is not running
20/06/23 15:24:58 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/23 15:24:58 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2934)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-a8278472e1c83236-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:568)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:505)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:471)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:430)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:395)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:376)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:845)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:214)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:168)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$driverPod$1(ExecutorPodsAllocator.scala:59)
at scala.Option.map(Option.scala:230)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:58)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:113)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2928)
... 19 more
20/06/23 15:24:58 INFO ShutdownHookManager: Shutdown hook called
20/06/23 15:24:58 INFO ShutdownHookManager: Deleting directory /var/data/spark-4f7b787b-ec75-4ae5-b703-f9f90ef130cb/spark-616edc5e-b42d-4c77-9f11-8465b4d69642
20/06/23 15:24:58 INFO ShutdownHookManager: Deleting directory /tmp/spark-71e3bd59-3b7d-4d72-a442-b0ad0c7092fb
Thanks!
Best answer
Based on the log file:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
It looks like the default:default service account doesn't have edit permissions. You can run this command to create a ClusterRoleBinding that grants them:
$ kubectl create clusterrolebinding default \
--clusterrole=edit --serviceaccount=default:default --namespace=default
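If you want to confirm what the service account is allowed to do before and after applying the binding, a quick check is shown below. This assumes everything runs in the default namespace, as in the log above:

$ kubectl auth can-i get pods \
  --as=system:serviceaccount:default:default --namespace=default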
You can take a look at this cheat sheet.
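If you would rather not grant extra permissions to the default service account, a common alternative is to create a dedicated service account for the Spark driver and point spark-submit at it. The account name spark below is just an example, not something from the original question:

$ kubectl create serviceaccount spark --namespace=default
$ kubectl create clusterrolebinding spark-role \
  --clusterrole=edit --serviceaccount=default:spark --namespace=default

Then add this option to the spark-submit command so the driver pod uses that account:

--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark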
Regarding "apache-spark - Why can't the external scheduler be instantiated when running Spark on minikube/Kubernetes?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/62543646/