
apache-spark - Expected HTTP 101 response but was '403 Forbidden'


Spark version: 2.3.3
Kubernetes version: v1.15.3

I'm getting the exception below while running Spark code on Kubernetes.
Even though I assigned a Role and RoleBinding and tried again, I still get the same exception. If anyone has run into this kind of exception, please suggest a solution.

2019-09-11 10:35:54 WARN KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:54 WARN KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2019-09-11 10:35:57 WARN WatchConnectionManager:185 - Exec Failure: HTTP 403, Status: 403 -
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:216)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:183)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 ERROR SparkContext:91 - Error initializing SparkContext.
io.fabric8.kubernetes.client.KubernetesClientException:
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO AbstractConnector:318 - Stopped Spark@7c351808{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-09-11 10:35:57 INFO SparkUI:54 - Stopped Spark web UI at http://spark-pi-8ee39f55094a39cc9f6d34d8739549d2-driver-svc.default.svc:4040
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend:54 - Shutting down all executors
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint:54 - Asking each executor to shut down
2019-09-11 10:35:57 INFO KubernetesClusterSchedulerBackend:54 - Closing kubernetes client
2019-09-11 10:35:57 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-09-11 10:35:57 INFO MemoryStore:54 - MemoryStore cleared
2019-09-11 10:35:57 INFO BlockManager:54 - BlockManager stopped
2019-09-11 10:35:57 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2019-09-11 10:35:57 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2019-09-11 10:35:57 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-09-11 10:35:57 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException:
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:188)
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-09-11 10:35:57 INFO ShutdownHookManager:54 - Shutdown hook called
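
A 403 on the watch request means the API server rejected the driver's credentials, so one useful check is whether the service account can actually watch pods. A quick check with kubectl, assuming the account is named spark in the default namespace as in the submit command below (the pod name is a placeholder):

# Ask the API server whether the spark service account may watch pods.
kubectl auth can-i watch pods --as=system:serviceaccount:default:spark -n default

# Inspect the failed driver pod's events and log for more detail.
kubectl describe pod <driver-pod-name>
kubectl logs <driver-pod-name>

If the first command prints "yes" yet the error persists, the cause is likely not RBAC but the client library issue described in the answer below.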


I created a Role and RoleBinding and tried again, but it didn't help.
I even reset Kubernetes and retried after the reset, but I'm still facing the same issue.
I can't find a solution for this on Google.
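
For reference, the RBAC setup suggested by the Spark on Kubernetes documentation looks roughly like this (the account name spark and the default namespace match the submit command below):

# Create a service account for the driver and grant it the edit role,
# which covers creating, listing, watching and deleting executor pods.
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit \
  --serviceaccount=default:spark --namespace=default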

Below is the spark-submit command I'm using:

nohup bin/spark-submit \
  --master k8s://https://192.168.154.58:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.JavaSparkPi \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=innoeye123/spark:latest \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.3.jar > tool.log &

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.spark.examples;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.List;

/**
 * Computes an approximation to pi.
 * Usage: JavaSparkPi [partitions]
 */
public final class JavaSparkPi {

  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaSparkPi")
      .getOrCreate();

    JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

    int slices = (args.length == 1) ? Integer.parseInt(args[0]) : 2;
    int n = 100000 * slices;
    List<Integer> l = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
      l.add(i);
    }

    JavaRDD<Integer> dataSet = jsc.parallelize(l, slices);

    int count = dataSet.map(integer -> {
      double x = Math.random() * 2 - 1;
      double y = Math.random() * 2 - 1;
      return (x * x + y * y <= 1) ? 1 : 0;
    }).reduce((integer, integer2) -> integer + integer2);

    System.out.println("Pi is roughly " + 4.0 * count / n);

    spark.stop();
  }
}
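
To rule out the application itself, the same example class can be run in local mode first; a minimal sanity check, assuming a standard Spark 2.3.3 distribution layout:

# Run JavaSparkPi locally with 10 partitions; no Kubernetes involved.
bin/spark-submit --master "local[2]" \
  --class org.apache.spark.examples.JavaSparkPi \
  examples/jars/spark-examples_2.11-2.3.3.jar 10

If this prints the Pi estimate, the failure is specific to the Kubernetes scheduler backend rather than the job code.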


Expected result: the spark-submit command should run smoothly and terminate successfully, leaving a completed pod.
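
For what it's worth, a successful run could be confirmed like this (the driver pod name is a placeholder; each submission generates its own suffix, as visible in the log above):

# Watch the driver pod until it reaches the Completed state.
kubectl get pods -w

# The Pi estimate is printed in the driver's log.
kubectl logs spark-pi-<suffix>-driver | grep "Pi is roughly"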

Best Answer

It looks like this is the reported issue SPARK-28921. Affected Spark versions:

  • 2.3.0
  • 2.3.1
  • 2.3.3
  • 2.4.0
  • 2.4.1
  • 2.4.2
  • 2.4.3
  • 2.4.4

Check whether you are using one of the versions above.

A fix for this is available in:

  • 2.4.5
  • 3.0.0

You may need to upgrade; if that is not immediately possible, a workaround sketch follows below.
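
A workaround that has been reported against SPARK-28921 is to swap the fabric8 kubernetes-client jar bundled with Spark for a newer release and rebuild the container image. This is a sketch only: the jar version below is an assumption and should be verified against the JIRA ticket.

# Replace the bundled fabric8 client with a release that copes with
# newer Kubernetes API servers (4.4.2 is illustrative; confirm the
# compatible version in SPARK-28921).
cd $SPARK_HOME/jars
rm kubernetes-client-*.jar
wget https://repo1.maven.org/maven2/io/fabric8/kubernetes-client/4.4.2/kubernetes-client-4.4.2.jar

# Rebuild and push the image, then point
# spark.kubernetes.container.image at the new tag.
cd $SPARK_HOME
./bin/docker-image-tool.sh -r innoeye123 -t patched build
./bin/docker-image-tool.sh -r innoeye123 -t patched push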

Regarding apache-spark - Expected HTTP 101 response but was '403 Forbidden', a similar question was found on Stack Overflow: https://stackoverflow.com/questions/57887672/
