
apache-spark - Spark SQL Thrift server won't run in cluster mode?


In Spark 1.2.0, when I try to start the Spark SQL Thrift server in cluster mode, I get the following output:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/java/latest/bin/java -cp ::/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/sbin/../conf:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/home/tpanning/Projects/spark/spark-1.2.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar -XX:MaxPermSize=128m -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --deploy-mode cluster --master spark://xd-spark.xdata.data-tactics-corp.com:7077 spark-internal
========================================

Jar url 'spark-internal' is not in valid format.
Must be a jar file path in URL format (e.g. hdfs://host:port/XX.jar, file:///XX.jar)

Usage: DriverClient [options] launch <active-master> <jar-url> <main-class> [driver options]
Usage: DriverClient kill <active-master> <driver-id>

Options:
-c CORES, --cores CORES Number of cores to request (default: 1)
-m MEMORY, --memory MEMORY Megabytes of memory to request (default: 512)
-s, --supervise Whether to restart the driver on failure
-v, --verbose Print more debugging output

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
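
For reference, the invocation that produced this output was along the following lines (a reconstruction from the Spark Command line above; the start-thriftserver.sh script path is an assumption, while the master URL comes from the log):

./sbin/start-thriftserver.sh \
  --master spark://xd-spark.xdata.data-tactics-corp.com:7077 \
  --deploy-mode cluster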

The "spark-internal" argument seems to be a special flag that tells spark-submit the class to run is part of Spark's own libraries, so no jar needs to be distributed. But for some reason that doesn't work here: judging from the usage message above, in standalone cluster mode spark-submit delegates to DriverClient, which requires a concrete jar URL (hdfs:// or file:///) and rejects the spark-internal placeholder.

Best Answer

I filed this as SPARK-5176. It will be addressed with an error message that explains that the Thrift server can't run in cluster mode.
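
Until that lands, the workaround is to launch the Thrift server in the default client deploy mode, so the driver runs on the submitting machine. A minimal sketch, simply dropping --deploy-mode cluster and reusing the master URL from the log above:

./sbin/start-thriftserver.sh \
  --master spark://xd-spark.xdata.data-tactics-corp.com:7077

Because the driver stays local, spark-submit never hands the job off to DriverClient, so the spark-internal placeholder is handled internally and the server starts.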

Regarding "apache-spark - Spark SQL Thrift server won't run in cluster mode?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/27956082/
