
python - Spark 1.5.2 + Hadoop 2.6.2: spark-submit and pyspark do not use all nodes in standalone mode


I am running into a problem when running spark-submit or pyspark in standalone mode, like so:

spark/bin/pyspark --master spark://<SPARK_IP>:<SPARK_PORT>

This normally creates a running Spark application in the UI that uses all of the nodes (at least it did in previous versions).

For some reason it now only runs on the master node, even though the UI shows all of the nodes connected to the master, and the logs on the worker nodes contain no errors. Does anyone have an idea what might be going wrong? For reference, my spark-env.sh has the following configuration:

export HADOOP_CONF_DIR=/mnt/hadoop/etc/hadoop
export SPARK_PUBLIC_DNS=<PUBLIC_DNS>
export SPARK_MASTER_IP=<PRIVATE_DNS>
export SPARK_MASTER_PORT=7077
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/mnt/hadoop/share/hadoop/tools/lib/*
export SPARK_JAVA_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_TMP_DIR="/mnt/persistent/hadoop"
export SPARK_MASTER_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_WORKER_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_DRIVER_MEMORY=5g
export SPARK_EXECUTOR_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_EXECUTOR_MEMORY=23g

Here is what comes up after launching PySpark:

Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
15/12/24 01:36:38 INFO spark.SparkContext: Running Spark version 1.5.2
15/12/24 01:36:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/24 01:36:38 WARN spark.SparkConf:
SPARK_JAVA_OPTS was detected (set to '-Djava.io.tmpdir=/mnt/persistent/hadoop').
This is deprecated in Spark 1.0+.

Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)

15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Djava.io.tmpdir=/mnt/persistent/hadoop' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Djava.io.tmpdir=/mnt/persistent/hadoop' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to ':/mnt/hadoop/share/hadoop/tools/lib/*').
This is deprecated in Spark 1.0+.

Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath

15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/mnt/hadoop/share/hadoop/tools/lib/*' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/mnt/hadoop/share/hadoop/tools/lib/*' as a work-around.
15/12/24 01:36:38 INFO spark.SecurityManager: Changing view acls to: ubuntu
15/12/24 01:36:38 INFO spark.SecurityManager: Changing modify acls to: ubuntu
15/12/24 01:36:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/12/24 01:36:39 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/12/24 01:36:39 INFO Remoting: Starting remoting
15/12/24 01:36:40 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@xx.xx.xx.xx:34065]
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'sparkDriver' on port 34065.
15/12/24 01:36:40 INFO spark.SparkEnv: Registering MapOutputTracker
15/12/24 01:36:40 INFO spark.SparkEnv: Registering BlockManagerMaster
15/12/24 01:36:40 INFO storage.DiskBlockManager: Created local directory at /mnt/persistent/hadoop/blockmgr-16d59ac7-dc2d-4cf7-ad52-91ff1035a86d
15/12/24 01:36:40 INFO storage.MemoryStore: MemoryStore started with capacity 2.6 GB
15/12/24 01:36:40 INFO spark.HttpFileServer: HTTP File server directory is /mnt/persistent/hadoop/spark-c6ea28f7-13dc-4799-aea7-0638cff35936/httpd-006916ff-7f84-4ad9-8fb5-bce471d73d5a
15/12/24 01:36:40 INFO spark.HttpServer: Starting HTTP Server
15/12/24 01:36:40 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/24 01:36:40 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:50882
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'HTTP file server' on port 50882.
15/12/24 01:36:40 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/12/24 01:36:40 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/24 01:36:40 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/12/24 01:36:40 INFO ui.SparkUI: Started SparkUI at http://xx.xx.xx.xx:4040
15/12/24 01:36:40 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/12/24 01:36:40 INFO client.AppClient$ClientEndpoint: Connecting to master spark://xx.xx.xx.xx:7077...
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151224013641-0001
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor added: app-20151224013641-0001/0 on worker-20151224013503-xx.xx.xx.xx-40801 (xx.xx.xx.xx:40801) with 4 cores
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20151224013641-0001/0 on hostPort xx.xx.xx.xx:40801 with 4 cores, 23.0 GB RAM
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor updated: app-20151224013641-0001/0 is now LOADING
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor updated: app-20151224013641-0001/0 is now RUNNING
15/12/24 01:36:41 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58297.
15/12/24 01:36:41 INFO netty.NettyBlockTransferService: Server created on 58297
15/12/24 01:36:41 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/12/24 01:36:41 INFO storage.BlockManagerMasterEndpoint: Registering block manager xx.xx.xx.xx:58297 with 2.6 GB RAM, BlockManagerId(driver, xx.xx.xx.xx, 58297)
15/12/24 01:36:41 INFO storage.BlockManagerMaster: Registered BlockManager
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/

Using Python version 2.7.6 (default, Jun 22 2015 17:58:13)
SparkContext available as sc, HiveContext available as sqlContext.
>>> 15/12/24 01:36:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@xx.xx.xx.xx:38929/user/Executor#412940208]) with ID 0
15/12/24 01:36:44 INFO storage.BlockManagerMasterEndpoint: Registering block manager xx.xx.xx.xx:44977 with 11.9 GB RAM, BlockManagerId(0, xx.xx.xx.xx, 44977)
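As an aside, the deprecation warnings at the top of the log point at conf/spark-defaults.conf. A minimal sketch of the equivalent settings, simply reusing the paths from the spark-env.sh above (not verified on this cluster), might be:

# conf/spark-defaults.conf -- same tmpdir and classpath as the spark-env.sh shown earlier
spark.driver.extraJavaOptions     -Djava.io.tmpdir=/mnt/persistent/hadoop
spark.executor.extraJavaOptions   -Djava.io.tmpdir=/mnt/persistent/hadoop
spark.driver.extraClassPath       /mnt/hadoop/share/hadoop/tools/lib/*
spark.executor.extraClassPath     /mnt/hadoop/share/hadoop/tools/lib/*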

Thanks in advance, Jack

Best Answer

I ran into a similar problem where the master silently ignored some of the slaves. It came down to the following:

If the application requests resources for its executors that some slaves cannot provide, those slaves are excluded automatically, without any warning.

For example, if the application asks for executors with 6 cores and 11g of RAM while a slave only offers 3 cores, that slave will not receive any tasks from the application. If the number of cores is not specified in the application settings, the application takes the maximum number of cores each slave allows; this does not apply to memory, however.
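One quick way to test this against the configuration above: SPARK_EXECUTOR_MEMORY is set to 23g, so any worker that advertises less than 23 GB would be skipped without any warning. Launching the shell with a smaller, explicit resource request (the values below are only placeholders, not a recommendation) should make the missing workers show up:

# placeholder values -- pick a request every worker in the cluster can satisfy
spark/bin/pyspark --master spark://<SPARK_IP>:<SPARK_PORT> \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=2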

Regarding "python - Spark 1.5.2 + Hadoop 2.6.2: spark-submit and pyspark do not use all nodes in standalone mode", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34445632/
