
apache-spark - Running Spark on RapidMiner: error occurred during submitting or starting the spark job

Reposted · Author: 可可西里 · Updated: 2023-11-01 16:35:34

I am using RapidMiner to extract rules from a large dataset. Radoop is its extension for the Hadoop ecosystem, and the SparkRM operator makes it possible to run FP-Growth, from retrieving the data from Hive through the exploratory analysis. My setup is:

- Windows 8.1
- Hadoop 2.6
- Spark 1.5
- Hive 2.1

I have configured spark-defaults.conf as follows:

# spark.master                     yarn
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 2G
# spark.driver.cores 1
# spark.yarn.driver.memoryOverhead 384MB
# spark.yarn.am.memory 1G
# spark.yarn.am.cores 1
# spark.yarn.am.memoryOverhead 384MB
# spark.executor.memory 1G
# spark.executor.instances 1
# spark.executor.cores 1
# spark.yarn.executor.memoryOverhead 384MB
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
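Note that every line above starts with `#`, i.e. they are all comments, so Spark falls back to its built-in defaults. For comparison, here is a sketch of an uncommented spark-defaults.conf sized to fit the 2048 MB / 1-vcore NodeManager configured below; the memory values are assumptions, not a tested configuration:

```properties
# Sketch only: uncommented spark-defaults.conf for a single
# 2048 MB / 1-vcore NodeManager. Values are assumptions.
spark.master                        yarn
spark.serializer                    org.apache.spark.serializer.KryoSerializer
spark.driver.memory                 512m
spark.yarn.driver.memoryOverhead    384
spark.executor.instances            1
spark.executor.cores                1
spark.executor.memory               512m
spark.yarn.executor.memoryOverhead  384
```

In Spark 1.5, `spark.yarn.driver.memoryOverhead` and `spark.yarn.executor.memoryOverhead` take a plain number of megabytes (e.g. `384`), not a value with a unit like `384MB`.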

The yarn-site.xml file I have:

<configuration>

<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:8030</value>
</property>

<property>
<name>yarn.resourcemanager.admin.address</name>
<value>localhost:8033</value>
</property>

<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:8031</value>
</property>

<property>
<name>yarn.resourcemanager.resource.cpu-vcores</name>
<value>2</value>
</property>

<property>
<name>yarn.resourcemanager.resource.memory-mb</name>
<value>2048</value>
</property>

<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value>
</property>

<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8032</value>
</property>

<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>localhost:8088</value>
</property>

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/E:/tweets/hadoopConf/userlog</value>
<final>true</final>
</property>

<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/E:/tweets/hadoopConf/temp/nm-localdir</value>
</property>

<property>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>600</value>
</property>

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>

<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
</property>

<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>

<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>3</value>
</property>

<property>
<name>yarn.application.classpath</name>
<value>
/tweets/hadoop/,
/tweets/hadoop/share/hadoop/common/*,
/tweets/hadoop/share/hadoop/common/lib/*,
/tweets/hadoop/share/hadoop/hdfs/*,
/tweets/hadoop/share/hadoop/hdfs/lib/*,
/tweets/hadoop/share/hadoop/mapreduce/*,
/tweets/hadoop/share/hadoop/mapreduce/lib/*,
/tweets/hadoop/share/hadoop/yarn/*,
/tweets/hadoop/share/hadoop/yarn/lib/*,
/C:/spark/lib/spark-assembly-1.5.0-hadoop2.6.0.jar
</value>
</property>
</configuration>

The Hadoop quick connection test completes successfully. But when I run the RapidMiner process, it ends with this error:

Process failed before getting into running state. this indicates that an error occurred during submitting or starting the spark job or writing the process output or the exception to the disc. Please check the logs of the spark job on the YARN Resource Manager interface for more information about the error.

On localhost:8088 I get this diagnostic message: [screenshot: YARN application diagnostics]

And this is the scheduler view for the job: [screenshot: YARN scheduler]

I am new to Hadoop and Spark, and I cannot manage to configure the memory settings effectively.
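The mismatch can be seen with a little arithmetic: YARN rounds each container request up to a multiple of `yarn.scheduler.minimum-allocation-mb` and rejects requests above `yarn.scheduler.maximum-allocation-mb`. A sketch, assuming the commented spark-defaults values above (2 GB driver, 1 GB executor, 384 MB overheads) were meant to be active:

```python
import math

MIN_ALLOC_MB = 512    # yarn.scheduler.minimum-allocation-mb
MAX_ALLOC_MB = 2048   # yarn.scheduler.maximum-allocation-mb
NODE_MB = 2048        # yarn.nodemanager.resource.memory-mb

def container_mb(heap_mb, overhead_mb):
    """YARN rounds each container request up to a multiple of the minimum allocation."""
    return math.ceil((heap_mb + overhead_mb) / MIN_ALLOC_MB) * MIN_ALLOC_MB

driver = container_mb(2048, 384)    # spark.driver.memory + driver.memoryOverhead
executor = container_mb(1024, 384)  # spark.executor.memory + executor.memoryOverhead

print(driver, executor)             # 2560 1536
print(driver > MAX_ALLOC_MB)        # True: the driver alone exceeds the max allocation
print(driver + executor > NODE_MB)  # True: the total exceeds the node's 2048 MB
```

So with these settings the driver container can never be granted, and even a smaller driver plus one executor would not fit on the single 2048 MB node at the same time.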

Best Answer

This error message says that the submitted job could not allocate the cluster resources it required (vcores, memory) before the timeout, so it could not run (it most likely requested more resources than are available, otherwise it could wait forever). Based on the contents of your yarn-site.xml, I assume the cluster is deployed on localhost. In that case you can check the resources available for Spark-on-YARN jobs on the http://localhost:8088/cluster/scheduler page (a.k.a. the YARN Resource Manager interface). While the Radoop process is executing, you can also inspect the corresponding YARN/Spark application logs there for more information about how many resources, and of what kind, were requested. With that information you can fine-tune your cluster, most likely in the direction of letting applications use more resources.
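Those advertised resources can also be checked programmatically. A sketch using the ResourceManager REST API (this assumes the RM web UI at localhost:8088; the `/ws/v1/cluster/metrics` endpoint and its `availableMB` / `availableVirtualCores` fields are standard in Hadoop 2.x):

```python
import json
from urllib.request import urlopen

def cluster_metrics(rm="http://localhost:8088"):
    """Fetch cluster-wide metrics from the YARN ResourceManager REST API."""
    with urlopen(rm + "/ws/v1/cluster/metrics") as resp:
        return json.load(resp)["clusterMetrics"]

def fits(metrics, needed_mb, needed_vcores):
    """Does a request fit into what the cluster currently advertises as free?"""
    return (metrics["availableMB"] >= needed_mb
            and metrics["availableVirtualCores"] >= needed_vcores)

# Example with the values from the yarn-site.xml above (no live cluster needed):
advertised = {"availableMB": 2048, "availableVirtualCores": 1}
print(fits(advertised, 4096, 2))  # False: such a job cannot be scheduled
```

If `fits` keeps returning False for what your Spark job requests, either shrink the job's memory/vcore settings or raise the NodeManager's resources.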

I would also suggest looking into the Radoop documentation to check which resource allocation setting fits both your use case and your system. Radoop can use different resource allocation policies for executing its Spark jobs; these policies describe how Radoop requests resources from YARN for the Spark jobs. By tuning this setting you may be able to fit within the resources available on the cluster side. You can read more about these policies here.

Regarding apache-spark - Running Spark on RapidMiner: error occurred during submitting or starting the spark job, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53814223/
