
hadoop - Oozie workflow failing with error JA017

Reposted. Author: 可可西里. Updated: 2023-11-01 14:51:41

I am using Apache Oozie 4.3.0 with Hadoop 2.7.3.

I have developed a very simple Oozie workflow with a single sqoop action that exports system events to a MySQL table.

<workflow-app name="WorkflowWithSqoopAction" xmlns="uri:oozie:workflow:0.1">
    <start to="sqoopAction"/>
    <action name="sqoopAction">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>export --connect jdbc:mysql://localhost/airawat --username devUser --password myPwd --table eventsgranularreport --direct --enclosed-by '\"' --export-dir /user/hive/warehouse/eventsgranularreport </command>
        </sqoop>
        <ok to="end"/>
        <error to="killJob"/>
    </action>
    <kill name="killJob">
        <message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
    </kill>
    <end name="end"/>
</workflow-app>

I deployed the application in HDFS as follows:

hdfs dfs -ls -R /oozieProject | awk '{ print $8 }'

/oozieProject/workflowSqoopAction
/oozieProject/workflowSqoopAction/README.md
/oozieProject/workflowSqoopAction/job.properties
/oozieProject/workflowSqoopAction/workflow.xml

hdfs dfs -ls -d /oozieProject

drwxr-xr-x - sergio supergroup 0 2017-04-15 14:08 /oozieProject

I included the following configuration in job.properties:

#*****************************
# job.properties
#*****************************

nameNode=hdfs://localhost:9000
jobTracker=localhost:8032
queueName=default

mapreduce.job.user.name=sergio
user.name=sergio
oozie.libpath=${nameNode}/oozieProject/share/lib
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true

oozieProjectRoot=${nameNode}/oozieProject
appPath=${oozieProjectRoot}/workflowSqoopAction
oozie.wf.application.path=${appPath}

Then I submitted the job to the Oozie server and started it:

oozie job -oozie http://localhost:11000/oozie -config /home/sergio/git/hadoop_samples/hadoop_examples/src/main/java/org/sanchez/sergio/hadoop_examples/oozie/workflowSqoopAction/job.properties -submit

oozie job -oozie http://localhost:11000/oozie -start 0000001-170415112256550-oozie-serg-W
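As an aside, the two commands above can be collapsed into one: the Oozie CLI's `-run` flag submits and starts the workflow in a single call, so no job ID has to be copied back in. A sketch (the command is echoed rather than executed here, since it needs a live Oozie server at localhost:11000):

```shell
# -run combines -submit and -start in one step.
OOZIE_URL="http://localhost:11000/oozie"
PROPS="/home/sergio/git/hadoop_samples/hadoop_examples/src/main/java/org/sanchez/sergio/hadoop_examples/oozie/workflowSqoopAction/job.properties"
# Echoed only, because it requires a running Oozie server.
echo "oozie job -oozie $OOZIE_URL -config $PROPS -run"
```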

Shortly afterwards, the Oozie web console shows that the job has failed:


The sqoopAction fails with the following error message:

JA017: Could not lookup launched hadoop Job ID [job_local245204272_0008] which was associated with  action [0000001-170415112256550-oozie-serg-W@sqoopAction].  Failing this action!
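A job ID of the form `job_local…`, as in the message above, means the launcher ran under Hadoop's LocalJobRunner instead of YARN, so Oozie cannot find the job through the ResourceManager/JobHistoryServer and fails the action with JA017. A minimal sketch to check which framework mapred-site.xml configures (the config path is an assumption; the script falls back to a sample fragment so it runs even without a Hadoop install):

```shell
# Sketch: check whether MapReduce is configured to run on YARN.
# Assumed location; fall back to a sample fragment if it is not readable.
CONF="${HADOOP_CONF_DIR:-$HADOOP_HOME/etc/hadoop}/mapred-site.xml"
if [ ! -r "$CONF" ]; then
  CONF="$(mktemp)"
  # Sample fragment standing in for a real mapred-site.xml.
  printf '<property><name>mapreduce.framework.name</name><value>yarn</value></property>\n' > "$CONF"
fi
if grep -q '<value>yarn</value>' "$CONF"; then
  echo "mapreduce.framework.name is yarn: launchers get real YARN job IDs"
else
  echo "MapReduce uses the LocalJobRunner: expect job_local* IDs and JA017"
fi
```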

Can anyone guide me in resolving this error?

Running daemons:

jps

2576
6130 ResourceManager
3267 DataNode
10102 JobHistoryServer
3129 NameNode
24650 Jps
6270 NodeManager
3470 SecondaryNameNode
4190 Bootstrap

Best Answer

You are missing some configuration properties in Hadoop. I am also using hadoop-2.7.3 with Oozie-4.3 and ran into the same problem over the past five days.

After configuring the following properties, it ran on my local setup:

yarn-site.xml:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>

mapred-site.xml:

<property>
<name>mapreduce.jobtracker.address</name>
<value>HOST:PORT</value>
</property>

<property>
<name>mapreduce.jobtracker.http.address</name>
<value>HOST:PORT</value>
</property>

<property>
<name>mapreduce.tasktracker.report.address</name>
<value>127.0.0.1:0</value>
</property>

<property>
<name>mapreduce.tasktracker.http.address</name>
<value>0.0.0.0:50060</value>
</property>

<property>
<name>mapreduce.job.queuename</name>
<value>default</value>
</property>

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

<property>
<name>mapreduce.jobhistory.address</name>
<value>localhost:10020</value>
</property>

<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>localhost:19888</value>
</property>

Replace the placeholder values (HOST:PORT) with the actual values for your setup. Then restart YARN, Hadoop, and Oozie.
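For a single-node setup, the restart sequence might look like the sketch below. Script locations assume a standard Hadoop 2.x tarball install under `$HADOOP_HOME` with `oozied.sh` on the PATH; with `DRY_RUN=1` (the default here) the commands are only printed, not executed:

```shell
# Helper: print commands instead of running them unless DRY_RUN=0.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run "$HADOOP_HOME/sbin/stop-yarn.sh"
run "$HADOOP_HOME/sbin/stop-dfs.sh"
run "$HADOOP_HOME/sbin/start-dfs.sh"
run "$HADOOP_HOME/sbin/start-yarn.sh"
# The JobHistoryServer must be up, or Oozie cannot look up finished launcher jobs.
run "$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh" start historyserver
run oozied.sh stop
run oozied.sh start
```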

Good luck.

Regarding "hadoop - Oozie workflow failing with error JA017", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43426691/
