I am trying to run a MapReduce program from Oozie, but it fails with the following error: JA017: Unknown hadoop job [job_local100982864_0001] associated with action [0000000-191002180059803-oozie-hdus-W@RunMapreduceJob]. Failing this action!
Here is the workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="simple-Workflow">
  <start to="RunMapreduceJob" />
  <action name="RunMapreduceJob">
    <map-reduce>
      <job-tracker>localhost:8088</job-tracker>
      <name-node>hdfs://localhost:9000</name-node>
      <prepare>
        <delete path="hdfs://localhost:9000/dataoutput"/>
      </prepare>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>default</value>
        </property>
        <property>
          <name>mapred.mapper.class</name>
          <value>MovieReviewsHadoop.DataDividerByUser.DataDividerMapper</value>
        </property>
        <property>
          <name>mapred.reducer.class</name>
          <value>MovieReviewsHadoop.DataDividerByUser.DataDividerReducer</value>
        </property>
        <property>
          <name>mapred.output.key.class</name>
          <value>org.apache.hadoop.io.IntWritable</value>
        </property>
        <property>
          <name>mapred.output.value.class</name>
          <value>org.apache.hadoop.io.Text</value>
        </property>
        <property>
          <name>mapred.input.dir</name>
          <value>/data</value>
        </property>
        <property>
          <name>mapred.output.dir</name>
          <value>/dataoutput</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="end" />
    <error to="fail" />
  </action>
  <kill name="fail">
    <message>Mapreduce program Failed</message>
  </kill>
  <end name="end" />
</workflow-app>
And here is the MapReduce program, DataDividerByUser.java:

package MovieReviewsHadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import java.io.IOException;

public class DataDividerByUser {

    public static class DataDividerMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        // Map method: divide the data by user.
        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // Input line format: user,movie,rating
            String[] user_movie_rating = value.toString().trim().split(","); // three fields
            int userID = Integer.parseInt(user_movie_rating[0]);
            String movieID = user_movie_rating[1];
            String rating_score = user_movie_rating[2];
            context.write(new IntWritable(userID), new Text(movieID + ':' + rating_score));
        }
    }

    public static class DataDividerReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        // Reduce method.
        @Override
        public void reduce(IntWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Merge the data for one user. Iterate with for-each rather than
            // calling values.iterator() twice per loop, which relies on Hadoop
            // handing back the same iterator instance each time.
            StringBuilder strblder = new StringBuilder();
            for (Text value : values) {
                strblder.append(",").append(value);
            }
            // Key-value pair: key = userID, value = movie1:rating_score,movie2:rating_score,...
            context.write(key, new Text(strblder.toString().replaceFirst(",", "")));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Movie");
        job.setMapperClass(DataDividerMapper.class);
        job.setReducerClass(DataDividerReducer.class);
        job.setJarByClass(DataDividerByUser.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        TextInputFormat.setInputPaths(job, new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
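For reference, the job condenses user,movie,rating lines into one line per user. Below is a minimal plain-Java sketch of that transformation (illustrative only; the sample data is invented and no Hadoop is involved):

// DataDividerSketch.java -- hypothetical standalone illustration of the
// map/reduce logic above, run entirely in memory.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DataDividerSketch {
    public static void main(String[] args) {
        List<String> input = Arrays.asList("1,10001,5.0", "1,10002,3.0", "2,10001,4.0");
        Map<Integer, StringBuilder> byUser = new TreeMap<>();
        for (String line : input) {
            String[] f = line.trim().split(",");          // user,movie,rating
            byUser.computeIfAbsent(Integer.parseInt(f[0]), k -> new StringBuilder())
                  .append(",").append(f[1]).append(":").append(f[2]);
        }
        // Prints:
        // 1    10001:5.0,10002:3.0
        // 2    10001:4.0
        byUser.forEach((user, movies) ->
                System.out.println(user + "\t" + movies.substring(1)));
    }
}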
And the job.properties file:
nameNode=hdfs://localhost:9000
jobTracker=localhost:8088
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/Config
The Oozie log:

2019-10-08 16:49:26,519 INFO ActionStartXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@:start:] Start action [0000005-191006102551747-oozie-hdus-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2019-10-08 16:49:26,520 INFO ActionStartXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@:start:] [***0000005-191006102551747-oozie-hdus-W@:start:***]Action status=DONE
2019-10-08 16:49:26,520 INFO ActionStartXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@:start:] [***0000005-191006102551747-oozie-hdus-W@:start:***]Action updated in DB!
2019-10-08 16:49:26,625 INFO ActionStartXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] Start action [0000005-191006102551747-oozie-hdus-W@mr-node] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2019-10-08 16:49:27,201 WARN MapReduceActionExecutor:544 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] Exception in check(). Message[JA017: Unknown hadoop job [job_local1373301427_0005] associated with action [0000005-191006102551747-oozie-hdus-W@mr-node]. Failing this action!]
org.apache.oozie.action.ActionExecutorException: JA017: Unknown hadoop job [job_local1373301427_0005] associated with action [0000005-191006102551747-oozie-hdus-W@mr-node]. Failing this action!
at org.apache.oozie.action.hadoop.JavaActionExecutor.check(JavaActionExecutor.java:1201)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1136)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-10-08 16:49:27,201 WARN ActionStartXCommand:544 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] Error starting action [mr-node]. ErrorType [FAILED], ErrorCode [JA017], Message [JA017: Unknown hadoop job [job_local1373301427_0005] associated with action [0000005-191006102551747-oozie-hdus-W@mr-node]. Failing this action!]
org.apache.oozie.action.ActionExecutorException: JA017: Unknown hadoop job [job_local1373301427_0005] associated with action [0000005-191006102551747-oozie-hdus-W@mr-node]. Failing this action!
at org.apache.oozie.action.hadoop.JavaActionExecutor.check(JavaActionExecutor.java:1201)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1136)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-10-08 16:49:27,202 WARN ActionStartXCommand:544 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] Failing Job due to failed action [mr-node]
2019-10-08 16:49:27,203 WARN LiteWorkflowInstance:544 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] Workflow Failed. Failing node [mr-node]
2019-10-08 16:49:27,276 INFO KillXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[-] STARTED WorkflowKillXCommand for jobId=0000005-191006102551747-oozie-hdus-W
2019-10-08 16:49:27,312 INFO KillXCommand:541 - SERVER[localhost] USER[hduser] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[-] ENDED WorkflowKillXCommand for jobId=0000005-191006102551747-oozie-hdus-W
2019-10-08 16:49:27,613 INFO CallbackServlet:541 - SERVER[localhost] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] callback for action [0000005-191006102551747-oozie-hdus-W@mr-node]
2019-10-08 16:49:27,619 ERROR CompletedActionXCommand:538 - SERVER[localhost] USER[-] GROUP[-] TOKEN[] APP[-] JOB[0000005-191006102551747-oozie-hdus-W] ACTION[0000005-191006102551747-oozie-hdus-W@mr-node] XException,
org.apache.oozie.command.CommandException: E0800: Action it is not running its in [FAILED] state, action [0000005-191006102551747-oozie-hdus-W@mr-node]
at org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:77)
at org.apache.oozie.command.XCommand.call(XCommand.java:251)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
  <start to="mr-node"/>
  <action name="mr-node">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}"/>
      </prepare>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
        <property>
          <name>mapred.mapper.class</name>
          <value>MovieReviewsHadoop.DataDividerByUser$DataDividerMapper</value>
        </property>
        <property>
          <name>mapred.reducer.class</name>
          <value>MovieReviewsHadoop.DataDividerByUser$DataDividerReducer</value>
        </property>
        <property>
          <name>mapred.map.tasks</name>
          <value>1</value>
        </property>
        <property>
          <name>mapred.input.dir</name>
          <value>/user/${wf:user()}/${examplesRoot}/input-data/text</value>
        </property>
        <property>
          <name>mapred.output.dir</name>
          <value>/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
job.properties:

nameNode=hdfs://localhost:9000
jobTracker=localhost:8088
queueName=default
examplesRoot=examples
user.name=hduser
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce
https://prnt.sc/pgq4ea
<map-reduce xmlns="uri:oozie:workflow:0.2">
  <job-tracker>localhost:8088</job-tracker>
  <name-node>hdfs://localhost:9000</name-node>
  <prepare>
    <delete path="hdfs://localhost:9000/user/hduser/examples/output-data/map-reduce" />
  </prepare>
  <configuration>
    <property>
      <name>mapred.job.queue.name</name>
      <value>default</value>
    </property>
    <property>
      <name>mapred.mapper.class</name>
      <value>MovieReviewsHadoop.DataDividerByUser$DataDividerMapper</value>
    </property>
    <property>
      <name>mapred.reducer.class</name>
      <value>MovieReviewsHadoop.DataDividerByUser$DataDividerReducer</value>
    </property>
    <property>
      <name>mapred.map.tasks</name>
      <value>1</value>
    </property>
    <property>
      <name>mapred.input.dir</name>
      <value>/user/hduser/examples/input-data/text</value>
    </property>
    <property>
      <name>mapred.output.dir</name>
      <value>/user/hduser/examples/output-data/map-reduce</value>
    </property>
  </configuration>
</map-reduce>
Best Answer
I think this is the reason (though I won't delete my previous answer until you clarify my question below): since you created the mapper and reducer classes as static nested classes inside the single DataDividerByUser class, you need to specify them in the workflow as:
<property>
  <name>mapreduce.map.class</name>
  <value>MovieReviewsHadoop.DataDividerByUser$DataDividerMapper</value>
</property>
<property>
  <name>mapreduce.reduce.class</name>
  <value>MovieReviewsHadoop.DataDividerByUser$DataDividerReducer</value>
</property>
Note the $ replacing the . before the nested class names.
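The $ matters because the JVM resolves a nested class by its binary name (outer and inner class separated by $), and the class name configured in the workflow is ultimately resolved by Class.forName-style loading, which expects that binary name. A minimal sketch demonstrating the difference (a hypothetical standalone class in the default package, not part of the original post):

// BinaryNameDemo.java -- hypothetical example: nested classes load only by
// their binary name, which uses '$' rather than '.'.
public class BinaryNameDemo {
    public static class Inner {}

    public static void main(String[] args) {
        try {
            // Binary name with '$' resolves.
            System.out.println(Class.forName("BinaryNameDemo$Inner").getName());
        } catch (ClassNotFoundException e) {
            System.out.println("unexpected: " + e);
        }
        try {
            // Source-style name with '.' fails: the loader looks for a
            // top-level class Inner in a package named BinaryNameDemo.
            Class.forName("BinaryNameDemo.Inner");
        } catch (ClassNotFoundException e) {
            System.out.println("source-style name not found, as expected");
        }
    }
}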
Regarding "hadoop - Unknown hadoop job associated with action. Failing this action", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58201810/