This program was written in Cloudera. Below is the driver class I created.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class WordCount2
{
    public static void main(String[] args) throws Exception
    {
        if (args.length < 2)
        {
            System.out.println("Enter the input and output paths correctly");
            System.exit(-1); // exit if the arguments are missing
        }
        Configuration conf = new Configuration();
        // Define the MapReduce job (this constructor is deprecated; see the aside below)
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "WordCount2");
        job.setJarByClass(WordCount2.class); // locate the jar by this class
        // Set input/output paths
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Set input/output formats (plain text in, plain text out)
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // Set Mapper and Reducer classes
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);
        // Set output key-value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Submit the job; exit 0 on success, 1 on failure
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
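Aside: the @SuppressWarnings("deprecation") above and the warning in the log further down ("Implement the Tool interface and execute your application with ToolRunner to remedy this") both point at the driver. A minimal, untested sketch of the same driver restructured around Tool/ToolRunner and the non-deprecated Job.getInstance; the class name WordCount2Tool is chosen here purely for illustration:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount2Tool extends Configured implements Tool
{
    @Override
    public int run(String[] args) throws Exception
    {
        if (args.length < 2)
        {
            System.err.println("Usage: WordCount2Tool <input path> <output path>");
            return -1;
        }
        // Job.getInstance replaces the deprecated new Job(conf, name) constructor
        Job job = Job.getInstance(getConf(), "WordCount2");
        job.setJarByClass(WordCount2Tool.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception
    {
        // ToolRunner parses generic Hadoop options (-D, -files, ...) before calling run()
        System.exit(ToolRunner.run(new WordCount2Tool(), args));
    }
}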
Below is the code for the Mapper class.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;
public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable>
{
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException
    {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens())
        {
            String word = tokenizer.nextToken();
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
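An optional aside, not required for correctness: the map method above allocates a fresh Text and IntWritable for every token. A common Hadoop idiom is to reuse one instance of each, since the framework serializes the objects at each write; a sketch of that variant:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable>
{
    // Reusable output objects; the framework copies them on each write,
    // so a single instance of each is sufficient
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException
    {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens())
        {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}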
//------------ Reducer class ------------
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable>
{
    public void reduce(Text key, Iterator<IntWritable> values, Context context)
            throws IOException, InterruptedException
    {
        int sum = 0;
        while (values.hasNext())
        {
            sum += values.next().get();
        }
        context.write(key, new IntWritable(sum));
    }
}
Below is the command-line log:
[cloudera@quickstart workspace]$ hadoop jar wordcount2.jar WordCount2 /user/training/soni.txt /user/training/sonioutput2
18/04/23 07:17:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/04/23 07:17:24 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/04/23 07:17:25 INFO input.FileInputFormat: Total input paths to process : 1
18/04/23 07:17:25 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:952)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:690)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:879)
18/04/23 07:17:26 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:952)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:690)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:879)
18/04/23 07:17:26 INFO mapreduce.JobSubmitter: number of splits:1
18/04/23 07:17:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523897572171_0005
18/04/23 07:17:27 INFO impl.YarnClientImpl: Submitted application application_1523897572171_0005
18/04/23 07:17:27 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1523897572171_0005/
18/04/23 07:17:27 INFO mapreduce.Job: Running job: job_1523897572171_0005
18/04/23 07:17:45 INFO mapreduce.Job: Job job_1523897572171_0005 running in uber mode : false
18/04/23 07:17:45 INFO mapreduce.Job: map 0% reduce 0%
18/04/23 07:18:01 INFO mapreduce.Job: map 100% reduce 0%
18/04/23 07:18:16 INFO mapreduce.Job: map 100% reduce 100%
18/04/23 07:18:17 INFO mapreduce.Job: Job job_1523897572171_0005 completed successfully
18/04/23 07:18:17 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=310
        FILE: Number of bytes written=251053
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=250
        HDFS: Number of bytes written=188
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=14346
        Total time spent by all reduces in occupied slots (ms)=12546
        Total time spent by all map tasks (ms)=14346
        Total time spent by all reduce tasks (ms)=12546
        Total vcore-milliseconds taken by all map tasks=14346
        Total vcore-milliseconds taken by all reduce tasks=12546
        Total megabyte-milliseconds taken by all map tasks=14690304
        Total megabyte-milliseconds taken by all reduce tasks=12847104
    Map-Reduce Framework
        Map input records=7
        Map output records=29
        Map output bytes=246
        Map output materialized bytes=310
        Input split bytes=119
        Combine input records=0
        Combine output records=0
        Reduce input groups=19
        Reduce shuffle bytes=310
        Reduce input records=29
        Reduce output records=29
        Spilled Records=58
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=1095
        CPU time spent (ms)=4680
        Physical memory (bytes) snapshot=407855104
        Virtual memory (bytes) snapshot=3016044544
        Total committed heap usage (bytes)=354553856
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=131
    File Output Format Counters
        Bytes Written=188
[cloudera@quickstart workspace]$
Below is the input data present in the input file soni.txt:
Hi How are you
I am fine
What about you
What are you doing these days
How is your job going
How is your family
My family is great
The following output was received in the part-r-00000 file:
family 1
family 1
fine 1
going 1
great 1
is 1
is 1
is 1
job 1
these 1
you 1
you 1
you 1
your 1
your 1
However, I don't think this is the correct output. It should give the exact word counts.
Best Answer
Your reduce method signature is wrong, so it is never called; the job instead runs the base Reducer implementation, which simply passes each key/value pair through unchanged, which is why your output matches the mapper's output. You need to override this method from the Reducer class:
protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context) throws IOException, InterruptedException;
It takes an Iterable, not an Iterator. (With an @Override annotation on your reduce method, the compiler would have flagged that it does not actually override anything.) Try this:
@Override
protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
        sum += value.get();
    }
    context.write(key, new IntWritable(sum));
}
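As a side note, once the reduce signature matches and the method is actually invoked, a reducer like this one can typically also be registered as a combiner, so counts are pre-aggregated on the map side; the Combine input records=0 / Combine output records=0 counters in the log above show that no combiner ran. This works here because the reducer's input and output types are identical (Text, IntWritable). An optional one-line addition to the driver:

// Optional: reuse WordReducer as a combiner to pre-aggregate counts map-side
job.setCombinerClass(WordReducer.class);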
Regarding "java - WordCount job in Cloudera succeeds but the reducer's output is the same as the mapper's output", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/49985930/