I am using the Hadoop MapReduce API for a simple task: computing the maximum temperature for a given year from a large input file of around 20 MB.
Everything runs fine: the Mapper tasks run, the Reducer tasks run, and the output file is generated correctly.
The problem is that I cannot see anything on the Hadoop UI pages: nothing in the Jobs tab, in the job progress, or even in the Job History.
Here is my WordCount Java file:
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class WordCount extends Configured implements Tool {

    /**
     * Main function which calls the run method and passes the args using ToolRunner
     * @param args Two arguments: input and output file paths
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new WordCount(), args);
        System.exit(exitCode);
    }

    /**
     * Run method which schedules the Hadoop Job
     * @param args Arguments passed in main function
     */
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s needs two arguments <input> <output> files\n",
                    getClass().getSimpleName());
            return -1;
        }

        // Initialize the Hadoop job and set the jar as well as the name of the Job
        Job job = new Job();
        job.setJarByClass(WordCount.class);
        job.setJobName("WordCounter");

        // Add input and output file paths to the job based on the arguments passed
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // Set the MapClass and ReduceClass in the job
        job.setMapperClass(MapClass.class);
        job.setReducerClass(ReduceClass.class);

        // Wait for the job to complete and print whether the job was successful
        int returnValue = job.waitForCompletion(true) ? 0 : 1;
        if (job.isSuccessful()) {
            System.out.println("Job was successful");
        } else {
            System.out.println("Job was not successful");
        }
        return returnValue;
    }
}
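The MapClass and ReduceClass referenced above are not included in the question. For reference, here is a minimal sketch of what they might look like for the max-temperature task, each in its own source file; the input layout (one "year<TAB>temperature" record per line) is an assumption for illustration, not taken from the question:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical Mapper: assumes each input line holds "year<TAB>temperature"
public class MapClass extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        if (fields.length == 2) {
            // Emit (year, temperature) so the reducer can pick the maximum per year
            context.write(new Text(fields[0]),
                    new IntWritable(Integer.parseInt(fields[1].trim())));
        }
    }
}

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical Reducer: keeps the maximum temperature seen for each year
public class ReduceClass extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        context.write(key, new IntWritable(max));
    }
}

These signatures line up with the job setup above: Text output keys, IntWritable output values, and no combiner.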
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/Users/bng/Documnents/hDir/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/Users/bng/Documnents/hDir/hdfs/data</value >
</property>
</configuration>
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost/</value>
</property>
<property>
<name>dfs.http.address</name>
<value>50070</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
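One note on the Job History part of the question: in Hadoop 2.x the history pages are served by a separate JobHistory server, which must be started (for example with mr-jobhistory-daemon.sh start historyserver) before anything shows up there. Its addresses are conventionally set in mapred-site.xml; the localhost values below are the usual pseudo-distributed choices, not taken from the question:

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>localhost:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>localhost:19888</value>
    </property>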
yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value>
</property>
</configuration>
0 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
202 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
203 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
402 [main] WARN org.apache.hadoop.mapreduce.JobSubmitter - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
408 [main] WARN org.apache.hadoop.mapreduce.JobSubmitter - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
419 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
471 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
593 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local1149743576_0001
764 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
766 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
766 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_local1149743576_0001
774 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
815 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
816 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1149743576_0001_m_000000_0
859 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.yarn.util.ProcfsBasedProcessTree - ProcfsBasedProcessTree currently is supported only on Linux.
860 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : null
865 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: file:/Users/bng/Downloads/hadoop-2.6.4/files/input.txt:0+366
995 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
995 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
995 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
998 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
998 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
1003 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
1010 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner -
1011 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
1011 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Spilling map output
1011 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 594; bufvoid = 104857600
1011 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214172(104856688); length = 225/6553600
1020 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
1024 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1149743576_0001_m_000000_0 is done. And is in the process of committing
1032 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - map
1032 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1149743576_0001_m_000000_0' done.
1033 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1149743576_0001_m_000000_0
1033 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
1035 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks
1035 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1149743576_0001_r_000000_0
1041 [pool-3-thread-1] INFO org.apache.hadoop.yarn.util.ProcfsBasedProcessTree - ProcfsBasedProcessTree currently is supported only on Linux.
1041 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : null
1044 [pool-3-thread-1] INFO org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6a57da8b
1058 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=1336252800, maxSingleShuffleLimit=334063200, mergeThreshold=881926912, ioSortFactor=10, memToMemMergeOutputsThreshold=10
1060 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - attempt_local1149743576_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
1092 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.LocalFetcher - localfetcher#1 about to shuffle output of map attempt_local1149743576_0001_m_000000_0 decomp: 710 len: 714 to MEMORY
1108 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput - Read 710 bytes from map-output for attempt_local1149743576_0001_m_000000_0
1141 [localfetcher#1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 710, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->710
1142 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning
1143 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
1143 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
1160 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
1160 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 702 bytes
1162 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 710 bytes to disk to satisfy reduce memory limit
1163 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 714 bytes from disk
1165 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
1165 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
1166 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 702 bytes
1167 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
1186 [pool-3-thread-1] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
1193 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1149743576_0001_r_000000_0 is done. And is in the process of committing
1195 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
1195 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Task - Task attempt_local1149743576_0001_r_000000_0 is allowed to commit now
1196 [pool-3-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local1149743576_0001_r_000000_0' to file:/Users/bng/Downloads/hadoop-2.6.4/files/output/_temporary/0/task_local1149743576_0001_r_000000
1197 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
1197 [pool-3-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1149743576_0001_r_000000_0' done.
1197 [pool-3-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1149743576_0001_r_000000_0
1197 [Thread-10] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
1772 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_local1149743576_0001 running in uber mode : false
1774 [main] INFO org.apache.hadoop.mapreduce.Job - map 100% reduce 100%
1775 [main] INFO org.apache.hadoop.mapreduce.Job - Job job_local1149743576_0001 completed successfully
1784 [main] INFO org.apache.hadoop.mapreduce.Job - Counters: 30
    File System Counters
        FILE: Number of bytes read=2542
        FILE: Number of bytes written=495530
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=1
        Map output records=57
        Map output bytes=594
        Map output materialized bytes=714
        Input split bytes=119
        Combine input records=0
        Combine output records=0
        Reduce input groups=47
        Reduce shuffle bytes=714
        Reduce input records=57
        Reduce output records=47
        Spilled Records=114
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=10
        Total committed heap usage (bytes)=468713472
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=366
    File Output Format Counters
        Bytes Written=430
Job was successful
Best Answer
The problem was the job configuration when running through STS/Eclipse. The log above already hints at this: the job ID is job_local1149743576_0001 and the work is handled by LocalJobRunner, i.e. the job ran with the local framework inside the IDE and was never submitted to YARN, which is why nothing appears in the ResourceManager UI or the Job History.
So the job configuration was added in the run method, configuring the YARN ResourceManager and defaultFS as shown below:
public int run(String[] args) throws Exception {
    if (args.length != 2) {
        System.err.printf("Usage: %s needs two arguments <input> <output> files\n",
                getClass().getSimpleName());
        return -1;
    }

    Configuration configuration = getConf();
    configuration.set("fs.defaultFS", "hdfs://172.20.12.168");
    configuration.set("mapreduce.jobtracker.address", "localhost:54311");
    configuration.set("mapreduce.framework.name", "yarn");
    configuration.set("yarn.resourcemanager.address", "127.0.0.1:8032");

    // Initialize the Hadoop job from the configuration above; a plain "new Job()"
    // would create a fresh Configuration and silently ignore these settings
    Job job = Job.getInstance(configuration);
    job.setJarByClass(WordCount.class);
    job.setJobName("WordCounter");

    // Add input and output file paths to the job based on the arguments passed
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    // Set the MapClass and ReduceClass in the job
    job.setMapperClass(MapClass.class);
    job.setReducerClass(ReduceClass.class);

    // Wait for the job to complete and print whether the job was successful
    int returnValue = job.waitForCompletion(true) ? 0 : 1;
    if (job.isSuccessful()) {
        System.out.println("Job was successful");
    } else {
        System.out.println("Job was not successful");
    }
    return returnValue;
}
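With mapreduce.framework.name set to yarn and the ResourceManager address pointing at the cluster, the submission goes through YARN instead of LocalJobRunner, so the job becomes visible in the ResourceManager web UI (by default on port 8088) and, once the JobHistory server is running, in the Job History as well. Since the class already implements Tool and runs through ToolRunner, the same properties could alternatively be passed on the command line via the generic -D property=value options instead of being hard-coded. The "No job jar file set" warning in the log is also worth addressing: packaging the classes into a jar and submitting it with the hadoop jar command ensures the user classes can be shipped to the cluster.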
Regarding "hadoop - Hadoop UI not showing the Jobs tab, job progress, and job history", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44045121/