After changing the properties in mapred-site.xml, I tried a tar.bz2 file, a .gz file, and a tar.gz file as input. None of these seemed to work. My assumption is that Hadoop reads the input records out of order: one column of the input is a string and the other is an integer, but because some data comes out of the compressed file out of order, at some point Hadoop reads the string part as an integer and throws a NumberFormatException. I'm just a newbie, so I'd like to know whether the problem is in my configuration or in my code.
The properties in core-site.xml are:
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  <description>A list of the compression codec classes that can be used for compression/decompression.</description>
</property>
The properties in mapred-site.xml are:
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>BLOCK</value>
</property>
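For reference, the job log further down warns that these mapred.* keys are deprecated on Hadoop 2.x; assuming a 2.x cluster, the equivalent non-deprecated keys (taken from those deprecation messages) would look like this:

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>BLOCK</value>
</property>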
Here is my code:
package org.myorg;

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.util.NativeCodeLoader;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Decompressor;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.io.compress.*;
import org.apache.hadoop.io.compress.BZip2Codec;

public class MySort {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable Marks = new IntWritable();
        private Text name = new Text();
        String one, two;
        int num;

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                one = tokenizer.nextToken();
                name.set(one);
                if (tokenizer.hasMoreTokens())
                    two = tokenizer.nextToken();
                num = Integer.parseInt(two);
                Marks.set(num);
                context.write(name, Marks);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // conf.set("mapreduce.job.inputformat.class", "com.wizecommerce.utils.mapred.TextInputFormat");
        // conf.set("mapreduce.job.outputformat.class", "com.wizecommerce.utils.mapred.TextOutputFormat");
        // conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setBoolean("mapred.output.compress", true);
        // conf.setBoolean("mapreduce.output.fileoutputformat.compress", false);
        // conf.setBoolean("mapreduce.map.output.compress", true);
        conf.set("mapred.output.compression.type", "BLOCK");
        // conf.setClass("mapreduce.map.output.compress.codec", BZip2Codec.class, CompressionCodec.class);
        // conf.setClass("mapred.map.output.compression.codec", GzipCodec.class, CompressionCodec.class);
        conf.setClass("mapred.map.output.compression.codec", BZip2Codec.class, CompressionCodec.class);

        Job job = new Job(conf, "mysort");
        job.setJarByClass(org.myorg.MySort.class);
        job.setJobName("mysort");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // FileInputFormat.setCompressInput(job, true);
        FileOutputFormat.setCompressOutput(job, true);
        // FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        // conf.set("mapred.output.compression.type", CompressionType.BLOCK.toString());
        FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
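As an aside, the map() above calls Integer.parseInt(two) even when a line has no second token (two then keeps its value from the previous line), and any non-numeric token kills the whole task. A more defensive variant of the Map class, as a sketch only (SafeMap is a hypothetical name, not part of the original job; it relies on the same imports as the file above):

// Hypothetical drop-in variant of Map: skips records that do not form a "name score" pair
public static class SafeMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable marks = new IntWritable();
    private final Text name = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            String first = tokenizer.nextToken();
            if (!tokenizer.hasMoreTokens()) {
                break;          // trailing token with no score to pair with: stop
            }
            String second = tokenizer.nextToken();
            try {
                marks.set(Integer.parseInt(second));
            } catch (NumberFormatException e) {
                continue;       // malformed record (e.g. stray archive-header bytes): skip it
            }
            name.set(first);
            context.write(name, marks);
        }
    }
}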
All of these commands are collected in a makefile:
run: all
	-sudo ./a.out
	sudo chmod 777 -R Data
	-sudo rm data.tar.bz2
	sudo tar -cvjf data.tar.bz2 Data/data.txt
	sudo javac -classpath /home/hduser/12115_Select_Query/hadoop-core-1.1.2.jar -d mysort MySort.java
	sudo jar -cvf mysort.jar -C mysort/ .
	-hadoop fs -rmr MySort/output
	-hadoop fs -rmr MySort/input
	hadoop fs -mkdir MySort/input
	hadoop fs -put data.tar.bz2 MySort/input
	hadoop jar mysort.jar org.myorg.MySort MySort/input/ MySort/output
	-sudo rm /home/hduser/Out/sort.txt
	hadoop fs -copyToLocal MySort/output/part-r-00000 /home/hduser/Out/sort.txt
	sudo gedit /home/hduser/Out/sort.txt

all: rdata.c
	-sudo rm a.out
	-gcc rdata.c -o a.out

exec: run

.PHONY: exec run
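One detail worth flagging in this makefile: tar -cvjf wraps data.txt in a tar archive before compressing it, and "ustar" (which appears in the failing token "0ustar" below) is the magic string found in every tar header. That suggests the BZip2 codec decompresses the stream fine, but the mapper then sees raw tar header bytes mixed into the text. Assuming the intent was simply a bzip2-compressed text file, the packing step could be replaced with:

# compress the text file directly, without a tar wrapper (-k keeps the original)
bzip2 -k Data/data.txt
hadoop fs -put Data/data.txt.bz2 MySort/input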
Command:
hadoop jar mysort.jar org.myorg.MySort MySort/input/ MySort/output
Here is the output:
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/25 11:20:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/25 11:20:28 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/06/25 11:20:29 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/06/25 11:20:29 INFO input.FileInputFormat: Total input paths to process : 1
14/06/25 11:20:29 INFO mapreduce.JobSubmitter: number of splits:1
14/06/25 11:20:29 INFO Configuration.deprecation: mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/06/25 11:20:29 INFO Configuration.deprecation: mapred.map.output.compression.codec is deprecated. Instead, use mapreduce.map.output.compress.codec
14/06/25 11:20:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1403675322820_0001
14/06/25 11:20:30 INFO impl.YarnClientImpl: Submitted application application_1403675322820_0001
14/06/25 11:20:30 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1403675322820_0001/
14/06/25 11:20:30 INFO mapreduce.Job: Running job: job_1403675322820_0001
14/06/25 11:20:52 INFO mapreduce.Job: Job job_1403675322820_0001 running in uber mode : false
14/06/25 11:20:52 INFO mapreduce.Job: map 0% reduce 0%
14/06/25 11:21:10 INFO mapreduce.Job: Task Id : attempt_1403675322820_0001_m_000000_0, Status : FAILED
Error: java.lang.NumberFormatException: For input string: "0ustar"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.myorg.MySort$Map.map(MySort.java:36)
at org.myorg.MySort$Map.map(MySort.java:23)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
14/06/25 11:21:29 INFO mapreduce.Job: Task Id : attempt_1403675322820_0001_m_000000_1, Status : FAILED
Error: java.lang.NumberFormatException: For input string: "0ustar"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.myorg.MySort$Map.map(MySort.java:36)
at org.myorg.MySort$Map.map(MySort.java:23)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
14/06/25 11:21:49 INFO mapreduce.Job: Task Id : attempt_1403675322820_0001_m_000000_2, Status : FAILED
Error: java.lang.NumberFormatException: For input string: "0ustar"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.myorg.MySort$Map.map(MySort.java:36)
at org.myorg.MySort$Map.map(MySort.java:23)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
14/06/25 11:22:10 INFO mapreduce.Job: map 100% reduce 100%
14/06/25 11:22:10 INFO mapreduce.Job: Job job_1403675322820_0001 failed with state FAILED due to: Task failed task_1403675322820_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/06/25 11:22:10 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=69797
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=69797
Total vcore-seconds taken by all map tasks=69797
Total megabyte-seconds taken by all map tasks=71472128
I also tried this:
hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.3.0.jar -Dmapred.output.compress=true -Dmapred.compress.map.output=true -Dmapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec -Dmapred.reduce.tasks=0 -input MySort/input/data.txt -output MySort/zip1
Creating the compressed files succeeded:
hadoop fs -ls MySort/zip1
Found 3 items
-rw-r--r-- 1 hduser supergroup 0 2014-06-25 10:43 MySort/zip1/_SUCCESS
-rw-r--r-- 1 hduser supergroup 42488018 2014-06-25 10:43 MySort/zip1/part-00000.bz2
-rw-r--r-- 1 hduser supergroup 42504084 2014-06-25 10:43 MySort/zip1/part-00001.bz2
and then ran this:
hadoop jar mysort.jar org.myorg.MySort MySort/input/ MySort/zip1
It still doesn't work. Am I missing something here?
When I skip the bz2 compression and pass the plain text file Data/data.txt directly, i.e. upload it to MySort/input in HDFS (hadoop fs -put Data/data.txt MySort/input), everything works fine.
Any help is appreciated.
Best Answer
I did some work on this and got it running. I used ToolRunner:
package org.myorg;

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.util.NativeCodeLoader;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Decompressor;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.io.compress.*;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ToolMapReduce extends Configured implements Tool {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable Marks = new IntWritable();
        private Text name = new Text();
        String one, two;
        int num;

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                one = tokenizer.nextToken();
                name.set(one);
                if (tokenizer.hasMoreTokens())
                    two = tokenizer.nextToken();
                num = Integer.parseInt(two);
                Marks.set(num);
                context.write(name, Marks);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new ToolMapReduce(), args);
        System.exit(res);
    }

    public int run(String[] args) throws Exception {
        Configuration conf = this.getConf();
        // Configuration conf = new Configuration();
        // conf.setOutputFormat(SequenceFileOutputFormat.class);
        // SequenceFileOutputFormat.setOutputCompressionType(conf, CompressionType.BLOCK);
        // SequenceFileOutputFormat.setCompressOutput(conf, true);
        // conf.set("mapred.output.compress", "true");
        // conf.set("mapred.output.compression", "org.apache.hadoop.io.compress.SnappyCodec");
        // conf.set("mapred.output.compression.codec", "org.apache.hadoop.io.compress.SnappyCodec");
        // conf.set("mapreduce.job.inputformat.class", "com.wizecommerce.utils.mapred.TextInputFormat");
        // conf.set("mapreduce.job.outputformat.class", "com.wizecommerce.utils.mapred.TextOutputFormat");
        // conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setBoolean("mapred.output.compress", true);
        // conf.setBoolean("mapreduce.output.fileoutputformat.compress", false);
        // conf.setBoolean("mapreduce.map.output.compress", true);
        conf.set("mapred.output.compression.type", "BLOCK");
        // conf.setClass("mapreduce.map.output.compress.codec", BZip2Codec.class, CompressionCodec.class);
        // conf.setClass("mapred.map.output.compression.codec", GzipCodec.class, CompressionCodec.class);
        conf.setClass("mapred.map.output.compression.codec", GzipCodec.class, CompressionCodec.class);

        Job job = new Job(conf, "mysort");
        job.setJarByClass(org.myorg.ToolMapReduce.class);
        // job.setJarByClass(org.myorg.MySort.class);
        job.setJobName("mysort");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // FileInputFormat.setCompressInput(job, true);
        FileOutputFormat.setCompressOutput(job, true);
        // FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        // conf.set("mapred.output.compression.type", CompressionType.BLOCK.toString());
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
        // job.waitForCompletion(true);
    }
}
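Since the job now goes through ToolRunner, the generic options parser also picks up -D options from the command line, so compression settings can be overridden without recompiling. A sketch, assuming the jar name from the question's makefile:

hadoop jar mysort.jar org.myorg.ToolMapReduce \
    -Dmapreduce.output.fileoutputformat.compress=true \
    -Dmapreduce.map.output.compress=true \
    MySort/input MySort/output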
A similar question about "hadoop - running Hadoop with compressed files as input; Hadoop reads the input data out of order; NumberFormatException" can be found on Stack Overflow: https://stackoverflow.com/questions/24401674/