I am trying to load data from a local directory into Pig with the following commands:
records = LOAD '/home/hduser/Downloads/1901' AS (year:chararray, temperature:int, quality:int);
DUMP records;
This works fine.
But when I try the next commands:
filtered_records = FILTER records BY temperature != 9999 AND (quality == 0 OR quality == 1 OR quality == 4 OR quality == 5 OR quality == 9);
DUMP filtered_records;
they produce the following messages from several threads:
2015-05-05 19:59:50,998 [main] INFO org.apache.pig.impl.logicalLayer.optimizer.PruneColumns - No column pruned for records
2015-05-05 19:59:50,999 [main] INFO org.apache.pig.impl.logicalLayer.optimizer.PruneColumns - No map keys pruned for records
2015-05-05 19:59:51,102 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-05-05 19:59:51,294 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - (Name: Store(file:/tmp/temp138869802/tmp-1519406150:org.apache.pig.builtin.BinStorage) - 1-90 Operator Key: 1-90)
2015-05-05 19:59:51,380 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-05-05 19:59:51,393 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-05-05 19:59:51,457 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:51,460 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:51,465 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-05-05 19:59:52,893 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2015-05-05 19:59:53,042 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,043 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2015-05-05 19:59:53,047 [Thread-4] WARN org.apache.hadoop.mapred.JobClient - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2015-05-05 19:59:53,259 [Thread-4] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,323 [Thread-4] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,377 [Thread-4] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2015-05-05 19:59:53,385 [Thread-4] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2015-05-05 19:59:53,549 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-05-05 19:59:53,860 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,869 [Thread-13] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2015-05-05 19:59:53,870 [Thread-13] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2015-05-05 19:59:53,953 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,957 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:53,996 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:54,024 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:54,323 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local_0001
2015-05-05 19:59:54,893 [Thread-13] INFO org.apache.hadoop.mapred.TaskRunner - Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2015-05-05 19:59:54,901 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:54,901 [Thread-13] INFO org.apache.hadoop.mapred.LocalJobRunner -
2015-05-05 19:59:54,902 [Thread-13] INFO org.apache.hadoop.mapred.TaskRunner - Task attempt_local_0001_m_000000_0 is allowed to commit now
2015-05-05 19:59:54,907 [Thread-13] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:54,957 [Thread-13] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local_0001_m_000000_0' to file:/tmp/temp138869802/tmp-1519406150
2015-05-05 19:59:54,958 [Thread-13] INFO org.apache.hadoop.mapred.LocalJobRunner -
2015-05-05 19:59:54,958 [Thread-13] INFO org.apache.hadoop.mapred.TaskRunner - Task 'attempt_local_0001_m_000000_0' done.
2015-05-05 19:59:58,829 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-05-05 19:59:58,830 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Successfully stored result in: "file:/tmp/temp138869802/tmp-1519406150"
2015-05-05 19:59:58,833 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Records written : 0
2015-05-05 19:59:58,837 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Bytes written : 0
2015-05-05 19:59:58,837 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Spillable Memory Manager spill count : 0
2015-05-05 19:59:58,837 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Proactive spill count : 0
2015-05-05 19:59:58,837 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2015-05-05 19:59:58,920 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2015-05-05 19:59:58,930 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2015-05-05 19:59:58,933 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
Best answer
I think you should define your loader explicitly, for example:
records = LOAD '/home/hduser/Downloads/1901' USING PigStorage(',') AS (year:chararray, temperature:int, quality:int);
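One clue in the log above is the line "Records written : 0": every record was filtered out. A likely cause is that the default loader (PigStorage with a tab delimiter) never split the input lines, so temperature and quality came out as null, and comparisons against null never pass a FILTER. A minimal Python sketch of this behavior, using a hypothetical comma-separated input line since the actual 1901 file contents are not shown:

```python
def pigstorage_parse(line, delimiter, schema):
    """Mimic PigStorage: split the line on the delimiter and cast each
    field per the schema. A missing or unparseable field becomes None,
    the way Pig represents a null field."""
    parts = line.rstrip("\n").split(delimiter)
    record = []
    for i, (name, cast) in enumerate(schema):
        raw = parts[i] if i < len(parts) else None
        if raw is None:
            record.append(None)  # field missing entirely -> null
        else:
            try:
                record.append(cast(raw))
            except ValueError:
                record.append(None)  # cast failed -> null

    return record

schema = [("year", str), ("temperature", int), ("quality", int)]
line = "1901,9999,0"  # hypothetical input line

# Wrong delimiter (tab, PigStorage's default): the line is one unsplit
# chunk, so temperature and quality are null.
wrong = pigstorage_parse(line, "\t", schema)
print(wrong)  # ['1901,9999,0', None, None]

# Right delimiter: all three fields are recovered.
right = pigstorage_parse(line, ",", schema)
print(right)  # ['1901', 9999, 0]
```

In Pig, a comparison such as temperature != 9999 evaluates to null when temperature is null, so such rows never satisfy the FILTER condition, which would explain zero records being written even though the job itself succeeds.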
Regarding "hadoop - Pig running with threads", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/30056170/