hadoop - org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader won't load a file in Combined Log Format

Reposted. Author: 行者123. Updated: 2023-12-02 21:27:34

I have an Apache combined-format log file stored in HDFS. Here is a sample of its first few lines:

123.125.67.216 - - [02/Jan/2012:00:48:27 -0800] "GET /wiki/Dekart HTTP/1.1" 200 4512 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2" 
209.85.238.130 - - [02/Jan/2012:00:48:27 -0800] "GET /w/index.php?title=Special:RecentChanges&feed=atom HTTP/1.1" 304 260 "-" "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=11568779694056348047)"
123.125.67.213 - - [02/Jan/2012:00:48:33 -0800] "GET / HTTP/1.1" 301 433 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"
123.125.67.214 - - [02/Jan/2012:00:48:33 -0800] "GET /wiki/Main_Page HTTP/1.1" 200 8647 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"
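
For reference, the Combined Log Format fields that the loader's schema names (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent) can be sketched with a regular expression. This is a minimal Python sketch of the format, not the regex CombinedLogLoader actually uses internally:

```python
import re

# Rough sketch of Apache's Combined Log Format (illustrative, NOT
# CombinedLogLoader's own pattern).
COMBINED = re.compile(
    r'^(\S+) (\S+) (\S+) \[([^\]]+)\] '
    r'"(\S+) (\S+) (\S+)" (\d{3}) (\S+) '
    r'"([^"]*)" "([^"]*)"$'
)

line = ('123.125.67.213 - - [02/Jan/2012:00:48:33 -0800] '
        '"GET / HTTP/1.1" 301 433 "-" '
        '"Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"')

m = COMBINED.match(line)
# Map the captured groups onto the same field names used in the Pig schema.
fields = dict(zip(
    ['remoteAddr', 'remoteLogname', 'user', 'time', 'method',
     'uri', 'proto', 'status', 'bytes', 'referer', 'userAgent'],
    m.groups()))
print(fields['remoteAddr'], fields['status'], fields['uri'])
```

Note that the pattern is anchored at both ends of the line, which matters for the failure described below.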

I am trying to load this file with Apache Pig using the CombinedLogLoader from piggybank, which should work. Here is my sample code:
grunt> raw = LOAD 'log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
16/02/15 21:39:38 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
grunt> dump raw;

I get 0 records, even though the file contains thousands of lines.

Below is my complete output. What am I doing wrong?
16/02/15 21:39:40 INFO pigstats.ScriptState: Pig features used in the script: UNKNOWN
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/02/15 21:39:40 WARN data.SchemaTupleBackend: SchemaTupleBackend has already been initialized
16/02/15 21:39:40 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
16/02/15 21:39:40 INFO mapReduceLayer.MRCompiler: File concatenation threshold: 100 optimistic? false
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size before optimization: 1
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size after optimization: 1
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:40 INFO mapreduce.MRScriptState: Pig script settings are added to the job
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: This job cannot be converted run in-process
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/piggybank.jar to DistributedCache through /tmp/temp2003065886/tmp2039083441/piggybank.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/pig-0.14.0-amzn-0-core-h2.jar to DistributedCache through /tmp/temp2003065886/tmp551968774/pig-0.14.0-amzn-0-core-h2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp2003065886/tmp710362688/automaton-1.11-8.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp2003065886/tmp-1076004022/antlr-runtime-3.4.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp2003065886/tmp1810740836/guava-11.0.2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar to DistributedCache through /tmp/temp2003065886/tmp-1238145114/joda-time-2.8.1.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Key [pig.schematuple] is false, will not generate code.
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Starting process to move generated code to distributed cacche
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Setting key [pig.schematuple.classes] with classes to deserialize []
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:40 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/02/15 21:39:40 INFO input.FileInputFormat: Total input paths to process : 1
16/02/15 21:39:40 INFO util.MapRedUtil: Total input paths (combined) to process : 1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: number of splits:1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1455560055771_0007
16/02/15 21:39:40 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/02/15 21:39:40 INFO impl.YarnClientImpl: Submitted application application_1455560055771_0007
16/02/15 21:39:40 INFO mapreduce.Job: The url to track the job: http://ip-172-31-42-90.ec2.internal:20888/proxy/application_1455560055771_0007/
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_1455560055771_0007
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Processing aliases raw
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: detailed locations: M: raw[2,6],null[-1,-1] C: R:
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 0% complete
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: 50% complete
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: 100% complete
16/02/15 21:39:56 INFO mapreduce.SimplePigStats: Script Statistics:

HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.1-amzn-0 0.14.0-amzn-0 hadoop 2016-02-15 21:39:40 2016-02-15 21:39:56 UNKNOWN

Success!

Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1455560055771_0007 1 0 5 5 5 5 0 0 0 0 raw MAP_ONLY hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276,

Input(s):
Successfully read 0 records (10040153 bytes) from: "hdfs://ip-172-31-42-90.ec2.internal:8020/user/hadoop/log"

Output(s):
Successfully stored 0 records in: "hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1455560055771_0007


16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: Success!
16/02/15 21:39:56 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/02/15 21:39:56 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
16/02/15 21:39:56 INFO input.FileInputFormat: Total input paths to process : 1
16/02/15 21:39:56 INFO util.MapRedUtil: Total input paths to process : 1
grunt>

Best Answer

Make sure your log file is formatted correctly. I noticed that a line in your log file ends with a trailing space. Remove that space and run the same script again.
I ran your script myself; the results are below.
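
The failure mode is easy to reproduce outside of Pig. A minimal Python sketch, using an end-anchored combined-log regex as an assumption about why the loader rejects the line (it is not CombinedLogLoader's actual pattern), shows that a single trailing space is enough to make the match fail:

```python
import re

# Illustrative end-anchored combined-log pattern; a trailing space after the
# closing quote of the user agent prevents '"$' from matching.
STRICT = re.compile(
    r'^(\S+) (\S+) (\S+) \[[^\]]+\] "[^"]*" \d{3} \S+ "[^"]*" "[^"]*"$')

good = ('123.125.67.216 - - [02/Jan/2012:00:48:27 -0800] '
        '"GET /wiki/Dekart HTTP/1.1" 200 4512 "-" '
        '"Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"')
bad = good + ' '  # the stray trailing space from the question's file

print(STRICT.match(bad))           # no match: the trailing space breaks it
print(STRICT.match(bad.rstrip()))  # matches once the space is stripped
```

This is consistent with the symptom in the question: the job succeeds but every line fails to parse, so 0 records are read.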

Script:

grunt> raw = LOAD '~/temp1.log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
grunt> dump raw;

Output:
Input(s):
Successfully read 4 records (1090 bytes) from: "~/temp1.log"

Output(s):
Successfully stored 4 records (753 bytes) in: "hdfs://sandbox.hortonworks.com:8020/tmp/temp-53432852/tmp1982058168"

Counters:
Total records written : 4
Total bytes written : 753
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1522732012453_0054


2018-04-03 10:44:33,436 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,436 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,437 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,467 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,635 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,797 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,797 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,799 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,813 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,929 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2018-04-03 10:44:33,939 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2018-04-03 10:44:33,955 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2018-04-03 10:44:33,955 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1

(123.125.67.216,-,-,02/Jan/2012:00:48:27 -0800,GET,/wiki/Dekart,HTTP/1.1,200,4512,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
(209.85.238.130,-,-,02/Jan/2012:00:48:27 -0800,GET,/w/index.php?title=Special:RecentChanges&feed=atom,HTTP/1.1,304,260,-,Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=11568779694056348047))
(123.125.67.213,-,-,02/Jan/2012:00:48:33 -0800,GET,/,HTTP/1.1,301,433,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
(123.125.67.214,-,-,02/Jan/2012:00:48:33 -0800,GET,/wiki/Main_Page,HTTP/1.1,200,8647,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
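
If editing the file by hand is impractical, one way to fix it is to strip trailing whitespace from every line before re-uploading to HDFS. A minimal Python sketch (the truncated user agent in the sample input is illustrative only):

```python
def strip_trailing_whitespace(lines):
    """Return each line with trailing spaces/tabs removed and the newline restored."""
    return [line.rstrip() + '\n' for line in lines]

# Illustrative input: a combined-log line ending in a stray space,
# like the first line of the question's file.
dirty = ['123.125.67.216 - - [02/Jan/2012:00:48:27 -0800] '
         '"GET /wiki/Dekart HTTP/1.1" 200 4512 "-" "Mozilla/5.0" \n']
clean = strip_trailing_whitespace(dirty)
```

After cleaning the local copy, re-upload it (e.g. with `hdfs dfs -put`) and rerun the same LOAD and dump.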

Regarding "hadoop - org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader won't load a file in Combined Log Format", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35419646/
