I'm trying to run a very simple Pig script and keep running into complications.
The script:
log = LOAD 'C:/Users/malanio/Documents/test.log' USING PigStorage(',') AS (user:chararray, some:long, some2:chararray);
DUMP log;
The file I'm loading:
ravi,1,1
I get the following error:
C:\Users\malanio\Documents>pig -x local testrun.pig
2014-06-12 14:46:22,939 [main] INFO org.apache.pig.Main - Apache Pig version 0.12.1 (r1585011) compiled Apr 05 2014, 01:41:34
2014-06-12 14:46:22,940 [main] INFO org.apache.pig.Main - Logging error messages to: C:\hadoop-2.4.0\logs\pig_1402598782937.log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/pig-0.12.1/pig-0.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2014-06-12 14:46:23,616 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file C:\Users\malanio/.pigbootup not found
2014-06-12 14:46:23,702 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-06-12 14:46:23,702 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2014-06-12 14:46:23,704 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2014-06-12 14:46:24,275 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2014-06-12 14:46:24,317 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
2014-06-12 14:46:24,470 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2014-06-12 14:46:24,501 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2014-06-12 14:46:24,501 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2014-06-12 14:46:24,526 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-06-12 14:46:24,527 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-06-12 14:46:24,551 [main] WARN org.apache.pig.backend.hadoop20.PigJobControl - falling back to default JobControl (not using hadoop 0.20 ?)
java.lang.NoSuchFieldException: runnerState
at java.lang.Class.getDeclaredField(Class.java:1948)
at org.apache.pig.backend.hadoop20.PigJobControl.<clinit>(PigJobControl.java:51)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.newJobControl(HadoopShims.java:98)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:289)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:191)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1324)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1309)
at org.apache.pig.PigServer.storeEx(PigServer.java:980)
at org.apache.pig.PigServer.store(PigServer.java:944)
at org.apache.pig.PigServer.openIterator(PigServer.java:857)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:774)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:607)
at org.apache.pig.Main.main(Main.java:156)
2014-06-12 14:46:24,569 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2014-06-12 14:46:24,579 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2014-06-12 14:46:24,581 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2014-06-12 14:46:24,584 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2014-06-12 14:46:24,625 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2014-06-12 14:46:24,640 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2014-06-12 14:46:24,642 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
2014-06-12 14:46:24,645 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: C:\Users\malanio\AppData\Local\Temp\1402598784640-0
2014-06-12 14:46:24,688 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2014-06-12 14:46:24,693 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
2014-06-12 14:46:24,704 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2014-06-12 14:46:24,714 [JobControl] ERROR org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl - Error while trying to run jobs.
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
2014-06-12 14:46:24,753 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2014-06-12 14:46:24,764 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2014-06-12 14:46:24,767 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2014-06-12 14:46:24,771 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2014-06-12 14:46:24,783 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backend error: Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
2014-06-12 14:46:24,821 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2014-06-12 14:46:24,824 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2014-06-12 14:46:24,831 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.4.0 0.12.1 malanio 2014-06-12 14:46:24 2014-06-12 14:46:24 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
N/A log MAP_ONLY Message: Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
file:/tmp/temp-590289635/tmp-804647280,
Input(s):
Failed to read data from "C:/Users/malanio/Documents/test.log"
Output(s):
Failed to produce result in "file:/tmp/temp-590289635/tmp-804647280"
Job DAG:
null
2014-06-12 14:46:24,939 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2014-06-12 14:46:24,952 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias log
Details at logfile: C:\hadoop-2.4.0\logs\pig_1402598782937.log
When I comment out the DUMP line, there are no issues; it's only when the script tries to dump the data that it runs into trouble. I'm running the script locally on the latest Hadoop (2.4.0) and the latest Pig (0.12.1). I'm still new to Pig, so there may be a simple explanation for this, but I can't seem to decipher the error. I suspect it has something to do with the Pig jar and its API. Any suggestions?
Best Answer
When I comment out the DUMP line, there are no issues. It's only when the script is trying to dump the data that it runs into complications.
Statements in a Pig script are not executed unless they are needed for the final result/output. That is why there is no problem without the 'DUMP' statement. This is similar to lazy evaluation, although not identical.
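As a rough sketch of that point (the FILTER step and output path below are hypothetical, not taken from the original script), statements before an output operator only build a logical plan; Pig compiles and submits the job, and therefore hits the IncompatibleClassChangeError seen in the log, only once a DUMP or STORE is reached:

-- Building the plan: no job is submitted for these statements alone.
log      = LOAD 'test.log' USING PigStorage(',') AS (user:chararray, some:long, some2:chararray);
filtered = FILTER log BY some > 0;   -- hypothetical step, still lazy

-- An output operator forces execution: the plan is compiled into a
-- MapReduce job and submitted, which is where the failing code path
-- (PigOutputFormat -> JobSubmitter) in the log above is entered.
DUMP filtered;
-- STORE filtered INTO 'out' USING PigStorage(',');  -- would trigger execution the same way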
For this question (hadoop - Pig error occurs when trying to dump a table), a similar question was found on Stack Overflow: https://stackoverflow.com/questions/24192021/