I just installed Hive on my Hadoop cluster and loaded my data into a Hive table. When I issue select * it works fine, but when I issue
select * from table where column1 in (select max(column1) from table);
it hangs. Please help me.
Here is my Hive log:
2017-02-17 07:42:28,116 INFO [main]: SessionState (SessionState.java:printInfo(951)) -
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
2017-02-17 07:42:28,438 WARN [main]: util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-02-17 07:42:28,560 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(589)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2017-02-17 07:42:28,710 INFO [main]: metastore.ObjectStore (ObjectStore.java:initialize(289)) - ObjectStore, initialize called
2017-02-17 07:42:30,831 INFO [main]: metastore.ObjectStore (ObjectStore.java:getPMF(370)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2017-02-17 07:42:33,354 INFO [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(139)) - Using direct SQL, underlying DB is DERBY
.....
2017-02-17 07:43:04,861 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:04,927 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:04,953 INFO [main]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select consume_date,hour_id,fromdate,company_name,b03 from consumes where b03 in (select max(b03) from consumes)
2017-02-17 07:43:05,527 INFO [main]: parse.ParseDriver (ParseDriver.java:parse(209)) - Parse Completed
2017-02-17 07:43:05,528 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=parse start=1487346184927 end=1487346185528 duration=601 from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:05,530 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:05,576 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10127)) - Starting Semantic Analysis
2017-02-17 07:43:05,579 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10074)) - Completed phase 1 of Semantic Analysis
2017-02-17 07:43:05,579 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1552)) - Get metadata for source tables
2017-02-17 07:43:05,579 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_table : db=default tbl=consumes
2017-02-17 07:43:05,580 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=linux ip=unknown-ip-addr cmd=get_table : db=default tbl=consumes
2017-02-17 07:43:06,076 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1704)) - Get metadata for subqueries
2017-02-17 07:43:06,092 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1728)) - Get metadata for destination tables
2017-02-17 07:43:06,096 ERROR [main]: hdfs.KeyProviderCache (KeyProviderCache.java:createKeyProviderURI(87)) - Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
2017-02-17 07:43:06,129 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:06,131 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10078)) - Completed getting MetaData in Semantic Analysis
2017-02-17 07:43:06,252 INFO [main]: parse.BaseSemanticAnalyzer (CalcitePlanner.java:canCBOHandleAst(388)) - Not invoking CBO because the statement has too few joins
2017-02-17 07:43:06,450 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1552)) - Get metadata for source tables
2017-02-17 07:43:06,451 INFO [main]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 0: get_table : db=default tbl=consumes
2017-02-17 07:43:06,454 INFO [main]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=linux ip=unknown-ip-addr cmd=get_table : db=default tbl=consumes
2017-02-17 07:43:06,488 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1704)) - Get metadata for subqueries
2017-02-17 07:43:06,488 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1728)) - Get metadata for destination tables
2017-02-17 07:43:06,631 INFO [main]: common.FileUtils (FileUtils.java:mkdir(501)) - Creating directory if it doesn't exist: hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10000/.hive-staging_hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:06,759 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genFileSinkPlan(6653)) - Set stats collection dir : hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10000/.hive-staging_hive_2017-02-17_07-43-04_926_1561320960043112851-1/-ext-10002
2017-02-17 07:43:06,839 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for FS(16)
2017-02-17 07:43:06,840 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(15)
2017-02-17 07:43:06,841 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(457)) - Processing for JOIN(13)
2017-02-17 07:43:06,841 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for RS(10)
2017-02-17 07:43:06,841 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(402)) - Processing for FIL(9)
2017-02-17 07:43:06,846 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of FIL For Alias : consumes
2017-02-17 07:43:06,846 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - b03 is not null
2017-02-17 07:43:06,847 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(382)) - Processing for TS(0)
2017-02-17 07:43:06,847 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of TS For Alias : consumes
2017-02-17 07:43:06,847 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - b03 is not null
2017-02-17 07:43:06,849 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for RS(12)
2017-02-17 07:43:06,849 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(402)) - Processing for FIL(11)
2017-02-17 07:43:06,850 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of FIL For Alias :
2017-02-17 07:43:06,850 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - _col0 is not null
2017-02-17 07:43:06,850 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(8)
2017-02-17 07:43:06,851 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of GBY For Alias :
2017-02-17 07:43:06,851 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - _col0 is not null
2017-02-17 07:43:06,851 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(7)
2017-02-17 07:43:06,851 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of SEL For Alias : sq_1
2017-02-17 07:43:06,851 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - _col0 is not null
2017-02-17 07:43:06,852 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(6)
2017-02-17 07:43:06,852 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(707)) - Pushdown Predicates of SEL For Alias : sq_1
2017-02-17 07:43:06,852 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:logExpr(710)) - _col0 is not null
2017-02-17 07:43:06,852 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(5)
2017-02-17 07:43:06,853 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for RS(4)
2017-02-17 07:43:06,853 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(3)
2017-02-17 07:43:06,853 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(2)
2017-02-17 07:43:06,853 INFO [main]: ppd.OpProcFactory (OpProcFactory.java:process(382)) - Processing for TS(1)
2017-02-17 07:43:06,863 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2017-02-17 07:43:06,863 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=partition-retrieving start=1487346186863 end=1487346186863 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2017-02-17 07:43:06,880 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneJoinOperator(975)) - JOIN 13 oldExprs: {0=[Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[VALUE._col3], Column[VALUE._col4], Column[VALUE._col5], Column[KEY.reducesinkkey0], Column[VALUE._col6], Column[VALUE._col7], Column[VALUE._col8], Column[VALUE._col9], Column[VALUE._col10], Column[VALUE._col11], Column[VALUE._col12]], 1=[]}
2017-02-17 07:43:06,880 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneJoinOperator(1080)) - JOIN 13 newExprs: {0=[Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[VALUE._col5], Column[KEY.reducesinkkey0]], 1=[]}
2017-02-17 07:43:06,881 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(817)) - RS 10 oldColExprMap: {VALUE._col10=Column[BLOCK__OFFSET__INSIDE__FILE], VALUE._col11=Column[INPUT__FILE__NAME], VALUE._col12=Column[ROW__ID], KEY.reducesinkkey0=Column[b03], VALUE._col2=Column[fromdate], VALUE._col3=Column[todate], VALUE._col4=Column[company_code], VALUE._col5=Column[company_name], VALUE._col0=Column[consume_date], VALUE._col1=Column[hour_id], VALUE._col6=Column[b04], VALUE._col7=Column[b27], VALUE._col8=Column[b31], VALUE._col9=Column[b32]}
2017-02-17 07:43:06,881 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(866)) - RS 10 newColExprMap: {KEY.reducesinkkey0=Column[b03], VALUE._col2=Column[fromdate], VALUE._col5=Column[company_name], VALUE._col0=Column[consume_date], VALUE._col1=Column[hour_id]}
2017-02-17 07:43:06,881 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(817)) - RS 12 oldColExprMap: {KEY.reducesinkkey0=Column[_col0]}
2017-02-17 07:43:06,881 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(866)) - RS 12 newColExprMap: {KEY.reducesinkkey0=Column[_col0]}
2017-02-17 07:43:06,883 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(817)) - RS 4 oldColExprMap: {VALUE._col0=Column[_col0]}
2017-02-17 07:43:06,883 INFO [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(866)) - RS 4 newColExprMap: {VALUE._col0=Column[_col0]}
2017-02-17 07:43:06,948 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:06,956 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:06,984 INFO [main]: exec.Utilities (Utilities.java:run(2615)) - Cannot get size of hdfs://hadoopmaster:9000/user/hive/warehouse/consumes. Safely ignored.
2017-02-17 07:43:06,987 INFO [main]: exec.Utilities (Utilities.java:run(2615)) - Cannot get size of hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10003. Safely ignored.
2017-02-17 07:43:06,988 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=getInputSummary start=1487346186956 end=1487346186988 duration=32 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:06,990 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=clonePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,123 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,123 INFO [main]: exec.Utilities (Utilities.java:serializePlan(938)) - Serializing MapredWork via kryo
2017-02-17 07:43:07,321 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1487346187123 end=1487346187321 duration=198 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,321 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,321 INFO [main]: exec.Utilities (Utilities.java:deserializePlan(965)) - Deserializing MapredWork via kryo
2017-02-17 07:43:07,387 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=deserializePlan start=1487346187321 end=1487346187387 duration=66 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,387 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=clonePlan start=1487346186990 end=1487346187387 duration=397 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,400 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:07,401 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:07,406 INFO [main]: physical.LocalMapJoinProcFactory (LocalMapJoinProcFactory.java:process(139)) - Setting max memory usage to 0.9 for table sink not followed by group by
2017-02-17 07:43:07,447 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2017-02-17 07:43:07,451 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2017-02-17 07:43:07,452 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2017-02-17 07:43:07,452 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2017-02-17 07:43:07,453 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2017-02-17 07:43:07,453 INFO [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2017-02-17 07:43:07,473 INFO [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10213)) - Completed plan generation
2017-02-17 07:43:07,473 INFO [main]: ql.Driver (Driver.java:compile(436)) - Semantic Analysis Completed
2017-02-17 07:43:07,473 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=semanticAnalyze start=1487346185530 end=1487346187473 duration=1943 from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,519 INFO [main]: exec.ListSinkOperator (Operator.java:initialize(332)) - Initializing operator OP[32]
2017-02-17 07:43:07,521 INFO [main]: exec.ListSinkOperator (Operator.java:initialize(372)) - Initialization Done 32 OP
2017-02-17 07:43:07,521 INFO [main]: exec.ListSinkOperator (Operator.java:initializeChildren(429)) - Operator 32 OP initialized
2017-02-17 07:43:07,529 INFO [main]: ql.Driver (Driver.java:getSchema(240)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:consume_date, type:string, comment:null), FieldSchema(name:hour_id, type:int, comment:null), FieldSchema(name:fromdate, type:string, comment:null), FieldSchema(name:company_name, type:string, comment:null), FieldSchema(name:b03, type:decimal(18,8), comment:null)], properties:null)
2017-02-17 07:43:07,529 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=compile start=1487346184861 end=1487346187529 duration=2668 from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,530 INFO [main]: ql.Driver (Driver.java:checkConcurrency(160)) - Concurrency mode is disabled, not creating a lock manager
2017-02-17 07:43:07,530 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,530 INFO [main]: ql.Driver (Driver.java:execute(1328)) - Starting command(queryId=linux_20170217074304_4798207d-cb6e-4a87-8292-3baebe3907d4): select consume_date,hour_id,fromdate,company_name,b03 from consumes where b03 in (select max(b03) from consumes)
2017-02-17 07:43:07,531 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - Query ID = linux_20170217074304_4798207d-cb6e-4a87-8292-3baebe3907d4
2017-02-17 07:43:07,531 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - Total jobs = 3
2017-02-17 07:43:07,534 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=TimeToSubmit start=1487346184861 end=1487346187534 duration=2673 from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,534 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,534 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=task.MAPRED.Stage-2 from=org.apache.hadoop.hive.ql.Driver>
2017-02-17 07:43:07,552 INFO [main]: ql.Driver (SessionState.java:printInfo(951)) - Launching Job 1 out of 3
2017-02-17 07:43:07,554 INFO [main]: ql.Driver (Driver.java:launchTask(1651)) - Starting task [Stage-2:MAPRED] in serial mode
2017-02-17 07:43:07,555 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - Number of reduce tasks determined at compile time: 1
2017-02-17 07:43:07,555 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - In order to change the average load for a reducer (in bytes):
2017-02-17 07:43:07,555 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - set hive.exec.reducers.bytes.per.reducer=<number>
2017-02-17 07:43:07,556 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - In order to limit the maximum number of reducers:
2017-02-17 07:43:07,562 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - set hive.exec.reducers.max=<number>
2017-02-17 07:43:07,565 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - In order to set a constant number of reducers:
2017-02-17 07:43:07,567 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - set mapreduce.job.reduces=<number>
2017-02-17 07:43:07,568 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:07,575 INFO [main]: mr.ExecDriver (ExecDriver.java:execute(288)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2017-02-17 07:43:07,577 INFO [main]: exec.Utilities (Utilities.java:getInputPaths(3397)) - Processing alias sq_1:consumes
2017-02-17 07:43:07,580 INFO [main]: exec.Utilities (Utilities.java:getInputPaths(3414)) - Adding input file hdfs://hadoopmaster:9000/user/hive/warehouse/consumes
2017-02-17 07:43:07,580 INFO [main]: exec.Utilities (Utilities.java:isEmptyPath(2698)) - Content Summary not cached for hdfs://hadoopmaster:9000/user/hive/warehouse/consumes
2017-02-17 07:43:07,651 INFO [main]: exec.Utilities (Utilities.java:createDummyFileForEmptyPartition(3497)) - Changed input file hdfs://hadoopmaster:9000/user/hive/warehouse/consumes to empty file hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10006/0
2017-02-17 07:43:07,651 INFO [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1
2017-02-17 07:43:07,665 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:07,666 INFO [main]: exec.Utilities (Utilities.java:serializePlan(938)) - Serializing MapWork via kryo
2017-02-17 07:43:08,663 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1487346187665 end=1487346188663 duration=998 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:08,669 INFO [main]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1173)) - mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
2017-02-17 07:43:08,702 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:08,703 INFO [main]: exec.Utilities (Utilities.java:serializePlan(938)) - Serializing ReduceWork via kryo
2017-02-17 07:43:08,745 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1487346188702 end=1487346188745 duration=43 from=org.apache.hadoop.hive.ql.exec.Utilities>
2017-02-17 07:43:08,747 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(400)) - yarn
2017-02-17 07:43:08,836 INFO [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopmaster/192.168.23.132:8050
2017-02-17 07:43:09,138 INFO [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopmaster/192.168.23.132:8050
2017-02-17 07:43:09,146 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10007/a922e0ae-c541-4b92-8f9d-088bde0d1475/map.xml
2017-02-17 07:43:09,147 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10007/a922e0ae-c541-4b92-8f9d-088bde0d1475/reduce.xml
2017-02-17 07:43:09,454 WARN [main]: mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2017-02-17 07:43:12,706 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2017-02-17 07:43:12,707 INFO [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10007/a922e0ae-c541-4b92-8f9d-088bde0d1475/map.xml
2017-02-17 07:43:12,707 INFO [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(517)) - Total number of paths: 1, launching 1 threads to check non-combinable ones.
2017-02-17 07:43:12,729 INFO [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(439)) - CombineHiveInputSplit creating pool for hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10006/0; using filter path hdfs://hadoopmaster:9000/tmp/hive/linux/d79925b9-fb4a-41c8-b45e-cc42db800405/hive_2017-02-17_07-43-04_926_1561320960043112851-1/-mr-10006/0
2017-02-17 07:43:12,768 INFO [main]: input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2017-02-17 07:43:12,771 INFO [main]: input.CombineFileInputFormat (CombineFileInputFormat.java:createSplits(413)) - DEBUG: Terminated node allocation with : CompletedNodes: 0, size left: 0
2017-02-17 07:43:12,773 INFO [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(494)) - number of splits 1
2017-02-17 07:43:12,775 INFO [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(587)) - Number of all splits 1
2017-02-17 07:43:12,775 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=getSplits start=1487346192706 end=1487346192775 duration=69 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2017-02-17 07:43:12,857 INFO [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(198)) - number of splits:1
2017-02-17 07:43:12,951 INFO [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(287)) - Submitting tokens for job: job_1487346076570_0001
2017-02-17 07:43:13,435 INFO [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(273)) - Submitted application application_1487346076570_0001
2017-02-17 07:43:13,505 INFO [main]: mapreduce.Job (Job.java:submit(1294)) - The url to track the job: http://hadoopmaster:8088/proxy/application_1487346076570_0001/
2017-02-17 07:43:13,510 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - Starting Job = job_1487346076570_0001, Tracking URL = http://hadoopmaster:8088/proxy/application_1487346076570_0001/
2017-02-17 07:43:13,514 INFO [main]: exec.Task (SessionState.java:printInfo(951)) - Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1487346076570_0001
2017-02-17 07:43:41,582 INFO [SIGINT handler]: CliDriver (SessionState.java:printInfo(951)) - Interrupting... Be patient, this might take some time.
2017-02-17 07:43:41,584 INFO [SIGINT handler]: CliDriver (SessionState.java:printInfo(951)) - Press Ctrl+C again to kill JVM
2017-02-17 07:43:41,841 INFO [SIGINT handler]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(395)) - Killed application application_1487346076570_0001
2017-02-17 07:43:42,058 INFO [SIGINT handler]: CliDriver (SessionState.java:printInfo(951)) - Exiting the JVM
2017-02-17 07:43:42,102 INFO [Thread-11]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(395)) - Killed application application_1487346076570_0001
Best answer
According to the log, you are executing the following query, which uses an IN subquery; Hive has some limitations on IN clauses:
select consume_date,hour_id,fromdate,company_name,b03 from consumes where b03 in (select max(b03) from consumes)
You can use the following query instead:
select consume_date,hour_id,fromdate,company_name,b03 from consumes order by b03 desc limit 1;
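Note that the ORDER BY ... LIMIT 1 rewrite returns only one row even when several rows share the maximum b03. If you need every row matching the maximum, a join-based rewrite preserves the semantics of the original IN subquery. A minimal sketch, reusing the column names from the log (the alias max_b03 is my own):

select c.consume_date, c.hour_id, c.fromdate, c.company_name, c.b03
from consumes c
join (select max(b03) as max_b03 from consumes) m   -- single-row subquery holding the maximum
  on (c.b03 = m.max_b03);

Either rewrite still compiles to MapReduce jobs (the log shows Total jobs = 3 for the original query, which was killed with Ctrl+C about 30 seconds after submission), so if the job again sits without progress it is worth checking that YARN actually has free resources to schedule the application.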
Regarding "hadoop - Any Hive query other than select * hangs", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42303308/