
hadoop - Setting the queue name in pig v0.15


I am getting an exception when trying to execute a pig script through the shell.

JobId   Alias   Feature Message Outputs
job_1520637789949_340250 A,B,D,top_rec GROUP_BY Message: java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1520637789949_340250 to YARN : Application rejected by queue placement policy

As I understand it, this is because the correct queue name is not set for the MR execution. To find out how to set the queuename for the mapreduce job, I searched through the help (pig --help), which lists the options below:

Apache Pig version 0.15.0-mapr-1611 (rexported)
compiled Dec 06 2016, 05:50:07

USAGE: Pig [options] [-] : Run interactively in grunt shell.
Pig [options] -e[xecute] cmd [cmd ...] : Run cmd(s).
Pig [options] [-f[ile]] file : Run cmds found in file.
options include:
-4, -log4jconf - Log4j configuration file, overrides log conf
-b, -brief - Brief logging (no timestamps)
-c, -check - Syntax check
-d, -debug - Debug level, INFO is default
-e, -execute - Commands to execute (within quotes)
-f, -file - Path to the script to execute
-g, -embedded - ScriptEngine classname or keyword for the ScriptEngine
-h, -help - Display this message. You can specify topic to get help for that topic.
properties is the only topic currently supported: -h properties.
-i, -version - Display version information
-l, -logfile - Path to client side log file; default is current working directory.
-m, -param_file - Path to the parameter file
-p, -param - Key value pair of the form param=val
-r, -dryrun - Produces script with substituted parameters. Script is not executed.
-t, -optimizer_off - Turn optimizations off. The following values are supported:
ConstantCalculator - Calculate constants at compile time
SplitFilter - Split filter conditions
PushUpFilter - Filter as early as possible
MergeFilter - Merge filter conditions
PushDownForeachFlatten - Join or explode as late as possible
LimitOptimizer - Limit as early as possible
ColumnMapKeyPrune - Remove unused data
AddForEach - Add ForEach to remove unneeded columns
MergeForEach - Merge adjacent ForEach
GroupByConstParallelSetter - Force parallel 1 for "group all" statement
PartitionFilterOptimizer - Pushdown partition filter conditions to loader implementing LoadMetaData
PredicatePushdownOptimizer - Pushdown filter predicates to loader implementing LoadPredicatePushDown
All - Disable all optimizations
All optimizations listed here are enabled by default. Optimization values are case insensitive.
-v, -verbose - Print all error messages to screen
-w, -warning - Turn warning logging on; also turns warning aggregation off
-x, -exectype - Set execution mode: local|mapreduce|tez, default is mapreduce.
-F, -stop_on_failure - Aborts execution on the first failed job; default is off
-M, -no_multiquery - Turn multiquery optimization off; default is on
-N, -no_fetch - Turn fetch optimization off; default is on
-P, -propertyFile - Path to property file
-printCmdDebug - Overrides anything else and prints the actual command used to run Pig, including
any environment variables that are set by the pig command.
18/03/30 13:03:05 INFO pig.Main: Pig script completed in 163 milliseconds (163 ms)

I tried pig -p mapreduce.job.queuename=my_queue; and was able to log into grunt without any error.

However, on the very first command it threw the following:

ERROR 2997: Encountered IOException. org.apache.pig.tools.parameters.ParseException: Encountered " <OTHER> ".job.queuename=my_queue "" at line 1, column 10.
Was expecting:
"=" ...

I'm not sure whether I'm doing this right?

Best Answer

To set the queuename in pig 0.15, I have the following options (they may work for other versions as well):

1) pig comes with an option to start the pig session with a queue name. Simply use the command below:

pig -Dmapreduce.job.queuename=my_queue
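
For example (my_script.pig below is just an illustrative name), the same property can be used when running a whole script; the -D options are typically placed before the other pig arguments:

pig -Dmapreduce.job.queuename=my_queue -f my_script.pig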

2) Another option is to set it in the grunt shell or in the pig script itself:

set mapreduce.job.queuename my_queue;
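
A minimal sketch of a script with the queue set at the top might look like this (the load path, schema and aliases are purely illustrative, loosely mirroring the A/B/top_rec aliases from the failed job):

set mapreduce.job.queuename my_queue;                 -- every MR job launched by this script goes to my_queue
A = LOAD '/data/input' AS (id:chararray, val:int);    -- illustrative input path and schema
B = GROUP A BY id;
top_rec = FOREACH B GENERATE group, COUNT(A) AS cnt;
STORE top_rec INTO '/data/output';                    -- illustrative output path

As a side note, the -p option tried in the question is meant for param=val parameter substitution inside the script (see the help output above), which appears to be why a dotted Hadoop property name like mapreduce.job.queuename fails to parse there.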

This question about setting the queue name in pig v0.15 originally appeared on Stack Overflow: https://stackoverflow.com/questions/49580792/
