This article compiles a number of code examples for the Java class org.apache.hadoop.mapred.YARNRunner and shows how the class is used. The examples were extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they carry real-world reference value and should be useful to you. Details of the YARNRunner class:
Package path: org.apache.hadoop.mapred.YARNRunner
Class name: YARNRunner
Description: This class enables the current JobClient (0.22 hadoop) to run on YARN.
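Before the compiled examples, a quick orientation: application code normally never constructs YARNRunner directly; it is selected automatically once mapreduce.framework.name is set to "yarn". Below is a minimal sketch of that selection path, assuming only the standard Hadoop client libraries on the classpath (the class name YarnSubmitSketch and the job name are ours, for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.MRConfig;

public class YarnSubmitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "mapreduce.framework.name" = "yarn": the MapReduce client then selects
    // YarnClientProtocolProvider, whose create() hands back a YARNRunner.
    conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
    Job job = Job.getInstance(conf, "yarn-runner-demo");
    // job.submit() / job.waitForCompletion(true) would now be routed through
    // YARNRunner.submitJob(...), shown in the examples below.
  }
}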
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
@Override
public ClientProtocol create(Configuration conf) throws IOException {
  if (MRConfig.YARN_FRAMEWORK_NAME.equals(conf.get(MRConfig.FRAMEWORK_NAME))) {
    return new YARNRunner(conf);
  }
  return null;
}
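In Hadoop, org.apache.hadoop.mapreduce.Cluster discovers ClientProtocolProvider implementations through java.util.ServiceLoader; this create() returns a YARNRunner only when mapreduce.framework.name is "yarn", and returns null otherwise so the next provider (for example the local runner) gets a chance.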
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
@Override
public void close(ClientProtocol clientProtocol) throws IOException {
  if (clientProtocol instanceof YARNRunner) {
    ((YARNRunner)clientProtocol).close();
  }
}
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
private LocalResource createApplicationResource(FileContext fs, Path p,
    LocalResourceType type) throws IOException {
  return createApplicationResource(fs, p, null, type,
      LocalResourceVisibility.APPLICATION, false);
}
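This two-argument overload simply delegates to the fuller overload with no file symlink, LocalResourceVisibility.APPLICATION, and false for the final flag (which, in the Hadoop source, controls uploading the resource to the YARN shared cache).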
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-jobclient
// Test excerpt: this run() body executes inside a
// UserGroupInformation#doAs(PrivilegedExceptionAction) call.
@Override
public Void run() throws Exception {
  yarnRunner = new YARNRunner(conf, null, null);
  yarnRunner.getDelegationTokenFromHS(hsProxy);
  verify(hsProxy).getDelegationToken(any(GetDelegationTokenRequest.class));
  return null;
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
@Override
public JobStatus submitJob(JobID jobId, String jobSubmitDir, Credentials ts)
    throws IOException, InterruptedException {
  addHistoryToken(ts);
  // Construct necessary information to start the MR AM
  ApplicationSubmissionContext appContext =
      createApplicationSubmissionContext(conf, jobSubmitDir, ts);
  // Submit to ResourceManager
  try {
    ApplicationId applicationId =
        resMgrDelegate.submitApplication(appContext);
    ApplicationReport appMaster = resMgrDelegate
        .getApplicationReport(applicationId);
    String diagnostics =
        (appMaster == null ?
            "application report is null" : appMaster.getDiagnostics());
    if (appMaster == null
        || appMaster.getYarnApplicationState() == YarnApplicationState.FAILED
        || appMaster.getYarnApplicationState() == YarnApplicationState.KILLED) {
      throw new IOException("Failed to run job : " + diagnostics);
    }
    return clientCache.getClient(jobId).getJobStatus(jobId);
  } catch (YarnException e) {
    throw new IOException(e);
  }
}
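In short: submitJob first ensures a JobHistoryServer delegation token is attached (see addHistoryToken below), builds the ApplicationSubmissionContext that launches the MapReduce ApplicationMaster, submits it through the ResourceMgrDelegate, and fails fast with the application's diagnostics if the report comes back null, FAILED, or KILLED; otherwise it returns the job status obtained through the ClientCache.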
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
// The original excerpt was truncated; the surrounding killJob method is
// restored here (condensed) from the Hadoop source for readability.
@Override
public void killJob(JobID arg0) throws IOException, InterruptedException {
  // Kill through the RM if the job never reported status or is not running yet.
  JobStatus status = clientCache.getClient(arg0).getJobStatus(arg0);
  ApplicationId appId = TypeConverter.toYarn(arg0).getAppId();
  if (status == null) { killUnFinishedApplication(appId); return; }
  if (status.getState() != JobStatus.State.RUNNING) { killApplication(appId); return; }
  try {
    // Ask the MR AM to kill the job, then poll until the job reaches a
    // terminal state or the hard-kill timeout expires.
    clientCache.getClient(arg0).killJob(arg0);
    long timeKillIssued = System.currentTimeMillis();
    long currentTimeMillis = timeKillIssued;
    long killTimeOut = conf.getLong(MRJobConfig.MR_AM_HARD_KILL_TIMEOUT_MS,
        MRJobConfig.DEFAULT_MR_AM_HARD_KILL_TIMEOUT_MS);
    while ((currentTimeMillis < timeKillIssued + killTimeOut)
        && !isJobInTerminalState(status)) {
      try {
        Thread.sleep(1000L);
      } catch (InterruptedException ie) {
        break; // interrupted: stop polling and fall through
      }
      currentTimeMillis = System.currentTimeMillis();
      status = clientCache.getClient(arg0).getJobStatus(arg0);
      if (status == null) { killUnFinishedApplication(appId); return; }
    }
  } catch (IOException io) {
    LOG.debug("Error when checking for application status", io);
  }
  // Fall back to an RM-level kill if the AM did not complete the kill.
  if (status != null && !isJobInTerminalState(status)) {
    killApplication(appId);
  }
}
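The kill is thus two-level: first a cooperative kill through the MR ApplicationMaster, polled once per second, and only after the yarn.app.mapreduce.am.hard-kill-timeout-ms window expires does YARNRunner fall back to killing the application at the ResourceManager.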
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-jobclient
@Before
public void setUp() throws Exception {
  resourceMgrDelegate = mock(ResourceMgrDelegate.class);
  conf = new YarnConfiguration();
  conf.set(YarnConfiguration.RM_PRINCIPAL, "mapred/host@REALM");
  clientCache = new ClientCache(conf, resourceMgrDelegate);
  clientCache = spy(clientCache);
  yarnRunner = new YARNRunner(conf, resourceMgrDelegate, clientCache);
  yarnRunner = spy(yarnRunner);
  submissionContext = mock(ApplicationSubmissionContext.class);
  doAnswer(
      new Answer<ApplicationSubmissionContext>() {
        @Override
        public ApplicationSubmissionContext answer(InvocationOnMock invocation)
            throws Throwable {
          return submissionContext;
        }
      }
  ).when(yarnRunner).createApplicationSubmissionContext(any(Configuration.class),
      any(String.class), any(Credentials.class));
  appId = ApplicationId.newInstance(System.currentTimeMillis(), 1);
  jobId = TypeConverter.fromYarn(appId);
  if (testWorkDir.exists()) {
    FileContext.getLocalFSFileContext().delete(new Path(testWorkDir.toString()), true);
  }
  testWorkDir.mkdirs();
}
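The test wires a spied YARNRunner to mocked YARN dependencies and stubs createApplicationSubmissionContext to return a mock context, so the submission and kill paths can be exercised without a real cluster.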
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-jobclient
// Test excerpt: the security/token preconditions change between the calls
// below (those setup statements are truncated in the original), which is why
// the expected number of getDelegationToken() invocations moves from 0 to 1.
YARNRunner yarnRunner = new YARNRunner(conf, rmDelegate, clientCache);
yarnRunner.addHistoryToken(creds);
verify(mockHsProxy, times(0)).getDelegationToken(
    any(GetDelegationTokenRequest.class));
yarnRunner.addHistoryToken(creds);
verify(mockHsProxy, times(0)).getDelegationToken(
    any(GetDelegationTokenRequest.class));
yarnRunner.addHistoryToken(creds);
verify(mockHsProxy, times(0)).getDelegationToken(
    any(GetDelegationTokenRequest.class));
yarnRunner.addHistoryToken(creds);
verify(mockHsProxy, times(1)).getDelegationToken(
    any(GetDelegationTokenRequest.class));
yarnRunner.addHistoryToken(creds);
verify(mockHsProxy, times(1)).getDelegationToken(
    any(GetDelegationTokenRequest.class));
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
// Excerpt from createApplicationSubmissionContext; the original is truncated,
// so elided code is marked with "// ...".
localResources.put(MRJobConfig.JOB_CONF_FILE,
    createApplicationResource(defaultFileContext,
        jobConfPath, LocalResourceType.FILE));
if (jobConf.get(MRJobConfig.JAR) != null) {
  Path jobJarPath = new Path(jobConf.get(MRJobConfig.JAR));
  LocalResource rc = createApplicationResource(
      FileContext.getFileContext(jobJarPath.toUri(), jobConf),
      jobJarPath, LocalResourceType.PATTERN);
  // ... (rc is configured and registered as MRJobConfig.JOB_JAR)
}
for (String s : new String[] { MRJobConfig.JOB_SPLIT,
    MRJobConfig.JOB_SPLIT_METAINFO }) {
  localResources.put(
      MRJobConfig.JOB_SUBMIT_DIR + "/" + s,
      createApplicationResource(defaultFileContext,
          new Path(jobSubmitDir, s), LocalResourceType.FILE));
}
// ...
warnForJavaLibPath(conf.get(MRJobConfig.MAP_JAVA_OPTS, ""), "map",
    MRJobConfig.MAP_JAVA_OPTS, MRJobConfig.MAP_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS, ""), "map",
    MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS, MRJobConfig.MAPRED_ADMIN_USER_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.REDUCE_JAVA_OPTS, ""), "reduce",
    MRJobConfig.REDUCE_JAVA_OPTS, MRJobConfig.REDUCE_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS, ""), "reduce",
    MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS, MRJobConfig.MAPRED_ADMIN_USER_ENV);
// ...
warnForJavaLibPath(mrAppMasterAdminOptions, "app master",
    MRJobConfig.MR_AM_ADMIN_COMMAND_OPTS, MRJobConfig.MR_AM_ADMIN_USER_ENV);
vargs.add(mrAppMasterAdminOptions);
warnForJavaLibPath(mrAppMasterUserOptions, "app master",
    MRJobConfig.MR_AM_COMMAND_OPTS, MRJobConfig.MR_AM_ENV);
vargs.add(mrAppMasterUserOptions);
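For context: warnForJavaLibPath logs a warning whenever -Djava.library.path appears in one of the *.java.opts properties, since overriding the library path there can break loading of the Hadoop native libraries; the recommended place is the corresponding env-style setting, which is why each call names both the opts property and its env counterpart.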
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
@VisibleForTesting
void addHistoryToken(Credentials ts) throws IOException, InterruptedException {
  /* check if we have a hsproxy, if not, no need */
  MRClientProtocol hsProxy = clientCache.getInitializedHSProxy();
  if (UserGroupInformation.isSecurityEnabled() && (hsProxy != null)) {
    /*
     * note that get delegation token was called. Again this is hack for oozie
     * to make sure we add history server delegation tokens to the credentials
     */
    RMDelegationTokenSelector tokenSelector = new RMDelegationTokenSelector();
    Text service = resMgrDelegate.getRMDelegationTokenService();
    if (tokenSelector.selectToken(service, ts.getAllTokens()) != null) {
      Text hsService = SecurityUtil.buildTokenService(hsProxy
          .getConnectAddress());
      if (ts.getToken(hsService) == null) {
        ts.addToken(hsService, getDelegationTokenFromHS(hsProxy));
      }
    }
  }
}
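A history-server token is fetched only when all of the following hold: security is enabled, an initialized JobHistoryServer proxy exists, the credentials already carry an RM delegation token, and no token for the history-server service is present yet — the "hack for oozie" the comment refers to.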
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-jobclient
private ApplicationSubmissionContext buildSubmitContext(
    YARNRunner yarnRunner, JobConf jobConf) throws IOException {
  File jobxml = new File(testWorkDir, MRJobConfig.JOB_CONF_FILE);
  OutputStream out = new FileOutputStream(jobxml);
  conf.writeXml(out);
  out.close();
  File jobsplit = new File(testWorkDir, MRJobConfig.JOB_SPLIT);
  out = new FileOutputStream(jobsplit);
  out.close();
  File jobsplitmetainfo = new File(testWorkDir,
      MRJobConfig.JOB_SPLIT_METAINFO);
  out = new FileOutputStream(jobsplitmetainfo);
  out.close();
  return yarnRunner.createApplicationSubmissionContext(jobConf,
      testWorkDir.toString(), new Credentials());
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
private void killUnFinishedApplication(ApplicationId appId)
    throws IOException {
  ApplicationReport application = null;
  try {
    application = resMgrDelegate.getApplicationReport(appId);
  } catch (YarnException e) {
    throw new IOException(e);
  }
  if (application.getYarnApplicationState() == YarnApplicationState.FINISHED
      || application.getYarnApplicationState() == YarnApplicationState.FAILED
      || application.getYarnApplicationState() == YarnApplicationState.KILLED) {
    return;
  }
  killApplication(appId);
}
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
// Excerpt from createApplicationSubmissionContext (truncated in the original;
// "// ..." marks elided code).
warnForJavaLibPath(conf.get(MRJobConfig.MAP_JAVA_OPTS, ""),
    "map",
    MRJobConfig.MAP_JAVA_OPTS,
    MRJobConfig.MAP_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS, ""),
    "map",
    MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS,
    MRJobConfig.MAPRED_ADMIN_USER_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.REDUCE_JAVA_OPTS, ""),
    "reduce",
    MRJobConfig.REDUCE_JAVA_OPTS,
    MRJobConfig.REDUCE_ENV);
warnForJavaLibPath(conf.get(MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS, ""),
    "reduce",
    MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS,
    MRJobConfig.MAPRED_ADMIN_USER_ENV);
// ...
warnForJavaLibPath(mrAppMasterAdminOptions, "app master",
    MRJobConfig.MR_AM_ADMIN_COMMAND_OPTS, MRJobConfig.MR_AM_ADMIN_USER_ENV);
vargs.add(mrAppMasterAdminOptions);
warnForJavaLibPath(mrAppMasterUserOptions, "app master",
    MRJobConfig.MR_AM_COMMAND_OPTS, MRJobConfig.MR_AM_ENV);
vargs.add(mrAppMasterUserOptions);
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
@Override
public JobStatus submitJob(JobID jobId, String jobSubmitDir, Credentials ts)
    throws IOException, InterruptedException {
  addHistoryToken(ts);
  ApplicationSubmissionContext appContext =
      createApplicationSubmissionContext(conf, jobSubmitDir, ts);
  // Submit to ResourceManager
  try {
    ApplicationId applicationId =
        resMgrDelegate.submitApplication(appContext);
    ApplicationReport appMaster = resMgrDelegate
        .getApplicationReport(applicationId);
    String diagnostics =
        (appMaster == null ?
            "application report is null" : appMaster.getDiagnostics());
    if (appMaster == null
        || appMaster.getYarnApplicationState() == YarnApplicationState.FAILED
        || appMaster.getYarnApplicationState() == YarnApplicationState.KILLED) {
      throw new IOException("Failed to run job : " + diagnostics);
    }
    return clientCache.getClient(jobId).getJobStatus(jobId);
  } catch (YarnException e) {
    throw new IOException(e);
  }
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
// As above, the original excerpt was truncated; the surrounding killJob
// method is restored here (condensed) from the Hadoop source.
@Override
public void killJob(JobID arg0) throws IOException, InterruptedException {
  JobStatus status = clientCache.getClient(arg0).getJobStatus(arg0);
  ApplicationId appId = TypeConverter.toYarn(arg0).getAppId();
  if (status == null) { killUnFinishedApplication(appId); return; }
  if (status.getState() != JobStatus.State.RUNNING) { killApplication(appId); return; }
  try {
    clientCache.getClient(arg0).killJob(arg0);
    long timeKillIssued = System.currentTimeMillis();
    long currentTimeMillis = timeKillIssued;
    long killTimeOut = conf.getLong(MRJobConfig.MR_AM_HARD_KILL_TIMEOUT_MS,
        MRJobConfig.DEFAULT_MR_AM_HARD_KILL_TIMEOUT_MS);
    while ((currentTimeMillis < timeKillIssued + killTimeOut)
        && !isJobInTerminalState(status)) {
      try {
        Thread.sleep(1000L);
      } catch (InterruptedException ie) {
        break; // interrupted: stop polling and fall through
      }
      currentTimeMillis = System.currentTimeMillis();
      status = clientCache.getClient(arg0).getJobStatus(arg0);
      if (status == null) { killUnFinishedApplication(appId); return; }
    }
  } catch (IOException io) {
    LOG.debug("Error when checking for application status", io);
  }
  if (status != null && !isJobInTerminalState(status)) {
    killApplication(appId);
  }
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
@VisibleForTesting
void addHistoryToken(Credentials ts) throws IOException, InterruptedException {
  /* check if we have a hsproxy, if not, no need */
  MRClientProtocol hsProxy = clientCache.getInitializedHSProxy();
  if (UserGroupInformation.isSecurityEnabled() && (hsProxy != null)) {
    /*
     * note that get delegation token was called. Again this is hack for oozie
     * to make sure we add history server delegation tokens to the credentials
     */
    RMDelegationTokenSelector tokenSelector = new RMDelegationTokenSelector();
    Text service = resMgrDelegate.getRMDelegationTokenService();
    if (tokenSelector.selectToken(service, ts.getAllTokens()) != null) {
      Text hsService = SecurityUtil.buildTokenService(hsProxy
          .getConnectAddress());
      if (ts.getToken(hsService) == null) {
        ts.addToken(hsService, getDelegationTokenFromHS(hsProxy));
      }
    }
  }
}
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
private void killUnFinishedApplication(ApplicationId appId)
    throws IOException {
  ApplicationReport application = null;
  try {
    application = resMgrDelegate.getApplicationReport(appId);
  } catch (YarnException e) {
    throw new IOException(e);
  }
  if (application.getYarnApplicationState() == YarnApplicationState.FINISHED
      || application.getYarnApplicationState() == YarnApplicationState.FAILED
      || application.getYarnApplicationState() == YarnApplicationState.KILLED) {
    return;
  }
  killApplication(appId);
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-jobclient
@Override
public ClientProtocol create(Configuration conf) throws IOException {
  if (MRConfig.YARN_FRAMEWORK_NAME.equals(conf.get(MRConfig.FRAMEWORK_NAME))) {
    return new YARNRunner(conf);
  }
  return null;
}
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
@Override
public void close(ClientProtocol clientProtocol) throws IOException {
  if (clientProtocol instanceof YARNRunner) {
    ((YARNRunner)clientProtocol).close();
  }
}
Code example source: origin: org.apache.hadoop/hadoop-mapreduce-client-jobclient
// Excerpt from a newer createApplicationSubmissionContext (truncated in the
// original; "// ..." marks elided code). Unlike the earlier variant, the job
// jar's visibility is computed rather than hard-coded.
localResources.put(MRJobConfig.JOB_CONF_FILE,
    createApplicationResource(defaultFileContext,
        jobConfPath, LocalResourceType.FILE));
if (jobConf.get(MRJobConfig.JAR) != null) {
  Path jobJarPath = new Path(jobConf.get(MRJobConfig.JAR));
  // The condition below is truncated in the original; conceptually the job
  // jar is PUBLIC when the shared-cache upload policy allows sharing it:
  LocalResourceVisibility jobJarViz = jobJarUploadAllowed  // placeholder for the elided check
      ? LocalResourceVisibility.PUBLIC
      : LocalResourceVisibility.APPLICATION;
  LocalResource rc = createApplicationResource(
      FileContext.getFileContext(jobJarPath.toUri(), jobConf), jobJarPath,
      MRJobConfig.JOB_JAR, LocalResourceType.PATTERN, jobJarViz,
      /* ... */);
  // ...
}
for (String s : new String[] { MRJobConfig.JOB_SPLIT,
    MRJobConfig.JOB_SPLIT_METAINFO }) {
  localResources.put(
      MRJobConfig.JOB_SUBMIT_DIR + "/" + s,
      createApplicationResource(defaultFileContext,
          new Path(jobSubmitDir, s), LocalResourceType.FILE));
}