This article collects code examples of the Java method org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient(), showing how YarnClient.createYarnClient() is used in practice. The examples are extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they are representative and should serve as useful references. Details of the YarnClient.createYarnClient() method:
Package: org.apache.hadoop.yarn.client.api
Class: YarnClient
Method: createYarnClient
Description: Create a new instance of YarnClient.
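Before the project excerpts, here is a minimal, self-contained sketch of the lifecycle all of them share: create the client, init it with a configuration, start it, use it, then stop it. The class name YarnClientLifecycleSketch is an illustrative choice for this sketch, and the call to getApplications() is just one example of using the started client; neither is taken from the projects below.
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnClientLifecycleSketch {
    public static void main(String[] args) throws Exception {
        // YarnConfiguration picks up yarn-site.xml etc. from the classpath.
        Configuration conf = new YarnConfiguration();
        // The create -> init -> start sequence shared by every excerpt below.
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
            // Use the started client, e.g. list applications known to the ResourceManager.
            List<ApplicationReport> reports = yarnClient.getApplications();
            for (ApplicationReport report : reports) {
                System.out.println(report.getApplicationId() + " " + report.getYarnApplicationState());
            }
        } finally {
            // Always stop the client to release its RPC resources.
            yarnClient.stop();
        }
    }
}
Because YarnClient is a Hadoop Service and Service extends Closeable, the client can also be managed with try-with-resources, which is what the uber/AthenaX excerpts below do.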
Code example source: alibaba/jstorm
JstormOnYarn(String appMasterMainClass, Configuration conf) {
    this.jstormClientContext.conf = conf;
    this.appMasterMainClass = appMasterMainClass;
    jstormClientContext.yarnClient = YarnClient.createYarnClient();
    jstormClientContext.yarnClient.init(conf);
    jstormClientContext.opts = JstormYarnUtils.initClientOptions();
}
Code example source: apache/drill
public YarnRMClient(YarnConfiguration conf) {
    this.conf = conf;
    yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();
}
Code example source: apache/incubator-gobblin
public GobblinYarnAppLauncher(Config config, YarnConfiguration yarnConfiguration) throws IOException {
    this.config = config;
    this.applicationName = config.getString(GobblinYarnConfigurationKeys.APPLICATION_NAME_KEY);
    this.appQueueName = config.getString(GobblinYarnConfigurationKeys.APP_QUEUE_KEY);
    String zkConnectionString = config.getString(GobblinClusterConfigurationKeys.ZK_CONNECTION_STRING_KEY);
    LOGGER.info("Using ZooKeeper connection string: " + zkConnectionString);
    this.helixManager = HelixManagerFactory.getZKHelixManager(
        config.getString(GobblinClusterConfigurationKeys.HELIX_CLUSTER_NAME_KEY), GobblinClusterUtils.getHostname(),
        InstanceType.SPECTATOR, zkConnectionString);
    this.yarnConfiguration = yarnConfiguration;
    this.yarnConfiguration.set("fs.automatic.close", "false");
    this.yarnClient = YarnClient.createYarnClient();
    this.yarnClient.init(this.yarnConfiguration);
    this.fs = config.hasPath(ConfigurationKeys.FS_URI_KEY) ?
        FileSystem.get(URI.create(config.getString(ConfigurationKeys.FS_URI_KEY)), this.yarnConfiguration) :
        FileSystem.get(this.yarnConfiguration);
    this.closer.register(this.fs);
    this.applicationStatusMonitor = Executors.newSingleThreadScheduledExecutor(
        ExecutorsUtils.newThreadFactory(Optional.of(LOGGER), Optional.of("GobblinYarnAppStatusMonitor")));
    this.appReportIntervalMinutes = config.getLong(GobblinYarnConfigurationKeys.APP_REPORT_INTERVAL_MINUTES_KEY);
    this.appMasterJvmArgs = config.hasPath(GobblinYarnConfigurationKeys.APP_MASTER_JVM_ARGS_KEY) ?
        Optional.of(config.getString(GobblinYarnConfigurationKeys.APP_MASTER_JVM_ARGS_KEY)) :
        Optional.<String>absent();
    this.sinkLogRootDir = new Path(config.getString(GobblinYarnConfigurationKeys.LOGS_SINK_ROOT_DIR_KEY));
    this.maxGetApplicationReportFailures = config.getInt(GobblinYarnConfigurationKeys.MAX_GET_APP_REPORT_FAILURES_KEY);
    this.emailNotificationOnShutdown =
        config.getBoolean(GobblinYarnConfigurationKeys.EMAIL_NOTIFICATION_ON_SHUTDOWN_KEY);
}
Code example source: apache/hive
/**
 * Kills all jobs tagged with the given tag that have been started after the
 * given timestamp.
 */
@Override
public void killJobs(String tag, long timestamp) {
    try {
        LOG.info("Looking for jobs to kill...");
        Set<ApplicationId> childJobs = getYarnChildJobs(tag, timestamp);
        if (childJobs.isEmpty()) {
            LOG.info("No jobs found for the given tag");
            return;
        } else {
            LOG.info(String.format("Found MR jobs count: %d", childJobs.size()));
            LOG.info("Killing all found jobs");
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();
            for (ApplicationId app : childJobs) {
                LOG.info(String.format("Killing job: %s ...", app));
                yarnClient.killApplication(app);
                LOG.info(String.format("Job %s killed", app));
            }
        }
    } catch (YarnException ye) {
        throw new RuntimeException("Exception occurred while killing child job(s)", ye);
    } catch (IOException ioe) {
        throw new RuntimeException("Exception occurred while killing child job(s)", ioe);
    }
}
Code example source: apache/flink
private AbstractYarnClusterDescriptor getClusterDescriptor(
        Configuration configuration,
        YarnConfiguration yarnConfiguration,
        String configurationDirectory) {
    final YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(yarnConfiguration);
    yarnClient.start();
    return new YarnClusterDescriptor(
        configuration,
        yarnConfiguration,
        configurationDirectory,
        yarnClient,
        false);
}
Code example source: apache/hive
public static void killChildYarnJobs(Configuration conf, String tag) {
    try {
        if (tag == null) {
            return;
        }
        LOG.info("Killing yarn jobs using query tag: " + tag);
        Set<ApplicationId> childYarnJobs = getChildYarnJobs(conf, tag);
        if (!childYarnJobs.isEmpty()) {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();
            for (ApplicationId app : childYarnJobs) {
                yarnClient.killApplication(app);
            }
        }
    } catch (IOException | YarnException ye) {
        LOG.warn("Exception occurred while killing child job(s)", ye);
    }
}
Code example source: apache/hive
public boolean isApplicationAccepted(HiveConf conf, String applicationId) {
    if (applicationId == null) {
        return false;
    }
    YarnClient yarnClient = null;
    try {
        LOG.info("Trying to find " + applicationId);
        ApplicationId appId = getApplicationIDFromString(applicationId);
        yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        ApplicationReport appReport = yarnClient.getApplicationReport(appId);
        return appReport != null && appReport.getYarnApplicationState() == YarnApplicationState.ACCEPTED;
    } catch (Exception ex) {
        LOG.error("Failed getting application status for: " + applicationId + ": " + ex, ex);
        return false;
    } finally {
        if (yarnClient != null) {
            try {
                yarnClient.stop();
            } catch (Exception ex) {
                LOG.error("Failed to stop yarn client: " + ex, ex);
            }
        }
    }
}
Code example source: apache/hive
public static boolean isApplicationAccepted(HiveConf conf, String applicationId) {
    if (applicationId == null) {
        return false;
    }
    YarnClient yarnClient = null;
    try {
        ApplicationId appId = getApplicationIDFromString(applicationId);
        yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        ApplicationReport appReport = yarnClient.getApplicationReport(appId);
        return appReport != null && appReport.getYarnApplicationState() == YarnApplicationState.ACCEPTED;
    } catch (Exception ex) {
        LOG.error("Failed getting application status for: " + applicationId + ": " + ex, ex);
        return false;
    } finally {
        if (yarnClient != null) {
            try {
                yarnClient.stop();
            } catch (Exception ex) {
                LOG.error("Failed to stop yarn client: " + ex, ex);
            }
        }
    }
}
Code example source: apache/flink
@BeforeClass
public static void setupClass() {
    yarnConfiguration = new YarnConfiguration();
    yarnClient = YarnClient.createYarnClient();
    yarnClient.init(yarnConfiguration);
    yarnClient.start();
}
Code example source: Qihoo360/XLearning
yarnClient = YarnClient.createYarnClient();
yarnClient.init(conf);
yarnClient.start();
Code example source: apache/flink
@Before
public void checkClusterEmpty() {
    if (yarnClient == null) {
        yarnClient = YarnClient.createYarnClient();
        yarnClient.init(getYarnConfiguration());
        yarnClient.start();
    }
    flinkConfiguration = new org.apache.flink.configuration.Configuration(globalConfiguration);
}
Code example source: apache/flink
YarnClient yc = YarnClient.createYarnClient();
yc.init(YARN_CONFIGURATION);
yc.start();
Code example source: apache/drill
@Override
public void start(CallbackHandler resourceCallback,
        org.apache.hadoop.yarn.client.api.async.NMClientAsync.CallbackHandler nodeCallback) {
    conf = new YarnConfiguration();
    resourceMgr = AMRMClientAsync.createAMRMClientAsync(pollPeriodMs, resourceCallback);
    resourceMgr.init(conf);
    resourceMgr.start();
    // Create the asynchronous node manager client
    nodeMgr = NMClientAsync.createNMClientAsync(nodeCallback);
    nodeMgr.init(conf);
    nodeMgr.start();
    client = YarnClient.createYarnClient();
    client.init(conf);
    client.start();
    String appIdStr = System.getenv(DrillOnYarnConfig.APP_ID_ENV_VAR);
    if (appIdStr != null) {
        appId = ConverterUtils.toApplicationId(appIdStr);
        try {
            appReport = client.getApplicationReport(appId);
        } catch (YarnException | IOException e) {
            LOG.error(
                "Failed to get YARN application report for App ID: " + appIdStr, e);
        }
    }
}
Code example source: apache/ignite
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(conf);
yarnClient.start();
Code example source: apache/incubator-gobblin
this.yarnClient = this.closer.register(YarnClient.createYarnClient());
this.yarnClient.init(clusterConf);
this.yarnClient.start();
Code example source: apache/flink
final YarnClient closableYarnClient = YarnClient.createYarnClient();
closableYarnClient.init(yarnConfiguration);
closableYarnClient.start();
Code example source: apache/metron
Client(String appMasterMainClass, Configuration conf) {
    this.conf = conf;
    this.appMasterMainClass = appMasterMainClass;
    yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
}
Code example source: uber/AthenaX
ClusterInfo(String name, YarnClusterConfiguration conf) {
    this.name = name;
    this.client = YarnClient.createYarnClient();
    client.init(conf.conf());
    client.start();
    this.conf = conf;
}
Code example source: uber/AthenaX
doReturn(JobITestUtil.trivialJobGraph()).when(res).jobGraph();
try (YarnClient client = YarnClient.createYarnClient()) {
    ClusterInfo clusterInfo = new ClusterInfo(CLUSTER_NAME, clusterConf, client);
    YarnConfiguration conf = cluster.getYarnConfiguration();
Code example source: uber/AthenaX
@Test
public void testCreateAthenaXCluster() throws Exception {
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    Configuration flinkConf = new Configuration();
    flinkConf.setString(JobManagerOptions.ADDRESS, "localhost");
    try (MiniAthenaXCluster cluster = new MiniAthenaXCluster(JobDeployerITest.class.getSimpleName())) {
        cluster.start();
        YarnConfiguration conf = cluster.getYarnConfiguration();
        YarnClusterConfiguration clusterConf = cluster.getYarnClusterConf();
        final ApplicationId appId;
        try (YarnClient client = YarnClient.createYarnClient()) {
            client.init(conf);
            client.start();
            JobDeployer deployer = new JobDeployer(clusterConf, client, executor, flinkConf);
            appId = deployer.createApplication();
            InstanceMetadata md = new InstanceMetadata(UUID.randomUUID(), UUID.randomUUID());
            JobDefinitionResource resource = new JobDefinitionResource()
                .queue(null).vCores(1L).executionSlots(1L).memory(2048L);
            JobConf jobConf = new JobConf(appId, "test", Collections.emptyList(), resource, md);
            deployer.start(JobITestUtil.trivialJobGraph(), jobConf);
            YarnApplicationState state = MiniAthenaXCluster.pollFinishedApplicationState(client, appId);
            assertEquals(FINISHED, state);
        }
    }
}