This article collects Java code examples of the org.apache.hadoop.yarn.conf.YarnConfiguration.addResource() method, showing how YarnConfiguration.addResource() is used in practice. The examples are taken from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the YarnConfiguration.addResource() method:

Package path: org.apache.hadoop.yarn.conf.YarnConfiguration
Class name: YarnConfiguration
Method name: addResource
Description: not available
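For context: addResource() merges the properties from an additional Hadoop configuration XML resource (a file, Path, URL, or InputStream) into the configuration, with later resources overriding earlier ones for the same key. The files it loads follow Hadoop's standard configuration format; a minimal yarn-site.xml might look like the fragment below (the property value is purely illustrative, not a recommended setting):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Illustrative entry; addResource() would merge this key into the config -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>
</configuration>
```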
Code example source: apache/flink

public AbstractYarnClusterDescriptor(
        Configuration flinkConfiguration,
        YarnConfiguration yarnConfiguration,
        String configurationDirectory,
        YarnClient yarnClient,
        boolean sharedYarnClient) {
    this.yarnConfiguration = Preconditions.checkNotNull(yarnConfiguration);
    // for unit tests only
    if (System.getenv("IN_TESTS") != null) {
        try {
            yarnConfiguration.addResource(new File(System.getenv("YARN_CONF_DIR"), "yarn-site.xml").toURI().toURL());
        } catch (Throwable t) {
            throw new RuntimeException("Error", t);
        }
    }
    this.yarnClient = Preconditions.checkNotNull(yarnClient);
    this.sharedYarnClient = sharedYarnClient;
    this.flinkConfiguration = Preconditions.checkNotNull(flinkConfiguration);
    userJarInclusion = getUserJarInclusionMode(flinkConfiguration);
    this.configurationDirectory = Preconditions.checkNotNull(configurationDirectory);
}
Code example source: alibaba/jstorm

public Executor(String instancName, String shellCommand, STARTType startType, String runningContainer,
                String localDir, String deployPath,
                String hadoopHome, String javaHome, String pythonHome, String dstPath, String portList, String shellArgs,
                String ExecShellStringPath, String applicationId, String supervisorLogviewPort, String nimbusThriftPort) {
    executorMeta = new ExecutorMeta(instancName, shellCommand, startType, runningContainer,
            localDir, deployPath, hadoopHome, javaHome, pythonHome, dstPath, portList, shellArgs,
            ExecShellStringPath, applicationId, supervisorLogviewPort, nimbusThriftPort);
    conf = new YarnConfiguration();
    Path yarnSite = new Path(hadoopHome + JOYConstants.YARN_SITE_PATH);
    conf.addResource(yarnSite);
    // get the first log dir
    logDir = conf.get(JOYConstants.YARN_NM_LOG, JOYConstants.YARN_NM_LOG_DIR).split(JOYConstants.COMMA)[0]
            + JOYConstants.BACKLASH + applicationId + JOYConstants.BACKLASH + runningContainer;
    // set up RegistryOperations
    registryOperations = RegistryOperationsFactory.createInstance(JOYConstants.YARN_REGISTRY, conf);
    try {
        setupInitialRegistryPaths();
    } catch (IOException e) {
        e.printStackTrace();
    }
    registryOperations.start();
}
Code example source: uber/AthenaX

public YarnClusterConfiguration toYarnClusterConfiguration() {
    Preconditions.checkNotNull(yarnSite, "yarn.site.location is not configured");
    Preconditions.checkNotNull(homeDir, "athenax.home.dir is not configured");
    Preconditions.checkNotNull(flinkUberJar, "flink.uber.jar.location is not configured");
    YarnConfiguration yarnConf = new YarnConfiguration();
    yarnConf.addResource(new Path(URI.create(yarnSite)));
    return new YarnClusterConfiguration(
        yarnConf, homeDir, new Path(flinkUberJar),
        resourcesToLocalize.stream().map(x -> new Path(URI.create(x))).collect(Collectors.toSet()),
        additionalJars.stream().map(x -> new Path(URI.create(x))).collect(Collectors.toSet()));
}
Code example source: DTStack/flinkx

public static YarnConfiguration getYarnConf(String yarnConfDir) {
    YarnConfiguration yarnConf = new YarnConfiguration();
    try {
        File dir = new File(yarnConfDir);
        if (dir.exists() && dir.isDirectory()) {
            File[] xmlFileList = dir.listFiles((dir1, name) -> name.endsWith(".xml"));
            if (xmlFileList != null) {
                for (File xmlFile : xmlFileList) {
                    yarnConf.addResource(xmlFile.toURI().toURL());
                }
            }
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    haYarnConf(yarnConf);
    return yarnConf;
}
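The directory-scan pattern in getYarnConf can be exercised without any Hadoop dependency. The sketch below (the class name and the temp-directory setup are my own, not from the source project) collects the file URLs that the loop above would pass to addResource():

```java
import java.io.File;
import java.net.URL;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class XmlScanSketch {
    // Collect file: URLs for every .xml file in a directory, mirroring the
    // loop that feeds yarnConf.addResource(...) in the example above.
    static List<URL> xmlResourceUrls(String confDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        File dir = new File(confDir);
        if (dir.exists() && dir.isDirectory()) {
            File[] xmlFiles = dir.listFiles((d, name) -> name.endsWith(".xml"));
            if (xmlFiles != null) {
                for (File f : xmlFiles) {
                    urls.add(f.toURI().toURL());
                }
            }
        }
        return urls;
    }

    public static void main(String[] args) throws Exception {
        // Build a throwaway conf dir with one .xml file and one non-XML file.
        File dir = Files.createTempDirectory("yarn-conf-sketch").toFile();
        new File(dir, "yarn-site.xml").createNewFile();
        new File(dir, "README.txt").createNewFile();
        // Only the .xml file is picked up.
        System.out.println(xmlResourceUrls(dir.getAbsolutePath()).size()); // prints 1
    }
}
```

Filtering with a FilenameFilter lambda keeps the scan cheap and skips non-configuration files, while the null check on listFiles guards against the directory becoming unreadable.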
Code example source: DTStack/flinkStreamSQL
(Identical to the DTStack/flinkx getYarnConf example above.)
Code example source: DTStack/flinkx

public static YarnConfiguration getYarnConfiguration(Configuration config, String yarnConfDir) throws Exception {
    YarnConfiguration yarnConf = new YarnConfiguration();
    config.setString(ConfigConstants.PATH_HADOOP_CONFIG, yarnConfDir);
    FileSystem.initialize(config);
    File dir = new File(yarnConfDir);
    if (dir.exists() && dir.isDirectory()) {
        File[] xmlFileList = dir.listFiles((dir1, name) -> name.endsWith(".xml"));
        if (xmlFileList != null) {
            for (File xmlFile : xmlFileList) {
                yarnConf.addResource(xmlFile.toURI().toURL());
            }
        }
    }
    return yarnConf;
}
Code example source: linkedin/TonY

private void createYarnClient() {
    if (this.yarnConfAddress != null) {
        this.yarnConf.addResource(new Path(this.yarnConfAddress));
    }
    if (this.hdfsConfAddress != null) {
        this.hdfsConf.addResource(new Path(this.hdfsConfAddress));
    }
    int numRMConnectRetries = tonyConf.getInt(TonyConfigurationKeys.RM_CLIENT_CONNECT_RETRY_MULTIPLIER,
        TonyConfigurationKeys.DEFAULT_RM_CLIENT_CONNECT_RETRY_MULTIPLIER);
    long rmMaxWaitMS = yarnConf.getLong(YarnConfiguration.RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS,
        YarnConfiguration.DEFAULT_RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS) * numRMConnectRetries;
    yarnConf.setLong(YarnConfiguration.RESOURCEMANAGER_CONNECT_MAX_WAIT_MS, rmMaxWaitMS);
    if (System.getenv(Constants.HADOOP_CONF_DIR) != null) {
        hdfsConf.addResource(new Path(System.getenv(Constants.HADOOP_CONF_DIR) + File.separatorChar + Constants.CORE_SITE_CONF));
        yarnConf.addResource(new Path(System.getenv(Constants.HADOOP_CONF_DIR) + File.separatorChar + Constants.CORE_SITE_CONF));
        hdfsConf.addResource(new Path(System.getenv(Constants.HADOOP_CONF_DIR) + File.separatorChar + Constants.HDFS_SITE_CONF));
    }
    yarnClient = YarnClient.createYarnClient();
    yarnClient.init(yarnConf);
}
Code example source: org.apache.fluo/fluo-cluster

private synchronized TwillRunnerService getTwillRunner(FluoConfiguration config) {
    if (!twillRunners.containsKey(config.getApplicationName())) {
        YarnConfiguration yarnConfig = new YarnConfiguration();
        yarnConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
        yarnConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/yarn-site.xml"));
        TwillRunnerService twillRunner =
            new YarnTwillRunnerService(yarnConfig, config.getAppZookeepers() + ZookeeperPath.TWILL);
        twillRunner.start();
        twillRunners.put(config.getApplicationName(), twillRunner);
        // sleep to give Twill time to retrieve state from ZooKeeper
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
    return twillRunners.get(config.getApplicationName());
}
Code example sources: apache/fluo, io.fluo/fluo-cluster
(Identical to the org.apache.fluo/fluo-cluster getTwillRunner example above.)
Code example sources: org.apache.flink/flink-yarn_2.11, org.apache.flink/flink-yarn
(Identical to the apache/flink AbstractYarnClusterDescriptor example above.)
Code example source: org.springframework.data/spring-yarn-core (excerpt)

if (resources != null) {
    for (Resource resource : resources) {
        internalConfig.addResource(resource.getURL());
    }
}