I am currently using Java (and the YARN client) to submit jobs to a YARN cluster in an Ubuntu/Linux environment. When I submit a Java program, everything works fine. When I submit a Python program, it appears to stall in the ACCEPTED state and eventually errors out.
Here is the code I use to submit the program:
import org.apache.spark.SparkConf;
import org.apache.spark.deploy.yarn.Client;
import org.apache.spark.deploy.yarn.ClientArguments;
import org.apache.hadoop.conf.Configuration;
import org.apache.log4j.Logger;
/**
* This class submits a SparkPi to a YARN from a Java client (as opposed
* to submitting a Spark job from a shell command line using spark-submit).
*
* To accomplish submitting a Spark job from a Java client, we use
* the org.apache.spark.deploy.yarn.Client class described below:
*
Usage: org.apache.spark.deploy.yarn.Client [options]
Options:
--jar JAR_PATH Path to your application's JAR file (required in yarn-cluster mode)
--class CLASS_NAME Name of your application's main class (required)
--primary-py-file A main Python file
--arg ARG Argument to be passed to your application's main class.
Multiple invocations are possible, each will be passed in order.
--num-executors NUM Number of executors to start (Default: 2)
--executor-cores NUM Number of cores per executor (Default: 1).
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)
--driver-cores NUM Number of cores used by the driver (Default: 1).
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G)
--name NAME The name of your application (Default: Spark)
--queue QUEUE The hadoop queue to use for allocation requests (Default: 'default')
--addJars jars Comma separated list of local jars that want SparkContext.addJar to work with.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
--files files Comma separated list of files to be distributed with the job.
--archives archives Comma separated list of archives to be distributed with the job.
How to call this program example:
export SPARK_HOME="/opt/spark/spark-1.6.0"
java -DSPARK_HOME="$SPARK_HOME" org.dataalgorithms.client.SubmitPyYARNJobFromJava 10
*/
public class SubmitPyYARNJobFromJava {

    public static void main(String[] args) throws Exception {
        long startTime = System.currentTimeMillis();
        // this is passed to the SparkPi program
        String slices = args[0];
        // String slices = "15";
        // String SPARK_HOME = System.getProperty("SPARK_HOME");
        String SPARK_HOME = "/opt/spark/spark-1.6.0";
        //
        pi(SPARK_HOME, slices); // ... the code being measured ...
        //
        long elapsedTime = System.currentTimeMillis() - startTime;
    }

    static void pi(String SPARK_HOME, String slices) throws Exception {
        //
        String[] args = new String[]{
            // application name
            "--name",
            "SparkPi-Python",
            // Python program
            "--primary-py-file",
            SPARK_HOME + "/examples/src/main/python/pi.py",
            // number of executors
            "--num-executors",
            "2",
            // driver memory
            "--driver-memory",
            "512m",
            // executor memory
            "--executor-memory",
            "512m",
            // executor cores
            "--executor-cores",
            "2",
            // argument 1 to my Spark program
            "--arg",
            slices,
            // argument 2 to my Spark program (helper argument to create a proper JavaSparkContext object)
            "--arg",
            "yarn-cluster"
        };
        Configuration config = new Configuration();
        //
        System.setProperty("SPARK_YARN_MODE", "true");
        //
        SparkConf sparkConf = new SparkConf();
        ClientArguments clientArgs = new ClientArguments(args, sparkConf);
        Client client = new Client(clientArgs, config, sparkConf);
        client.run();
        // done!
    }
}
I invoke the code from the command line as follows:
java -cp *:. SubmitPyYARNJobFromJava 10
The pi.py program is the standard example shipped with Spark 1.6.0 (built for Hadoop 2.6.0).
from __future__ import print_function
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
from random import random
from operator import add
from pyspark import SparkContext
if __name__ == "__main__":
    """
    Usage: pi [partitions]
    """
    sc = SparkContext(appName="PythonPi")
    partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    n = 100000 * partitions

    def f(_):
        x = random() * 2 - 1
        y = random() * 2 - 1
        return 1 if x ** 2 + y ** 2 < 1 else 0

    count = sc.parallelize(range(1, n + 1), partitions).map(f).reduce(add)
    print("Pi is roughly %f" % (4.0 * count / n))
    sc.stop()
After submitting the job, it looks as if it will submit correctly: it reaches the ACCEPTED state and then stalls.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/02/12 16:15:24 INFO Client: Requesting a new application from cluster with 1 NodeManagers
16/02/12 16:15:24 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
16/02/12 16:15:24 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/02/12 16:15:24 INFO Client: Setting up container launch context for our AM
16/02/12 16:15:24 INFO Client: Setting up the launch environment for our AM container
16/02/12 16:15:24 INFO Client: Preparing resources for our AM container
16/02/12 16:15:25 INFO Client: Source and destination file systems are the same. Not copying file:/home/shunley/workspace/rabbitmq_java_rpc/spark-assembly-1.6.0-hadoop2.6.0.jar
16/02/12 16:15:25 INFO Client: Source and destination file systems are the same. Not copying file:/tmp/spark-7dbbb73f-e5bc-4fc1-a535-02a60cb68b16/__spark_conf__6244658246692860568.zip
16/02/12 16:15:25 INFO SecurityManager: Changing view acls to: shunley
16/02/12 16:15:25 INFO SecurityManager: Changing modify acls to: shunley
16/02/12 16:15:25 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(shunley); users with modify permissions: Set(shunley)
16/02/12 16:15:26 INFO Client: Submitting application 8 to ResourceManager
16/02/12 16:15:26 INFO YarnClientImpl: Submitted application application_1455307995259_0008
16/02/12 16:15:27 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:27 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1455311726233
final status: UNDEFINED
tracking URL: http://shunley-VirtualBox:8088/proxy/application_1455307995259_0008/
user: shunley
16/02/12 16:15:28 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:29 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:30 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:31 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:32 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:33 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
16/02/12 16:15:34 INFO Client: Application report for application_1455307995259_0008 (state: ACCEPTED)
It then eventually fails with the following error:
16/02/12 16:43:56 INFO Client: Application report for application_1455307995259_0009 (state: FAILED)
16/02/12 16:43:56 INFO Client:
client token: N/A
diagnostics: Application application_1455307995259_0009 failed 2 times due to AM Container for appattempt_1455307995259_0009_000002 exited with exitCode: 10
For more detailed output, check application tracking page:http://shunley-VirtualBox:8088/proxy/application_1455307995259_0009/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1455307995259_0009_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1455313224060
final status: FAILED
tracking URL: http://shunley-VirtualBox:8088/cluster/app/application_1455307995259_0009
user: shunley
Exception in thread "main" org.apache.spark.SparkException: Application application_1455307995259_0009 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1029)
at SubmitPyYARNJobFromJava.pi(SubmitPyYARNJobFromJava.java:101)
at SubmitPyYARNJobFromJava.main(SubmitPyYARNJobFromJava.java:52)
16/02/12 16:43:56 INFO ShutdownHookManager: Shutdown hook called
16/02/12 16:43:56 INFO ShutdownHookManager: Deleting directory /tmp/spark-f5f15d4f-7383-4a97-b2ff-5734148d8a29
I have searched all over Google for anything similar, without success. Has anyone seen something like this before? I need to be able to submit both Python and Java applications to YARN from code. So far, Python is the only one that does not work: I can submit Java and Scala (I have not tried R yet), but the Python that our data scientists use for machine learning fails.
Any help, or pointers toward help, would be greatly appreciated!
Thanks.
Best Answer
Your client arguments are missing "--class" and "--py-files".
To submit a Python script, the class should be "org.apache.spark.deploy.PythonRunner". In addition, the pyspark library and py4j should be attached so the driver can import Spark correctly.
So your client configuration should look like this:
String[] args = new String[]{
    // application name
    "--name",
    "SparkPi-Python",
    "--class",
    "org.apache.spark.deploy.PythonRunner",
    "--py-files",
    SPARK_HOME + "/python/lib/pyspark.zip," + SPARK_HOME + "/python/lib/py4j-0.9-src.zip",
    // Python program
    "--primary-py-file",
    SPARK_HOME + "/examples/src/main/python/pi.py",
    // number of executors
    "--num-executors",
    "2",
    // driver memory
    "--driver-memory",
    "512m",
    // executor memory
    "--executor-memory",
    "512m",
    // executor cores
    "--executor-cores",
    "2",
    // argument 1 to my Spark program
    "--arg",
    slices,
    // argument 2 to my Spark program (helper argument to create a proper JavaSparkContext object)
    "--arg",
    "yarn-cluster"
};
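One practical wrinkle with hard-coding the py4j zip path above: its file name changes between Spark releases (py4j-0.9-src.zip ships with Spark 1.6.0, but other versions bundle a differently numbered zip). A minimal sketch of a helper that builds the "--py-files" value by scanning $SPARK_HOME/python/lib instead; the class and method names here are my own for illustration, not part of Spark:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PySparkLibLocator {

    // Builds the comma-separated --py-files value by locating pyspark.zip
    // and whatever py4j-*-src.zip this Spark distribution ships with.
    public static String pyFilesArg(String sparkHome) {
        File lib = new File(sparkHome, "python/lib");
        File pyspark = new File(lib, "pyspark.zip");
        File py4j = null;
        File[] entries = lib.listFiles();
        if (entries != null) {
            for (File f : entries) {
                String name = f.getName();
                if (name.startsWith("py4j-") && name.endsWith("-src.zip")) {
                    py4j = f;
                }
            }
        }
        if (!pyspark.isFile() || py4j == null) {
            throw new IllegalStateException(
                "pyspark.zip or py4j-*-src.zip not found in " + lib);
        }
        return pyspark.getAbsolutePath() + "," + py4j.getAbsolutePath();
    }

    public static void main(String[] args) throws IOException {
        // Demo against a fake SPARK_HOME layout in a temp directory,
        // so the sketch runs without a real Spark installation.
        Path home = Files.createTempDirectory("spark-home");
        Path lib = Files.createDirectories(home.resolve("python/lib"));
        Files.createFile(lib.resolve("pyspark.zip"));
        Files.createFile(lib.resolve("py4j-0.9-src.zip"));
        System.out.println(pyFilesArg(home.toString()));
    }
}
```

The returned string can be dropped in as the value after "--py-files" in the args array, so upgrading Spark does not silently break the submission.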
Regarding "java - Problem submitting a Python application to YARN from Java code", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35373367/