So I tried to create a Spark session in Python 2.7 using the following:
# Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext

# Create a Spark session
SpSession = SparkSession \
    .builder \
    .master("local[2]") \
    .appName("V2 Maestros") \
    .config("spark.executor.memory", "1g") \
    .config("spark.cores.max", "2") \
    .config("spark.sql.warehouse.dir", "file:///c:/temp/spark-warehouse") \
    .getOrCreate()

# Get the SparkContext from the Spark session
SpContext = SpSession.sparkContext
I get the following error, pointing to the path python\lib\pyspark.zip\pyspark\java_gateway.py:

Exception: Java gateway process exited before sending the driver its port number

I tried looking at the java_gateway.py file, which contains the following:
import atexit
import os
import sys
import select
import signal
import shlex
import socket
import platform
from subprocess import Popen, PIPE

if sys.version >= '3':
    xrange = range

from py4j.java_gateway import java_import, JavaGateway, GatewayClient
from py4j.java_collections import ListConverter

from pyspark.serializers import read_int


# patching ListConverter, or it will convert bytearray into Java ArrayList
def can_convert_list(self, obj):
    return isinstance(obj, (list, tuple, xrange))

ListConverter.can_convert = can_convert_list


def launch_gateway():
    if "PYSPARK_GATEWAY_PORT" in os.environ:
        gateway_port = int(os.environ["PYSPARK_GATEWAY_PORT"])
    else:
        SPARK_HOME = os.environ["SPARK_HOME"]
        # Launch the Py4j gateway using Spark's run command so that we pick up the
        # proper classpath and settings from spark-env.sh
        on_windows = platform.system() == "Windows"
        script = "./bin/spark-submit.cmd" if on_windows else "./bin/spark-submit"
        submit_args = os.environ.get("PYSPARK_SUBMIT_ARGS", "pyspark-shell")
        if os.environ.get("SPARK_TESTING"):
            submit_args = ' '.join([
                "--conf spark.ui.enabled=false",
                submit_args
            ])
        command = [os.path.join(SPARK_HOME, script)] + shlex.split(submit_args)

        # Start a socket that will be used by PythonGatewayServer to communicate its port to us
        callback_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        callback_socket.bind(('127.0.0.1', 0))
        callback_socket.listen(1)
        callback_host, callback_port = callback_socket.getsockname()
        env = dict(os.environ)
        env['_PYSPARK_DRIVER_CALLBACK_HOST'] = callback_host
        env['_PYSPARK_DRIVER_CALLBACK_PORT'] = str(callback_port)

        # Launch the Java gateway.
        # We open a pipe to stdin so that the Java gateway can die when the pipe is broken
        if not on_windows:
            # Don't send ctrl-c / SIGINT to the Java gateway:
            def preexec_func():
                signal.signal(signal.SIGINT, signal.SIG_IGN)
            proc = Popen(command, stdin=PIPE, preexec_fn=preexec_func, env=env)
        else:
            # preexec_fn not supported on Windows
            proc = Popen(command, stdin=PIPE, env=env)

        gateway_port = None
        # We use select() here in order to avoid blocking indefinitely if the subprocess dies
        # before connecting
        while gateway_port is None and proc.poll() is None:
            timeout = 1  # (seconds)
            readable, _, _ = select.select([callback_socket], [], [], timeout)
            if callback_socket in readable:
                gateway_connection = callback_socket.accept()[0]
                # Determine which ephemeral port the server started on:
                gateway_port = read_int(gateway_connection.makefile(mode="rb"))
                gateway_connection.close()
                callback_socket.close()
        if gateway_port is None:
            raise Exception("Java gateway process exited before sending the driver its port number")

        # In Windows, ensure the Java child processes do not linger after Python has exited.
        # In UNIX-based systems, the child process can kill itself on broken pipe (i.e. when
        # the parent process' stdin sends an EOF). In Windows, however, this is not possible
        # because java.lang.Process reads directly from the parent process' stdin, contending
        # with any opportunity to read an EOF from the parent. Note that this is only best
        # effort and will not take effect if the python process is violently terminated.
        if on_windows:
            # In Windows, the child process here is "spark-submit.cmd", not the JVM itself
            # (because the UNIX "exec" command is not available). This means we cannot simply
            # call proc.kill(), which kills only the "spark-submit.cmd" process but not the
            # JVMs. Instead, we use "taskkill" with the tree-kill option "/t" to terminate all
            # child processes in the tree (http://technet.microsoft.com/en-us/library/bb491009.aspx)
            def killChild():
                Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)])
            atexit.register(killChild)

    # Connect to the gateway
    gateway = JavaGateway(GatewayClient(port=gateway_port), auto_convert=True)

    # Import the classes used by PySpark
    java_import(gateway.jvm, "org.apache.spark.SparkConf")
    java_import(gateway.jvm, "org.apache.spark.api.java.*")
    java_import(gateway.jvm, "org.apache.spark.api.python.*")
    java_import(gateway.jvm, "org.apache.spark.ml.python.*")
    java_import(gateway.jvm, "org.apache.spark.mllib.api.python.*")
    # TODO(davies): move into sql
    java_import(gateway.jvm, "org.apache.spark.sql.*")
    java_import(gateway.jvm, "org.apache.spark.sql.hive.*")
    java_import(gateway.jvm, "scala.Tuple2")

    return gateway
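Since launch_gateway() discards the reason the JVM died, one way to debug this is to run the same spark-submit command it builds directly and watch the console: the gateway server itself will complain about the missing callback port, but Java or classpath failures, the usual cause of this exception, show up first. A rough sketch, where the SPARK_HOME value is an assumption matching the setup shown further down:

import os
import subprocess

# Assumed install path -- adjust to your machine.
os.environ.setdefault("SPARK_HOME", "E:/Udemy - Spark/Apache Spark/spark-2.0.0-bin-hadoop2.7")

# Mirror the command launch_gateway() builds on Windows so the JVM's own
# startup errors print straight to the console.
command = [os.path.join(os.environ["SPARK_HOME"], "bin", "spark-submit.cmd"), "pyspark-shell"]
subprocess.call(command)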
I am new to Spark and PySpark, so I cannot debug this on my own. I also tried some other suggestions: Spark + Python - Java gateway process exited before sending the driver its port number? and Pyspark: Exception: Java gateway process exited before sending the driver its port number, but so far nothing has resolved the problem. Please help!
This is what the Spark environment script looks like:
# This script loads spark-env.sh if it exists, and ensures it is only loaded once.
# spark-env.sh is loaded from SPARK_CONF_DIR if set, or within the current directory's
# conf/ subdirectory.

# Figure out where Spark is installed
if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

if [ -z "$SPARK_ENV_LOADED" ]; then
  export SPARK_ENV_LOADED=1

  # Returns the parent of the directory this script lives in.
  parent_dir="${SPARK_HOME}"

  user_conf_dir="${SPARK_CONF_DIR:-"$parent_dir"/conf}"

  if [ -f "${user_conf_dir}/spark-env.sh" ]; then
    # Promote all variable declarations to environment (exported) variables
    set -a
    . "${user_conf_dir}/spark-env.sh"
    set +a
  fi
fi

# Setting SPARK_SCALA_VERSION if not already set.
if [ -z "$SPARK_SCALA_VERSION" ]; then
  ASSEMBLY_DIR2="${SPARK_HOME}/assembly/target/scala-2.11"
  ASSEMBLY_DIR1="${SPARK_HOME}/assembly/target/scala-2.10"

  if [[ -d "$ASSEMBLY_DIR2" && -d "$ASSEMBLY_DIR1" ]]; then
    echo -e "Presence of build for both scala versions(SCALA 2.10 and SCALA 2.11) detected." 1>&2
    echo -e 'Either clean one of them or, export SPARK_SCALA_VERSION=2.11 in spark-env.sh.' 1>&2
    exit 1
  fi

  if [ -d "$ASSEMBLY_DIR2" ]; then
    export SPARK_SCALA_VERSION="2.11"
  else
    export SPARK_SCALA_VERSION="2.10"
  fi
fi
And here is how my Spark environment is set up in Python:
import os
import sys

# NOTE: Please change the folder paths to your current setup.
# Windows
if sys.platform.startswith('win'):
    # Where you downloaded the resource bundle
    os.chdir("E:/Udemy - Spark/SparkPythonDoBigDataAnalytics-Resources")
    # Where you installed Spark
    os.environ['SPARK_HOME'] = 'E:/Udemy - Spark/Apache Spark/spark-2.0.0-bin-hadoop2.7'
# other platforms - linux/mac
else:
    os.chdir("/Users/kponnambalam/Dropbox/V2Maestros/Modules/Apache Spark/Python")
    os.environ['SPARK_HOME'] = '/users/kponnambalam/products/spark-2.0.0-bin-hadoop2.7'

# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']

# Add the following paths to the system path. Please check your installation
# to make sure that these zip files actually exist. The names might change
# as versions change.
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "pyspark.zip"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "lib", "py4j-0.10.1-src.zip"))

# Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext
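Since sys.path.insert does not complain when a zip file is missing, it is worth verifying that the pyspark and py4j archives actually exist at those paths before importing; a wrong py4j version in the file name is a common way for this setup to fail silently. A minimal sketch, assuming SPARK_HOME is already set as above:

import os

SPARK_HOME = os.environ['SPARK_HOME']
# The py4j archive name below matches the path used above; check your
# SPARK_HOME/python/lib folder for the exact version on your install.
for zip_name in ("pyspark.zip", "py4j-0.10.1-src.zip"):
    zip_path = os.path.join(SPARK_HOME, "python", "lib", zip_name)
    if not os.path.exists(zip_path):
        raise IOError("missing archive: %s" % zip_path)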
Best answer
After reading many posts, I finally got Spark running on my Windows laptop. I use Anaconda Python, but I am sure this will work with a standard distribution too.

First you need to make sure you can run Spark standalone. My assumptions are that you have a valid Python installation on the path and that Java is installed. For Java, I had "C:\ProgramData\Oracle\Java\javapath" defined in my PATH, which redirects to my Java 8 bin folder.
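To make that Java assumption easy to check, here is a tiny sketch (not part of the original answer) that verifies the java launcher resolves from Python, which is the same resolution PySpark's gateway relies on:

import subprocess

# Raises an error if "java" is not on the PATH -- the gateway launch
# would fail for the same reason.
subprocess.check_call(["java", "-version"])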
Go to %SPARK_HOME%\bin and try to run pyspark, the Python Spark shell. If your environment is like mine, you will see exceptions about winutils and hadoop not being found. The second exception is about missing Hive:
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
I then found and simply followed https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-tips-and-tricks-running-spark-windows.html, specifically:
winutils.exe chmod -R 777 C:\tmp\hive
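For context, the guide's fix amounts to putting winutils.exe where Spark can find it and making the Hive scratch directory writable. A minimal sketch of those steps in Python; the HADOOP_HOME path is an assumption about a typical layout, not something from the original answer:

import os
import subprocess

# Assumed winutils location -- download the winutils.exe build matching
# your Hadoop version and adjust HADOOP_HOME accordingly.
os.environ["HADOOP_HOME"] = "C:/hadoop"  # expects C:/hadoop/bin/winutils.exe

# Create the Hive scratch directory and open up its permissions,
# mirroring the chmod command above.
if not os.path.isdir("C:/tmp/hive"):
    os.makedirs("C:/tmp/hive")
subprocess.check_call([
    os.path.join(os.environ["HADOOP_HOME"], "bin", "winutils.exe"),
    "chmod", "-R", "777", "C:/tmp/hive",
])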
I hope this helps, and that you can enjoy running Spark code locally.
Regarding "java - Exception: Java gateway process exited before sending the driver its port number while creating a Spark Session in Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/43863569/