python - Unable to import Pandas on Spark workers


I have been able to set up a virtualenv and install the libraries I need on every worker node of my Spark cluster, but I still cannot import pandas:

Traceback (most recent call last):
  File "/scratch/nas/2/larissa/ExperimentsML/app/experimenter/sklearn/sklearn-spark-tests.py", line 209, in <module>
    main()
  File "/scratch/nas/2/larissa/ExperimentsML/app/experimenter/sklearn/sklearn-spark-tests.py", line 202, in main
    print sc.parallelize(experiments).map(lambda experiment: run_experiment(df, input_dict, experiment)).collect()
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 808, in collect
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 79 in stage 0.0 failed 4 times, most recent failure: Lost task 79.3 in stage 0.0 (TID 82, 172.18.8.2, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/scratch/1/larissa/spark-2.1.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/scratch/nas/2/larissa/ExperimentsML/app/experimenter/sklearn/sklearn-spark-tests.py", line 202, in <lambda>
    print sc.parallelize(experiments).map(lambda experiment: run_experiment(df, input_dict, experiment)).collect()
  File "/scratch/nas/2/larissa/ExperimentsML/app/experimenter/sklearn/sklearn-spark-tests.py", line 61, in run_experiment
    import pandas
ImportError: No module named pandas

I have this code:

sc.parallelize(experiments).map(lambda experiment: run_experiment(df, input_dict, experiment)).collect()

Inside run_experiment I have the following imports:

def run_experiment(df, input_dict, experiment):
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC
    from sklearn import datasets, linear_model, tree
    from sklearn.preprocessing import *
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier
    from sklearn import metrics
    from sklearn.metrics import *

    import pandas

    ...

I only get the error for pandas, so scikit-learn was installed successfully.
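As a quick check (a minimal sketch, assuming sc is the already-created SparkContext), each executor can be asked which Python interpreter it runs and whether pandas is importable there:

import sys

def probe(_):
    # Report this executor's interpreter path and whether pandas can be imported.
    try:
        import pandas
        pandas_info = pandas.__version__
    except ImportError as e:
        pandas_info = "ImportError: %s" % e
    return (sys.executable, pandas_info)

# Spread a handful of probe tasks across partitions so they land on the executors.
print sc.parallelize(range(100), 100).map(probe).distinct().collect()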

In my install_virtualenv.sh script I install the following libraries, and according to the logs they were all installed correctly.

pip install --upgrade pip
pip install urllib3[secure]
pip install pyopenssl ndg-httpsclient pyasn1
pip install cloudpickle==0.2.2
pip install distributed==1.16.1
pip install joblib==0.11
pip install scipy==0.18.1
pip install numpy==1.12.0
#pip install pandas==0.19.2
#pip install pandas --upgrade
pip install pickleshare==0.7.4
pip install py4j==0.10.4
pip install python-dateutil==2.6.0
pip install --upgrade requests
pip install scikit-learn==0.18.1
pip install sklearn==0.0
pip install sklearn-pandas==1.3.0
pip install spark-sklearn==0.2.0
easy_install pandas
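Since easy_install may belong to the system Python rather than the virtualenv, it is worth confirming where pandas actually ended up. A minimal sketch, to be run with the virtualenv's interpreter on a worker node (the path below is only a placeholder):

# run as: /path/to/venv/bin/python check_pandas.py
import sys

try:
    import pandas
    print "pandas %s imported from %s" % (pandas.__version__, pandas.__file__)
except ImportError as e:
    print "pandas NOT importable with %s: %s" % (sys.executable, e)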

With pip list I get:

asn1crypto (0.22.0)
backports-abc (0.5)
backports.ssl-match-hostname (3.5.0.1)
certifi (2017.4.17)
cffi (1.10.0)
chardet (3.0.4)
click (6.7)
cloudpickle (0.2.2)
cryptography (1.9)
dask (0.14.1)
dask-searchcv (0.0.2)
distributed (1.16.1)
enum34 (1.1.6)
futures (3.1.1)
HeapDict (1.0.0)
idna (2.5)
ipaddress (1.0.18)
joblib (0.11)
msgpack-python (0.4.8)
ndg-httpsclient (0.4.2)
numpy (1.12.0)
pandas (0.19.2)
pathlib2 (2.2.1)
pickleshare (0.7.4)
pip (9.0.1)
psutil (5.2.2)
py4j (0.10.4)
pyasn1 (0.2.3)
pycparser (2.17)
pyOpenSSL (17.0.0)
python-dateutil (2.6.0)
pytz (2017.2)
requests (2.17.3)
scandir (1.5)
scikit-learn (0.18.1)
scipy (0.18.1)
setuptools (2.2)
singledispatch (3.4.0.3)
six (1.10.0)
sklearn (0.0)
sklearn-pandas (1.3.0)
sortedcollections (0.5.3)
sortedcontainers (1.5.7)
spark-sklearn (0.2.0)
tblib (1.3.2)
toolz (0.8.2)
tornado (4.5.1)
urllib3 (1.21.1)
zict (0.1.2)

As you can see, I even tried using easy_install, but I still cannot import pandas. Any idea why this is happening?

Thanks!

Best Answer

It sounds like your Spark workers are pointing to the default/system installation of Python (e.g. /usr/bin/python) instead of your virtualenv. You can tell Spark to use your virtualenv by setting the PYSPARK_PYTHON environment variable, e.g. export PYSPARK_PYTHON=/path/to/my/python
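A minimal sketch of the same idea from inside the driver script (assuming the virtualenv exists at the same path on every worker node; /path/to/venv/bin/python is only a placeholder):

import os

# Must be set before the SparkContext is created: PySpark captures this value
# on the driver and uses it to launch the Python workers on the executors.
os.environ["PYSPARK_PYTHON"] = "/path/to/venv/bin/python"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/path/to/venv/bin/python"  # optional, keeps the driver consistent

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("sklearn-spark-tests")
sc = SparkContext(conf=conf)

Alternatively, the variable can be exported in conf/spark-env.sh on the worker nodes before restarting the cluster.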

Regarding "python - Unable to import Pandas on Spark workers", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/44444544/
