
python - Computing a custom function on a numpy array causes UnpicklingError: NEWOBJ class argument has NULL tp_new

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 16:24:14

I've run into a very strange problem: computing a distance matrix with scipy.spatial works fine, but a custom distance-matrix function causes a Spark error.

My data looks like this:

33.848366,-84.3733852,A,1234
33.848237299999994,-84.37318470000001,A,1234
33.8488057,-84.3731556,A,1234
33.847644200000005,-84.3727751,A,1234
33.84840429999999,-84.3732269,A,1234
33.849072899999996,-84.37342070000001,A,1234
33.8428191,-84.38306340000001,A,1234
33.842778499999994,-84.3830113,A,1234
33.8394582,-84.3770177,A,1234
33.847117299999994,-84.365351,A,1234

My fully reproducible code looks like this:

from pyspark import SparkContext

import pandas as pd
import numpy as np

from sklearn.cluster import DBSCAN
from math import radians, cos, sin, asin, sqrt
from scipy.spatial.distance import pdist, squareform

# This function taken from another StackOverflow post (modified radius only)
def distHaversine(pos1, pos2, r = 6378137):
    pos1 = pos1 * np.pi / 180
    pos2 = pos2 * np.pi / 180
    cos_lat1 = np.cos(pos1[..., 0])
    cos_lat2 = np.cos(pos2[..., 0])
    cos_lat_d = np.cos(pos1[..., 0] - pos2[..., 0])
    cos_lon_d = np.cos(pos1[..., 1] - pos2[..., 1])
    return r * np.arccos(cos_lat_d - cos_lat1 * cos_lat2 * (1 - cos_lon_d))

def myFunc(x):
    points = pd.DataFrame(list(x[1]))
    points.columns = ['lat', 'lon']
    ## PROBLEM LINE: UNCOMMENTING THIS LINE AND COMMENTING OUT THE TWO BELOW RESULTS IN THE ERROR ##
    # pointsDistMatrix = distHaversine(np.array(points)[:, None], np.array(points))
    pointsDistMatrix = pdist(points)
    pointsDistMatrix = squareform(pointsDistMatrix)
    db = DBSCAN(eps = 75, min_samples = 3, metric = 'precomputed',
                algorithm = 'kd_tree').fit(pointsDistMatrix)
    points['cluster'] = db.labels_
    return ((x[0], [tuple(x) for x in points.values]))

# `sc` is the SparkContext provided by the pyspark shell
textFile = sc.textFile('df.csv')
processedGeoData = textFile \
    .map(lambda x: x.split(',')) \
    .map(lambda x: ((str(x[3]), str(x[2])),
                    (float(x[0]), float(x[1])))) \
    .groupByKey() \
    .sortByKey(False) \
    .map(myFunc)

processedGeoData.collect()
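For reference (my own sketch, not from the original post), the parsing and grouping steps can be emulated without Spark, which confirms that the key/value bookkeeping is not the problem; the sample rows and key layout below are taken from the question's data:

```python
from collections import defaultdict

lines = [
    "33.848366,-84.3733852,A,1234",
    "33.8488057,-84.3731556,A,1234",
]

# emulate .map(x.split(',')) -> ((id, letter), (lat, lon)) -> groupByKey()
grouped = defaultdict(list)
for line in lines:
    lat, lon, letter, ident = line.split(',')
    grouped[(ident, letter)].append((float(lat), float(lon)))

print(grouped[("1234", "A")])
```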

The error I get is this:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 50.0 failed 1 times, most recent failure: Lost task 0.0 in stage 50.0 (TID 70, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/Cellar/apache-spark/1.5.2/libexec/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/usr/local/Cellar/apache-spark/1.5.2/libexec/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/usr/local/Cellar/apache-spark/1.5.2/libexec/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
UnpicklingError: NEWOBJ class argument has NULL tp_new

Any idea what's going on? Why does the custom matrix not work while the scipy.spatial one does?
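The function itself is numerically sound. A quick check outside Spark (my own sketch, not from the post) shows the broadcasting trick producing a full distance matrix, with the expected quarter-circumference for two equatorial points 90° of longitude apart:

```python
import numpy as np

def distHaversine(pos1, pos2, r=6378137):
    # spherical law of cosines, vectorized over trailing (lat, lon) axes
    pos1 = pos1 * np.pi / 180
    pos2 = pos2 * np.pi / 180
    cos_lat1 = np.cos(pos1[..., 0])
    cos_lat2 = np.cos(pos2[..., 0])
    cos_lat_d = np.cos(pos1[..., 0] - pos2[..., 0])
    cos_lon_d = np.cos(pos1[..., 1] - pos2[..., 1])
    return r * np.arccos(cos_lat_d - cos_lat1 * cos_lat2 * (1 - cos_lon_d))

# two points on the equator, 90 degrees of longitude apart
pts = np.array([[0.0, 0.0], [0.0, 90.0]])
d = distHaversine(pts[:, None], pts)  # broadcasting -> 2x2 distance matrix
assert d.shape == (2, 2)
assert d[0, 0] == 0.0
assert abs(d[0, 1] - 6378137 * np.pi / 2) < 1.0  # a quarter of a great circle
```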

Here are the versions of the different packages I'm using:

Python 2.7.10
numpy==1.9.2
pandas==0.16.0rc1-22-g96aa9cb
scikit-learn==0.15.2
scipy==0.15.1
pyspark==1.5.2

Best Answer

To make this work, create a module haversine.py:

import pandas as pd
import numpy as np

def distHaversine(pos1, pos2, r = 6378137):
    pos1 = pos1 * np.pi / 180
    pos2 = pos2 * np.pi / 180
    cos_lat1 = np.cos(pos1[..., 0])
    cos_lat2 = np.cos(pos2[..., 0])
    cos_lat_d = np.cos(pos1[..., 0] - pos2[..., 0])
    cos_lon_d = np.cos(pos1[..., 1] - pos2[..., 1])
    return r * np.arccos(cos_lat_d - cos_lat1 * cos_lat2 * (1 - cos_lon_d))

and distribute it (--py-files / sc.addPyFile). Then import distHaversine:

>>> from haversine import distHaversine

and you're all set.
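The likely reason this helps (my reading, not stated explicitly in the answer): PySpark has to ship myFunc, and with it distHaversine, to the workers. A function defined in the driver shell lives in `__main__` and must be serialized by value, which can trip over objects whose classes cannot be re-instantiated on the worker (hence the NULL tp_new error). A function that lives in an importable module is pickled merely by its module and name and re-imported on the worker. Plain pickle shows this by-reference behavior:

```python
import pickle
import math

# a module-level function pickles as just (module, name) ...
payload = pickle.dumps(math.sqrt)
assert b"math" in payload and b"sqrt" in payload

# ... and unpickling re-imports it rather than rebuilding its code
assert pickle.loads(payload) is math.sqrt
assert pickle.loads(payload)(9.0) == 3.0
```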

Regarding "python - Computing a custom function on a numpy array causes UnpicklingError: NEWOBJ class argument has NULL tp_new", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38133935/
