
Python MeanShift MemoryError


I'm running a clustering algorithm called MeanShift() from the sklearn.cluster module (here are the docs). The object I'm working with has 310,057 points distributed in 3-dimensional space. The computer I'm running it on has 128 GB of RAM in total, so when I get the following error, I have a hard time believing I'm actually using all of it.

[user@host ~]$ python meanshifttest.py
Traceback (most recent call last):
  File "meanshifttest.py", line 13, in <module>
    ms = MeanShift().fit(X)
  File "/home/user/anaconda/lib/python2.7/site-packages/sklearn/cluster/mean_shift_.py", line 280, in fit
    cluster_all=self.cluster_all)
  File "/home/user/anaconda/lib/python2.7/site-packages/sklearn/cluster/mean_shift_.py", line 99, in mean_shift
    bandwidth = estimate_bandwidth(X)
  File "/home/user/anaconda/lib/python2.7/site-packages/sklearn/cluster/mean_shift_.py", line 45, in estimate_bandwidth
    d, _ = nbrs.kneighbors(X, return_distance=True)
  File "/home/user/anaconda/lib/python2.7/site-packages/sklearn/neighbors/base.py", line 313, in kneighbors
    return_distance=return_distance)
  File "binary_tree.pxi", line 1313, in sklearn.neighbors.kd_tree.BinaryTree.query (sklearn/neighbors/kd_tree.c:10007)
  File "binary_tree.pxi", line 595, in sklearn.neighbors.kd_tree.NeighborsHeap.__init__ (sklearn/neighbors/kd_tree.c:4709)
MemoryError

The code I'm running looks like this:

from sklearn.cluster import MeanShift
import asciitable
import numpy as np
import time

# Read the particle catalogue and pull out the x, y, z position columns.
data = asciitable.read('./multidark_MDR1_FOFID85000000000_ParticlePos.csv', delimiter=',')
x = [data[i][2] for i in range(len(data))]
y = [data[i][3] for i in range(len(data))]
z = [data[i][4] for i in range(len(data))]
X = np.array(zip(x, y, z))  # (n_samples, 3) array of positions (Python 2 zip)

# Time the clustering run.
t0 = time.time()
ms = MeanShift().fit(X)
t1 = time.time()
print str(t1-t0) + " seconds."
labels = ms.labels_
print set(labels)  # the distinct cluster labels found

Does anyone have any ideas about what's happening? Unfortunately I can't switch clustering algorithms, because this is the only one I've found that does a good job while also requiring no linking lengths / number of clusters k / a priori information.

Thanks in advance!

UPDATE: I took a closer look at the documentation, and it says the following:

Scalability:

Because this implementation uses a flat kernel and
a Ball Tree to look up members of each kernel, the complexity will
tend towards O(T*n*log(n)) in lower dimensions, with n the number of
samples and T the number of points. In higher dimensions the
complexity will tend towards O(T*n^2).

Scalability can be boosted by using fewer seeds, for example by using
a higher value of min_bin_freq in the get_bin_seeds function.

Note that the estimate_bandwidth function is much less scalable than
the mean shift algorithm and will be the bottleneck if it is used.

This seems to make some sense, because if you look closely at the error it points at estimate_bandwidth. Is that an indication that I'm simply using too many particles for the algorithm?
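
If seed count is the issue, the documentation's suggestion maps onto the bin_seeding and min_bin_freq arguments of the MeanShift constructor. A minimal sketch (the bandwidth=2.0 and min_bin_freq=5 values are illustrative only, not from the original post):

from sklearn.cluster import MeanShift

# bin_seeding=True seeds the search from a coarse grid of binned points
# (via get_bin_seeds) instead of from every sample; raising min_bin_freq
# discards sparsely populated bins, leaving fewer seeds to iterate.
ms = MeanShift(bandwidth=2.0, bin_seeding=True, min_bin_freq=5).fit(X)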

Best Answer

Judging from the error message, I suspect it is trying to compute all pairwise distances between points, which would mean it needs 310057² floats, or 716 GB of RAM.
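
A quick back-of-the-envelope check of that figure (added for illustration, not part of the original answer):

n = 310057
bytes_needed = n * n * 8        # one float64 per pairwise distance
print bytes_needed / 2.0**30    # ~716 GiB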

You can disable this behavior by giving the MeanShift constructor an explicit bandwidth argument.
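
For example, a minimal sketch using scikit-learn's estimate_bandwidth with its n_samples option (the subsample size of 5000 is an illustrative choice, not from the original answer): estimate the bandwidth on a random subsample, then pass it in explicitly so fit() never runs estimate_bandwidth on all 310,057 points.

from sklearn.cluster import MeanShift, estimate_bandwidth

# Estimate the kernel bandwidth from a random subsample only.
bandwidth = estimate_bandwidth(X, quantile=0.3, n_samples=5000)

# With an explicit bandwidth, MeanShift skips the expensive
# estimate_bandwidth call over the full data set.
ms = MeanShift(bandwidth=bandwidth).fit(X)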

This is arguably a bug; consider filing a bug report for it. (The scikit-learn crew, myself included, has recently been working to get rid of these overly expensive distance computations in various places, but apparently no one looked at mean shift.)

EDIT: the computation above was off by a factor of three, but the memory usage was indeed quadratic. I have just fixed this in the development version of scikit-learn.

Regarding this Python MeanShift MemoryError, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/20104999/
