
python - Removing noise from a plot with DBSCAN


Using DBSCAN,

DBSCAN(eps=epsilon, min_samples=10, algorithm='ball_tree', metric='haversine')

I have clustered a list of latitude and longitude pairs, which I then plotted with matplotlib. The plot includes the "noise" coordinates, i.e. the points that were not assigned to any of the 270 clusters created. I would like to eliminate the noise from the plot and only draw the clusters that meet the specified requirements, but I am not sure how to do that. How can I exclude the noise (again, the points that were not assigned to a cluster)?

Below is the code I used to cluster and plot:

import time

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.cluster import DBSCAN
from sklearn import metrics

df = pd.read_csv('xxx.csv')

# define the number of kilometers in one radian,
# which will be used to convert eps from km to radians
kms_per_rad = 6371.0088

# define a function to calculate the geographic coordinate
# centroid of a cluster of geographic points;
# it will be used later to calculate the centroids of the DBSCAN clusters,
# because the scikit-learn DBSCAN class does not come with a centroid attribute.
def get_centroid(cluster):
    """Calculate the centroid of a cluster of geographic coordinate points.

    Args:
        cluster: coordinates, nx2 array-like (array, list of lists, etc.);
                 n is the number of (latitude, longitude) points in the cluster.
    Returns:
        geographic centroid of the cluster
    """
    cluster_ary = np.asarray(cluster)
    centroid = cluster_ary.mean(axis=0)
    return centroid

# convert eps to radians for use by haversine
epsilon = 0.1/kms_per_rad #1.5=1.5km 1=1km 0.5=500m 0.25=250m 0.1=100m

# extract the tweet coordinates (latitude, longitude) as a NumPy array
tweet_coords = df[['latitude', 'longitude']].to_numpy()

start_time = time.time()
dbsc = (DBSCAN(eps=epsilon, min_samples=10, algorithm='ball_tree', metric='haversine')
        .fit(np.radians(tweet_coords)))

tweet_cluster_labels = dbsc.labels_

# get the number of clusters
num_clusters = len(set(dbsc.labels_))

# print the outcome
message = 'Clustered {:,} points down to {:,} clusters, for {:.1f}% compression in {:,.2f} seconds'
print(message.format(len(df), num_clusters, 100*(1 - float(num_clusters) / len(df)), time.time()-start_time))
print('Silhouette coefficient: {:0.03f}'.format(metrics.silhouette_score(tweet_coords, tweet_cluster_labels)))

# turn the clusters into a pandas Series, where each element is a cluster of points
dbsc_clusters = pd.Series([tweet_coords[tweet_cluster_labels==n] for n in range(num_clusters)])

# get centroid of each cluster
cluster_centroids = dbsc_clusters.map(get_centroid)
# unzip the list of centroid points (lat, lon) tuples into separate lat and lon lists
cent_lats, cent_lons = zip(*cluster_centroids)
# from these lats/lons create a new df of one representative point for each cluster
centroids_df = pd.DataFrame({'longitude':cent_lons, 'latitude':cent_lats})
#print centroids_df

# Plot the clusters and cluster centroids
fig, ax = plt.subplots(figsize=[20, 12])
tweet_scatter = ax.scatter(df['longitude'], df['latitude'], c=tweet_cluster_labels, cmap = cm.hot, edgecolor='None', alpha=0.25, s=50)
centroid_scatter = ax.scatter(centroids_df['longitude'], centroids_df['latitude'], marker='x', linewidths=2, c='k', s=50)
ax.set_title('Tweet Clusters & Cluster Centroids', fontsize = 30)
ax.set_xlabel('Longitude', fontsize=24)
ax.set_ylabel('Latitude', fontsize = 24)
ax.legend([tweet_scatter, centroid_scatter], ['Tweets', 'Tweets Cluster Centroids'], loc='upper right', fontsize = 20)
plt.show()
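
One detail worth flagging in the listing above: len(set(dbsc.labels_)) counts the noise label -1 as if it were a cluster whenever noise points exist. A minimal sketch of the usual correction (following the scikit-learn DBSCAN example), assuming the dbsc object fitted above:

labels = dbsc.labels_
# subtract one if the noise label -1 is present, so only real clusters are counted
num_real_clusters = len(set(labels)) - (1 if -1 in labels else 0)
# number of points DBSCAN marked as noise
num_noise_points = list(labels).count(-1)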

[Image: cluster_small_scale]

[Image: cluster_large_scale]

The black points are the noise, i.e. the points that were not added to any cluster defined by the DBSCAN inputs; the colored points are the clusters. My goal is to visualize only the clusters.

Best Answer

Store the labels in an additional column of the original DataFrame:

df['tweet_cluster_labels'] = tweet_cluster_labels

Filter the DataFrame so that it only contains the non-noise points (noise samples are given the label -1):

df_filtered = df[df.tweet_cluster_labels>-1]

and then plot just those points:

tweet_scatter = ax.scatter(df_filtered['longitude'],
                           df_filtered['latitude'],
                           c=df_filtered.tweet_cluster_labels,
                           cmap=cm.hot, edgecolor='None', alpha=0.25, s=50)
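
Putting the pieces together, here is a minimal sketch (assuming the df_filtered built above, plus the tweet_coords, tweet_cluster_labels, and get_centroid objects defined in the question) that skips the noise label in both the centroid calculation and the plotting, so only real clusters appear in the figure:

import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm

# build the per-cluster point sets from the real cluster labels only (noise = -1 is skipped)
real_labels = sorted(set(tweet_cluster_labels) - {-1})
dbsc_clusters = pd.Series([tweet_coords[tweet_cluster_labels == n] for n in real_labels])

# centroids of the noise-free clusters, using the question's helper
cluster_centroids = dbsc_clusters.map(get_centroid)
cent_lats, cent_lons = zip(*cluster_centroids)
centroids_df = pd.DataFrame({'longitude': cent_lons, 'latitude': cent_lats})

# scatter only the non-noise points and their cluster centroids
fig, ax = plt.subplots(figsize=[20, 12])
ax.scatter(df_filtered['longitude'], df_filtered['latitude'],
           c=df_filtered.tweet_cluster_labels, cmap=cm.hot,
           edgecolor='None', alpha=0.25, s=50)
ax.scatter(centroids_df['longitude'], centroids_df['latitude'],
           marker='x', linewidths=2, c='k', s=50)
plt.show()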

On python - Removing noise from a plot with DBSCAN, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/43172715/
