
python - Speeding up a slow nested loop over rows in a dataframe?


cluster_name size is 101,878

maxpath size is 1,508,931

The dataframe looks like this:

| cluster_name | maxpath             | chrom_pos          |
|--------------+---------------------+--------------------|
| cluster_1007 | 5_G,6_C,7_A,8_A,9_T | chr11:611117-799999|
| cluster_1007 | 5_G,6_C,7_A,8_A,9_T | chr11:611117-799999|
| cluster_1007 | 3_G,4_C,5_A,6_A,7_T | chr12:823492-102341|

What I would like to do is, for a given cluster, compare each maxpath against every other maxpath in that cluster. I am interested in the clusters whose maxpaths do not overlap and whose chromosome annotations are also disjoint. For example, "5_G,6_C,7_A,8_A,9_T" and "3_G,4_C,5_A,6_A,7_T" do not overlap and have different annotations (a standalone sketch of this check is shown after the code below). One of the biggest pain points in my code is that I compare each maxpath against all other maxpaths in the cluster; some clusters have more than 1,000 maxpaths, so this becomes very slow. I try to cut down the number of pairwise comparisons by returning the cluster name as soon as two maxpaths that satisfy the condition are found. I also tried moving parts of the code into numpy arrays, but it is still very slow. My code looks like this. Does anyone have any ideas that could help?

import os

import pandas as pd
import numpy as np


def find_cluster(cluster, maxpaths):
    """
    returns clusters with disjoint maxpaths annotated
    to different genomes or annotated on the same
    genome min of 10kb apart
    """
    for idx, item in enumerate(maxpaths):
        unique = set(item.split(','))
        for j in range(idx + 1, len(maxpaths)):
            compare = set(maxpaths[j].split(','))
            # look up the chrom_pos annotation of both maxpaths
            # (subset_df is the per-cluster dataframe assigned in the loop below)
            chrom_string1 = subset_df[
                subset_df['maxpath'] == item]['chrom_pos'].values[0]
            chrom_string2 = subset_df[
                subset_df['maxpath'] == maxpaths[j]]['chrom_pos'].values[0]

            chrom1 = chrom_string1.split(':')[0]
            chrom_end1 = int(chrom_string1.split('-')[1])

            chrom2 = chrom_string2.split(':')[0]
            chrom_end2 = int(chrom_string2.split('-')[1])
            if len(unique.intersection(compare)) == 0:
                if chrom1 != chrom2:
                    return cluster
                elif chrom1 == chrom2:
                    if abs(chrom_end1 - chrom_end2) > 10000:
                        return cluster


# `input.df_subset` is provided by the surrounding workflow (not defined in this snippet)
file_number = input.df_subset.split('_')[-1].split('.')[0]
df = pd.read_csv(input.df_subset)
cluster_names = df['cluster_name'].unique()
disjoint_clusters = []
for i in cluster_names:
    subset_df = df[df['cluster_name'] == i]
    maxpaths_array = subset_df['maxpath'].as_matrix()  # .values in newer pandas
    cluster = find_cluster(i, maxpaths_array)
    disjoint_clusters.append(cluster)
disjoint_maxpaths = pd.DataFrame({"clusters_with_disjoint_maxpaths": disjoint_clusters})
disjoint_maxpaths.to_csv(os.path.abspath(
    'results/disjoint_maxpath_clusters_{}.csv'.format(file_number)), index=False)
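
For concreteness, here is a minimal standalone sketch (separate from the script above) of the check I have in mind, applied to the two example rows from the table:

# hypothetical standalone check on the two example rows shown above
mp1 = set("5_G,6_C,7_A,8_A,9_T".split(','))
mp2 = set("3_G,4_C,5_A,6_A,7_T".split(','))

chrom1, end1 = "chr11:611117-799999".split(':')[0], int("chr11:611117-799999".split('-')[1])
chrom2, end2 = "chr12:823492-102341".split(':')[0], int("chr12:823492-102341".split('-')[1])

# no shared position_base token, so the maxpaths do not overlap ...
assert mp1.isdisjoint(mp2)
# ... and they are annotated on different chromosomes, so the cluster is reported
assert chrom1 != chrom2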

Best answer

After combining several of the suggested ideas, this is what I came up with. I lost some readability, but what I needed was performance. The whole run takes about 2 hours to finish on my main machine.

import pandas as pd


def find_cluster(cluster_name, dataframe):
    """
    returns clusters with disjoint maxpaths annotated
    to different genomes or annotated on the same
    genome min of 10kb apart
    """
    cluster = cluster_name
    df = dataframe
    # pre-compute everything the pairwise loop needs, once per cluster,
    # instead of doing dataframe lookups inside the inner loop
    maxpaths = [set(item) for item in [_.split(',') for _ in df['maxpath'].as_matrix()]]
    chrom_string = df['chrom_pos'].as_matrix()
    chrom = [_.split(':')[0] for _ in df['chrom_pos'].as_matrix()]
    chrom_end = [int(_.split('-')[1]) for _ in df['chrom_pos'].as_matrix()]
    for idx, item in enumerate(maxpaths):
        for j in range(idx + 1, len(maxpaths)):
            if item.isdisjoint(maxpaths[j]):
                if chrom[idx] != chrom[j]:
                    return cluster
                elif chrom[idx] == chrom[j]:
                    if abs(chrom_end[idx] - chrom_end[j]) > 10000:
                        return cluster


def df_to_dict(dataframe):
    """
    Create dict with cluster as key and
    subset of dataframe at cluster as value
    """
    df = dataframe
    unique_clusters = df['cluster_name'].unique()
    sub_dfs = []
    for i in unique_clusters:
        sub_dfs.append(df[df['cluster_name'] == i])
    return dict(zip(unique_clusters, sub_dfs))


def get_clusters(dataframe):
    """
    return disjoint clusters
    """
    df = pd.read_csv(dataframe, index_col=False)
    df_dict = df_to_dict(df)
    disjoint = [find_cluster(k, v) for k, v in df_dict.items() if find_cluster(k, v)]
    return disjoint


# timing comparison, run in an IPython session; `subset_df` holds one cluster's
# rows and `old_find_cluster` is the original version from the question
def test_new():
    cluster = ["cluster_689"]
    disjoint_cluster = []
    for i in cluster:
        found = find_cluster(i, subset_df)
        disjoint_cluster.append(found)
    return disjoint_cluster


def test_old():
    cluster = ["cluster_689"]
    disjoint_cluster = []
    for i in cluster:
        maxpaths_array = subset_df['maxpath'].as_matrix()
        found = old_find_cluster(i, maxpaths_array)
        disjoint_cluster.append(found)
    return disjoint_cluster


new = %timeit for x in range(3): test_new()
old = %timeit for x in range(3): test_old()

This yields:

new find_cluster
247 µs ± 5.48 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

old find_cluster
138 ms ± 587 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

The speed-up when finding disjoint maxpaths is huge. I could not time the whole original script, because the last few large clusters never finished running even after more than 24 hours, and that was after splitting the dataframe into 100 smaller dataframes. But I am sure that, overall, the script became faster beyond just the find_cluster function. Thanks everyone for the help.
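
For reference, the chunking step itself isn't shown above; a minimal sketch of one way to split the input into 100 smaller per-cluster CSVs (file names here are hypothetical) looks like this:

import numpy as np
import pandas as pd

df = pd.read_csv('all_clusters.csv')  # assumed combined input file
clusters = df['cluster_name'].unique()
# write ~100 CSVs, each holding complete clusters, so no cluster is split across files
for i, chunk in enumerate(np.array_split(clusters, 100)):
    df[df['cluster_name'].isin(chunk)].to_csv(
        'df_subset_{}.csv'.format(i), index=False)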

Regarding python - speeding up a slow nested loop over rows in a dataframe, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48759553/
