
python - How to calculate global efficiency more efficiently?

Reposted · Author: 太空宇宙 · Updated: 2023-11-03 20:46:30

I wrote some code to calculate weighted global efficiency, but it takes far too long to run. I need to make the code more efficient, or find a more efficient way to compute weighted global efficiency for large datasets (up to 6000 points).

I have already edited the code a great deal and tried igraph (which has no function for weighted global efficiency), but nothing has made the computation fast enough. My current code is shown in full below.

import networkx as nx
import numpy as np
from networkx import algorithms
from networkx.algorithms import efficiency
from networkx.algorithms.efficiency import global_efficiency
from networkx.exception import NetworkXNoPath
import pandas as pd
from tqdm import tqdm
from itertools import permutations
import time
from multiprocessing import Pool, cpu_count

def efficiency_weighted(G, u, v, weight):
    try:
        eff = 1 / nx.shortest_path_length(G, u, v, weight='weight')
    except NetworkXNoPath:
        eff = 0
    return eff

def global_efficiency_weighted(G):
    n = len(G)
    denom = n * (n - 1)
    if denom != 0:
        g_eff = sum(efficiency_weighted(G, u, v, weight='weight') for u, v in permutations(G, 2)) / denom
    else:
        g_eff = 0
    return g_eff


data=pd.read_csv("lobe2 1.csv")
lol1 = data.values.tolist()
data=pd.read_csv("lobe2 2.csv")
lol2 = data.values.tolist()
data=pd.read_csv("lobe2 3.csv")
lol3 = data.values.tolist()
data=pd.read_csv("lobe2 4.csv")
lol4 = data.values.tolist()
data=pd.read_csv("lobe2 5.csv")
lol5 = data.values.tolist()
data=pd.read_csv("lobe2 6.csv")
lol6 = data.values.tolist()


combos=lol1+lol2+lol3+lol4 #lists to be used for deletion in the matrix


datasafe=pd.read_csv("b1.csv", index_col=0)

## uncomment this section for sample benchmarking
#size = 25
#subset = [c[0] for c in combos[0:size]]
#datasafe = datasafe.loc[subset, :]
#datasafe = datasafe[subset]
#combos = combos[0:size]

################################
########## Single core
################################

tic = time.time()

GE_list=[]
for combo in tqdm(combos):
    df_temp = datasafe.copy()
    df_temp.loc[combo, :] = 0
    df_temp[combo] = 0
    g=nx.from_pandas_adjacency(df_temp)
    ge=global_efficiency_weighted(g)
    # ge=global_efficiency(g) #uncomment to test non-weighted
    GE_list.append(ge)

toc = time.time()
single = toc-tic

print("results for single core")
print(GE_list)

################################
########## Multi core
################################

def multi_global(datasafe,combo):
    df_temp = datasafe.copy()
    df_temp.loc[combo, :] = 0
    df_temp[combo] = 0
    g=nx.from_pandas_adjacency(df_temp) # optimise by zeroing rows/columns of the adjacency matrix
    ge=global_efficiency_weighted(g)
    return ge

tic = time.time()

cpu = cpu_count()-1
pool = Pool(processes=cpu)

results = [pool.apply(multi_global, args=(datasafe, combo)) for combo in tqdm(combos)]

pool.close()
pool.join()
pool.terminate()

toc = time.time()
multi = toc-tic

################################
########## Multi core async
################################

def multi_global_as(datasafe,combo):
    df_temp = datasafe.copy()
    df_temp.loc[combo, :] = 0
    df_temp[combo] = 0
    g=nx.from_pandas_adjacency(df_temp) # optimise by zeroing rows/columns of the adjacency matrix
    ge=global_efficiency_weighted(g)
    pbar.update(1)
    return combo,ge

tic = time.time()

cpu = cpu_count()-1
pool = Pool(processes=cpu)
pbar = tqdm(total=int(len(combos)/cpu))

results = [pool.apply_async(multi_global_as, args=(datasafe, combo)) for combo in combos]
res=[result.get() for result in results]

pool.close()
pool.join()
pool.terminate()
pbar.close()

toc = time.time()
multi_as = toc-tic

print("results for # cpu: " + str(cpu))
print(results)
print("time for single core: "+str(single))
print("time for multi core: "+str(multi))
print("time for multi async core: "+str(multi_as))

The weighted global efficiency results are accurate, but the computation takes far too long.

Best Answer

Currently you compute a shortest path through the graph for each pair of nodes, which is an expensive calculation. While finding the shortest path for one pair, the algorithm does a lot of work that would also be useful for other pairs; unfortunately that information is discarded before you move on to the next pair.

Instead, use all_pairs_dijkstra, which finds the shortest paths between all pairs of nodes.
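For reference, nx.all_pairs_dijkstra returns an iterator of (node, (distance_dict, path_dict)) tuples, so it is usually wrapped in dict() before indexing. A minimal sketch on a made-up three-node graph (the node names and weights are illustrative only):

import networkx as nx

# toy weighted graph, purely to show the return structure of all_pairs_dijkstra
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 2.0)])

shortest_paths = dict(nx.all_pairs_dijkstra(G, weight="weight"))
print(shortest_paths["a"][0]["c"])  # weighted distance from a to c: 3.0
print(shortest_paths["a"][1]["c"])  # corresponding path: ['a', 'b', 'c']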

Specifically, in your call sum(efficiency_weighted(G, u, v, weight='weight') for u, v in permutations(G, 2)) you compute the shortest path from u to v separately for every pair of nodes in G. That is inefficient.

This should do the same job without calling efficiency_weighted:

def global_efficiency_weighted(G):
    n = len(G)
    denom = n * (n - 1)
    if denom != 0:
        # all_pairs_dijkstra returns a generator of (node, (distances, paths)); materialise it once
        shortest_paths = dict(nx.all_pairs_dijkstra(G, weight='weight'))
        g_eff = sum(1. / shortest_paths[u][0][v]
                    for u, v in permutations(G, 2)
                    if v in shortest_paths[u][0] and shortest_paths[u][0][v] != 0) / denom
    else:
        g_eff = 0
    return g_eff
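A minimal usage sketch of the rewritten function, assuming a small hand-made weighted adjacency matrix in place of the b1.csv data (the names and values below are illustrative only):

import pandas as pd
import networkx as nx

# hypothetical 3x3 weighted adjacency matrix standing in for the real data
adj = pd.DataFrame([[0, 1, 2],
                    [1, 0, 0],
                    [2, 0, 0]],
                   index=["a", "b", "c"], columns=["a", "b", "c"])

g = nx.from_pandas_adjacency(adj)
print(global_efficiency_weighted(g))  # about 0.611 for this toy matrix

Because all_pairs_dijkstra computes the distances from each source node once, the per-pair work inside the sum is reduced to a dictionary lookup.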

Regarding python - How to calculate global efficiency more efficiently?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56554132/
