
Python vs. MATLAB performance on an algorithm

Reposted · Author: 太空宇宙 · Updated: 2023-11-04 02:19:46

I have a performance question about two pieces of code. One is implemented in Python, the other in MATLAB. The code computes the sample entropy of a time series (which sounds complicated, but is basically a bunch of for loops).

I run both implementations on relatively large time series (~95k+ samples). The MATLAB implementation finishes the computation in roughly 45 seconds to 1 minute. The Python one essentially never finishes. I wrapped tqdm around the Python for loop, and the outer loop advances at only about 1.85 s/it, which gives an estimated completion time of over 50 hours (I let it run for 15+ minutes and the iteration rate was very consistent).

Sample input and runtimes:

MATLAB (~52 s):

a = rand(1, 95000)
sampenc(a, 4, 0.1 * std(a))

Python (5 minutes elapsed so far; estimated 49 hours):

import numpy as np
a = np.random.rand(1, 95000)[0]
sample_entropy(a, 4, 0.1 * np.std(a))

Python implementation:

# https://github.com/nikdon/pyEntropy
def sample_entropy(time_series, sample_length, tolerance=None):
"""Calculate and return Sample Entropy of the given time series.
Distance between two vectors defined as Euclidean distance and can
be changed in future releases
Args:
time_series: Vector or string of the sample data
sample_length: Number of sequential points of the time series
tolerance: Tolerance (default = 0.1...0.2 * std(time_series))
Returns:
Vector containing Sample Entropy (float)
References:
[1] http://en.wikipedia.org/wiki/Sample_Entropy
[2] http://physionet.incor.usp.br/physiotools/sampen/
[3] Madalena Costa, Ary Goldberger, CK Peng. Multiscale entropy analysis
of biological signals
"""
if tolerance is None:
tolerance = 0.1 * np.std(time_series)

n = len(time_series)
prev = np.zeros(n)
curr = np.zeros(n)
A = np.zeros((sample_length, 1)) # number of matches for m = [1,...,template_length - 1]
B = np.zeros((sample_length, 1)) # number of matches for m = [1,...,template_length]

for i in range(n - 1):
nj = n - i - 1
ts1 = time_series[i]
for jj in range(nj):
j = jj + i + 1
if abs(time_series[j] - ts1) < tolerance: # distance between two vectors
curr[jj] = prev[jj] + 1
temp_ts_length = min(sample_length, curr[jj])
for m in range(int(temp_ts_length)):
A[m] += 1
if j < n - 1:
B[m] += 1
else:
curr[jj] = 0
for j in range(nj):
prev[j] = curr[j]

N = n * (n - 1) / 2
B = np.vstack(([N], B[:sample_length - 1]))
similarity_ratio = A / B
se = - np.log(similarity_ratio)
se = np.reshape(se, -1)
return se

MATLAB implementation:

function [e,A,B]=sampenc(y,M,r);
%function [e,A,B]=sampenc(y,M,r);
%
%Input
%
%y  input data
%M  maximum template length
%r  matching tolerance
%
%Output
%
%e  sample entropy estimates for m=0,1,...,M-1
%A  number of matches for m=1,...,M
%B  number of matches for m=0,...,M-1 excluding last point

n=length(y);
lastrun=zeros(1,n);
run=zeros(1,n);
A=zeros(M,1);
B=zeros(M,1);
p=zeros(M,1);
e=zeros(M,1);

for i=1:(n-1)
   nj=n-i;
   y1=y(i);
   for jj=1:nj
      j=jj+i;
      if abs(y(j)-y1)<r
         run(jj)=lastrun(jj)+1;
         M1=min(M,run(jj));
         for m=1:M1
            A(m)=A(m)+1;
            if j<n
               B(m)=B(m)+1;
            end
         end
      else
         run(jj)=0;
      end
   end
   for j=1:nj
      lastrun(j)=run(j);
   end
end
N=n*(n-1)/2;
B=[N;B(1:(M-1))];
p=A./B;
e=-log(p);

I also tried several other Python implementations, and they all give the same slow results: vectorized-sample-entropy

sampen

sampen2.py

Wikipedia sample entropy implementation

I don't think it's a problem with my machine, since it runs relatively fast in MATLAB.

As far as I can tell, the two implementations are identical. I have no idea why the Python version is so slow. I would understand a difference of a few seconds, but not one this large. Let me know what you think, or any suggestions on how to speed up the Python version.
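One direction worth noting: if a JIT is not an option, the pairwise comparisons can be vectorized in plain NumPy. Below is a minimal sketch of the textbook SampEn(m, r) definition (Chebyshev distance, self-matches excluded); the name `sampen_numpy` and the O(n²) distance-matrix approach are my own illustration, not the question's code, and the n×n matrix would not fit in memory at 95k samples, so this only helps for moderate series lengths.

```python
import numpy as np

def sampen_numpy(x, m, r):
    """Textbook SampEn(m, r) = -ln(A/B) with Chebyshev distance.

    Sketch only: it materialises an O(n^2) pairwise-distance matrix,
    so it is limited to moderate series lengths (not 95k samples).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Embed the series into overlapping templates; keep n - m templates
    # for both lengths so the pair counts stay comparable.
    tm = np.array([x[i:i + m] for i in range(n - m)])
    tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])

    def pairs_within(t):
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        # Count each unordered pair once, excluding self-matches.
        return np.sum(np.triu(d < r, k=1))

    return -np.log(pairs_within(tm1) / pairs_within(tm))
```

Note this returns a single scalar for one template length m, whereas the question's `sample_entropy` returns a vector for m = 0..M-1, so the outputs are not directly interchangeable.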

By the way: I'm using Python 3.6.5 with numpy 1.14.5, and MATLAB R2018a.

Best answer

As mentioned in the comments, MATLAB uses a JIT compiler by default and Python does not. In Python you can use Numba to do the same thing.

Slightly modified code:

import numba as nb
import numpy as np
import time

@nb.jit(fastmath=True, error_model='numpy')
def sample_entropy(time_series, sample_length, tolerance=None):
    """Calculate and return Sample Entropy of the given time series.
    Distance between two vectors defined as Euclidean distance and can
    be changed in future releases
    Args:
        time_series: Vector or string of the sample data
        sample_length: Number of sequential points of the time series
        tolerance: Tolerance (default = 0.1...0.2 * std(time_series))
    Returns:
        Vector containing Sample Entropy (float)
    References:
        [1] http://en.wikipedia.org/wiki/Sample_Entropy
        [2] http://physionet.incor.usp.br/physiotools/sampen/
        [3] Madalena Costa, Ary Goldberger, CK Peng. Multiscale entropy analysis
            of biological signals
    """
    if tolerance is None:
        tolerance = 0.1 * np.std(time_series)

    n = len(time_series)
    prev = np.zeros(n)
    curr = np.zeros(n)
    A = np.zeros((sample_length))  # number of matches for m = [1,...,template_length - 1]
    B = np.zeros((sample_length))  # number of matches for m = [1,...,template_length]

    for i in range(n - 1):
        nj = n - i - 1
        ts1 = time_series[i]
        for jj in range(nj):
            j = jj + i + 1
            if abs(time_series[j] - ts1) < tolerance:  # distance between two vectors
                curr[jj] = prev[jj] + 1
                temp_ts_length = min(sample_length, curr[jj])
                for m in range(int(temp_ts_length)):
                    A[m] += 1
                    if j < n - 1:
                        B[m] += 1
            else:
                curr[jj] = 0
        for j in range(nj):
            prev[j] = curr[j]

    N = n * (n - 1) // 2

    B2 = np.empty(sample_length)
    B2[0] = N
    B2[1:] = B[:sample_length - 1]
    similarity_ratio = A / B2
    se = -np.log(similarity_ratio)
    return se

Timings

a = np.random.rand(1, 95000)[0] #Python
a = rand(1, 95000) #Matlab
Python 3.6, Numba 0.40dev, Matlab 2016b, Core i5-3210M

Python: 487s
Python+Numba: 12.2s
Matlab: 71.1s
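One caveat when reproducing numbers like these: the first call to a Numba-jitted function includes compilation time, so it should be timed on a warm call. A small timing helper (the name `benchmark` is my own, hypothetical illustration) might look like this:

```python
import time

def benchmark(fn, *args, repeats=3):
    """Call fn(*args) `repeats` times; return (last result, best wall time).

    Taking the best of several runs discards one-off costs such as
    Numba's compilation on the first call.
    """
    best = float("inf")
    result = None
    for _ in range(repeats):
        t0 = time.perf_counter()
        result = fn(*args)
        best = min(best, time.perf_counter() - t0)
    return result, best
```

For example, `benchmark(sample_entropy, a, 4, 0.1 * np.std(a))` would report the post-compilation runtime rather than the cold-start one.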

Regarding Python vs. MATLAB performance on an algorithm, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51903350/
