
python - Speeding up a Gaussian EM algorithm


My Python code is below... it takes a very long time to run. There must be some numpy tricks I can use? The image I'm analyzing is small and grayscale...

def gaussian_probability(x, mean, standard_dev):
    termA = 1.0 / (standard_dev * np.sqrt(2.0 * np.pi))
    termB = np.exp(-((x - mean)**2.0) / (2.0 * (standard_dev**2.0)))
    g = (termA * termB)
    return g

def sum_of_gaussians(x):
    return sum([self.mixing_coefficients[i] *
                gaussian_probability(x, self.means[i], self.variances[i]**0.5)
                for i in range(self.num_components)])

def expectation():
    dim = self.image_matrix.shape
    rows, cols = dim[0], dim[1]
    responsibilities = []
    for i in range(self.num_components):
        gamma_k = np.zeros([rows, cols])
        for j in range(rows):
            for k in range(cols):
                p = (self.mixing_coefficients[i] *
                     gaussian_probability(self.image_matrix[j,k],
                                          self.means[i],
                                          self.variances[i]**0.5))
                gamma_k[j,k] = p / sum_of_gaussians(self.image_matrix[j,k])
        responsibilities.append(gamma_k)
    return responsibilities

I've only included the expectation step, because while the maximization step loops over every element of the responsibility matrices, it seems to run relatively fast (so perhaps the bottleneck is all of the gaussian_probability calls?).

Best Answer

You can speed up your computation significantly by doing two things:

  • Don't compute the normalization inside every loop! As currently written, for an NxN image with M components you re-evaluate the sum_of_gaussians normalization for every pixel and every component, so the E-step costs O[N^2 M^2] gaussian evaluations. Instead, compute all of the weighted terms once and then divide by their sum, which is O[N^2 M] (a minimal sketch of this change alone follows the list).

  • Use numpy vectorization rather than explicit loops. Given the way you've set up your code, this can be done very straightforwardly.
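
To see how much the first point alone buys, here is a minimal sketch (not from the original post; it assumes the same class attributes as in your code, and the name expectation_precomputed_norm is purely illustrative) that keeps the explicit loops but evaluates the per-pixel normalization only once:

def expectation_precomputed_norm(self):
    # Point 1 only: the denominator is computed once per pixel,
    # not once per pixel per component.
    rows, cols = self.image_matrix.shape
    norm = np.zeros([rows, cols])
    for j in range(rows):
        for k in range(cols):
            norm[j, k] = self.sum_of_gaussians(self.image_matrix[j, k])
    responsibilities = []
    for i in range(self.num_components):
        gamma_k = np.zeros([rows, cols])
        for j in range(rows):
            for k in range(cols):
                p = (self.mixing_coefficients[i] *
                     gaussian_probability(self.image_matrix[j, k],
                                          self.means[i],
                                          self.variances[i] ** 0.5))
                gamma_k[j, k] = p / norm[j, k]
        responsibilities.append(gamma_k)
    return responsibilities

This already drops the cost from O[N^2 M^2] to O[N^2 M]; the vectorized version below then removes the Python-level loops entirely.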

Essentially, your expectation function should look something like this:

def expectation(self):
    responsibilities = (self.mixing_coefficients[:, None, None] *
                        gaussian_probability(self.image_matrix,
                                             self.means[:, None, None],
                                             self.variances[:, None, None] ** 0.5))
    return responsibilities / responsibilities.sum(0)
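
The [:, None, None] indexing is what makes the broadcasting work: it turns each length-M parameter array into shape (M, 1, 1), so NumPy pairs every component with the full (rows, cols) image and produces one (M, rows, cols) stack of responsibilities in a single call. A quick shape check (the array sizes here just mirror the improvised example below):

import numpy as np

means = 10 * np.random.rand(5)     # shape (5,)
image = np.random.rand(20, 20)     # shape (20, 20)

# (5, 1, 1) broadcast against (20, 20) yields one (20, 20) slab per component
diff = image - means[:, None, None]
print(means[:, None, None].shape)  # (5, 1, 1)
print(diff.shape)                  # (5, 20, 20)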

You didn't provide a complete example, so I had to improvise a bit in order to check and benchmark this, but here's a quick one:

import numpy as np

def gaussian_probability(x, mean, standard_dev):
    termA = 1.0 / (standard_dev * np.sqrt(2.0 * np.pi))
    termB = np.exp(-((x - mean)**2.0) / (2.0 * (standard_dev**2.0)))
    return termA * termB

class EM(object):
    def __init__(self, N=5):
        self.image_matrix = np.random.rand(20, 20)
        self.num_components = N
        self.mixing_coefficients = 1 + np.random.rand(N)
        self.means = 10 * np.random.rand(N)
        self.variances = np.ones(N)

    def sum_of_gaussians(self, x):
        return sum([self.mixing_coefficients[i] *
                    gaussian_probability(x, self.means[i], self.variances[i]**0.5)
                    for i in range(self.num_components)])

    def expectation(self):
        dim = self.image_matrix.shape
        rows, cols = dim[0], dim[1]
        responsibilities = []
        for i in range(self.num_components):
            gamma_k = np.zeros([rows, cols])
            for j in range(rows):
                for k in range(cols):
                    p = (self.mixing_coefficients[i] *
                         gaussian_probability(self.image_matrix[j,k],
                                              self.means[i],
                                              self.variances[i]**0.5))
                    gamma_k[j,k] = p / self.sum_of_gaussians(self.image_matrix[j,k])
            responsibilities.append(gamma_k)
        return responsibilities

    def expectation_fast(self):
        responsibilities = (self.mixing_coefficients[:, None, None] *
                            gaussian_probability(self.image_matrix,
                                                 self.means[:, None, None],
                                                 self.variances[:, None, None] ** 0.5))
        return responsibilities / responsibilities.sum(0)

Now we can instantiate the object and compare the two implementations of the expectation step:

em = EM(5)
np.allclose(em.expectation(),
            em.expectation_fast())
# True

Looking at the timings, we get roughly a 1000x speedup for a 20x20 image with 5 components:

%timeit em.expectation()
10 loops, best of 3: 65.9 ms per loop

%timeit em.expectation_fast()
10000 loops, best of 3: 74.5 µs per loop
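
You mention that the maximization step already runs reasonably fast, but it can be vectorized in the same spirit. Here is a hedged sketch (not part of the original answer; it assumes the standard Gaussian-mixture update formulas and the attribute names used above, and maximization_fast is just an illustrative name) of an M-step that consumes the (num_components, rows, cols) array returned by expectation_fast:

def maximization_fast(self, responsibilities):
    # Standard GMM updates, vectorized over all pixels at once.
    x = self.image_matrix                     # shape (rows, cols)
    Nk = responsibilities.sum(axis=(1, 2))    # effective count per component
    self.means = (responsibilities * x).sum(axis=(1, 2)) / Nk
    diff_sq = (x - self.means[:, None, None]) ** 2
    self.variances = (responsibilities * diff_sq).sum(axis=(1, 2)) / Nk
    self.mixing_coefficients = Nk / x.size    # average responsibility per component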

The speedup from vectorizing will only grow as the image size and the number of components increase. Good luck!

Regarding python - Speeding up a Gaussian EM algorithm, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/33707870/
