
python - Issues with sklearn.mixture.GMM (Gaussian Mixture Model)

Reposted · Author: 行者123 · Updated: 2023-11-28 17:28:24

I'm new to scikit-learn and GMMs in general... I have some questions about the quality of the fit of a Gaussian Mixture Model in Python (scikit-learn).

I have an array of data, which you can find at DATA HERE, that I want to fit with a GMM with n = 2 components.

As a benchmark, I superimpose a Normal fit.

Errors/odd things:

  1. Setting n = 1 components, I cannot recover with GMM(1) the benchmark Normal fit
  2. Setting n = 2 components, the Normal fit outperforms the GMM(2) fit
  3. GMM(n) always seems to provide the same fit...

Here is what I get: what am I doing wrong here? (The picture displays the fit with GMM(2).) Thanks in advance for your help.

[Image: histogram of the data with the Normal fit and the GMM(2) fit superimposed]

Code below (to run it, save the data in the same folder):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn.mixture import GMM

# Load the data: "epsi" (array of floats)
file_xlsx = './db_X.xlsx'
data = pd.read_excel(file_xlsx)
epsi = data["epsi"].values
t_ = len(epsi)

# Normal fit (for benchmark)
epsi_grid = np.arange(epsi.min(), epsi.max() + 0.001, 0.001)

mu = np.mean(epsi)
sigma2 = np.var(epsi)

normal = norm.pdf(epsi_grid, mu, np.sqrt(sigma2))

# TENTATIVE - Gaussian mixture fit
gmm = GMM(n_components=2)  # fit quality doesn't improve if I set covariance_type='full'
gmm.fit(epsi.reshape(t_, 1))

# GMM.score returns the per-sample log-likelihood
gauss_mixt = np.exp(gmm.score(epsi_grid.reshape(len(epsi_grid), 1)))

# Same result if I apply the definition of the pdf of a Gaussian mixture:
# pdf_mixture = w_1 * N(mu_1, sigma_1) + w_2 * N(mu_2, sigma_2)
# as suggested in:
# http://stackoverflow.com/questions/24878729/how-to-construct-and-plot-uni-variate-gaussian-mixture-using-its-parameters-in-p
#
# gauss_mixt = np.array([p * norm.pdf(epsi_grid, m, sd)
#                        for m, sd, p in zip(gmm.means_.flatten(),
#                                            np.sqrt(gmm.covars_.flatten()),
#                                            gmm.weights_)])
# gauss_mixt = gauss_mixt.sum(axis=0)


# Create a figure showing the comparison between the estimated distributions

# set up the figure object
fig = plt.figure(figsize=(10, 8))
fig.set_facecolor('white')
ax = plt.subplot(111)

# colors
red = [0.9, 0.3, 0.0]
grey = [0.9, 0.9, 0.9]
green = [0.2, 0.6, 0.3]

# x-axis limits (0.25% / 99.75% quantiles)
q_inf = np.percentile(epsi, 0.25)
q_sup = np.percentile(epsi, 99.75)
ax.set_xlim([q_inf, q_sup])

# empirical pdf of the data
nb = int(10 * np.log(t_))
ax.hist(epsi, bins=nb, normed=True, color=grey, edgecolor='k', label="Empirical")

# Normal fit
ax.plot(epsi_grid, normal, color=green, lw=1.0, label="Normal fit")

# Gaussian mixture fit
ax.plot(epsi_grid, gauss_mixt, color=red, lw=1.0, label="GMM(2)")

ax.set_title("Issue: Normal fit out-performs the GMM fit?", size=14)
ax.legend(loc='upper left')

plt.tight_layout()
plt.show()
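The commented-out mixture-pdf definition above (pdf_mixture = w_1 * N(mu_1, sigma_1) + w_2 * N(mu_2, sigma_2)) can be sanity-checked in isolation with scipy alone. All parameter values below are made up for illustration, not fitted to the actual data:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical mixture parameters (illustration only, not fitted values)
weights = np.array([0.7, 0.3])
means = np.array([0.0, 0.0])
sds = np.array([0.01, 0.05])

# Evaluate the weighted sum of component pdfs on a grid
grid = np.linspace(-0.2, 0.2, 401)
pdf = sum(w * norm.pdf(grid, m, s) for w, m, s in zip(weights, means, sds))

# A valid mixture pdf integrates to 1 (simple Riemann-sum check)
area = np.sum(pdf) * (grid[1] - grid[0])
```

If the mixture computed from a fitted model's `means_`, `covars_`, and `weights_` does not integrate to roughly 1 on a grid covering the data, something is wrong with how the parameters are being combined.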

Best Answer

The problem is the lower bound on the individual components' variance, min_covar, which is 1e-3 by default and is intended to prevent overfitting.

Lowering that bound solved the problem (see picture):

gmm = GMM(n_components = 2, min_covar = 1e-12)
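Note for newer scikit-learn versions: `GMM` was deprecated in 0.18 and removed in 0.20. Its replacement, `GaussianMixture`, uses `reg_covar` (a regularization term added to the diagonal of each covariance, default 1e-6) instead of `min_covar`, and `score_samples` instead of `score` for per-sample log-likelihoods. A minimal sketch of the same fix with the modern API, using synthetic data as a stand-in for the original `epsi` array:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for epsi: a narrow and a wide Gaussian component
rng = np.random.default_rng(0)
epsi = np.concatenate([rng.normal(0.0, 0.01, 500),
                       rng.normal(0.0, 0.05, 500)])

# reg_covar plays the role of min_covar; its default (1e-6) is already
# far below GMM's old 1e-3 floor, so small-variance components can be
# recovered, and it can be lowered further if needed
gmm = GaussianMixture(n_components=2, reg_covar=1e-12, random_state=0)
gmm.fit(epsi.reshape(-1, 1))

# score_samples returns the per-sample log-density
grid = np.linspace(epsi.min(), epsi.max(), 200)
pdf = np.exp(gmm.score_samples(grid.reshape(-1, 1)))
```

With the original data, replacing `GMM(n_components=2, min_covar=1e-12)` with the `GaussianMixture` call above and `gmm.score(...)` with `gmm.score_samples(...)` should reproduce the corrected fit.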

[Image: the GMM(2) fit after lowering min_covar, now matching the data]

Regarding python - Issues with sklearn.mixture.GMM (Gaussian Mixture Model), see the original question on Stack Overflow: https://stackoverflow.com/questions/36628291/
