python-2.7 - How to get the standard deviation of each component from an sklearn GMM after fitting

Reposted · Author: 行者123 · Updated: 2023-12-04 13:04:19

After fitting, how do I get the standard deviation of each component in an sklearn GMM?

model.fit(dataSet)
model.means_ gives the mean of each component.
model.weights_ gives the mixing weight of each component.

Where can I find the standard deviation of each Gaussian component?

Thanks,

Best Answer

You can read the variances off the diagonal of each component's covariance matrix: the first diagonal element is sigma_x², the second is sigma_y². Taking their square roots gives the per-axis standard deviations.
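To make that concrete, here is a minimal sketch with a hand-built 2×2 covariance matrix in the same layout as `C.covariances_[i]` for `covariance_type='full'` (the values are illustrative, not taken from the question's data):

```python
import numpy as np

# Illustrative covariance matrix for one component (made-up values):
# variances 16 and 4 on the diagonal, no correlation off-diagonal
cov_i = np.array([[16.0, 0.0],
                  [0.0,  4.0]])

# The diagonal holds the per-axis variances; square roots give the stds
sigma_x, sigma_y = np.sqrt(np.diag(cov_i))
print(sigma_x, sigma_y)  # 4.0 2.0
```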

Basically, if you have N mixture components and C is your GaussianMixture instance:

cov = C.covariances_
d = cov.shape[1]  # data dimensionality (divide the trace by the dimension, not the component count)
[ np.sqrt( np.trace(cov[i]) / d ) for i in range(N) ]

will give you the average standard deviation of each component (the per-axis variances averaged together before taking the square root).
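Worked through by hand on the same illustrative matrix: the trace (16 + 4) divided by the dimension 2 is the mean variance, and its square root is the averaged sigma.

```python
import numpy as np

# One made-up component; shape (n_components, d, d) matches C.covariances_
cov = np.array([[[16.0, 0.0],
                 [0.0,  4.0]]])
d = cov.shape[1]  # data dimensionality

avg_sigma = [np.sqrt(np.trace(c) / d) for c in cov]
print(avg_sigma[0])  # sqrt((16 + 4) / 2) = sqrt(10) ~ 3.162
```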

I checked this with the simulation below, and the estimates seem to converge to within about 1% of the true values once you have hundreds or thousands of points:
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 24 12:37:38 2019

- - -

Simulate two point - gaussian normalized - distributions.
Use GMM cluster fit and look how covariance elements are related to sigma.


@author: Adrien MAU / ISMO & Abbelight

"""

import numpy as np
import matplotlib
import matplotlib.pyplot as plt

from sklearn import mixture

colorsList = ['c','r','g']
CustomCmap = matplotlib.colors.ListedColormap(colorsList)


sigma1=16
sigma2=4
npoints = 2000

x1 = np.random.normal( 50, sigma1, npoints )
y1 = np.random.normal( 70, sigma1, npoints )

x2 = np.random.normal( 20, sigma2, npoints )
y2 = np.random.normal( 50, sigma2, npoints )

x = np.hstack((x1,x2))
y = np.hstack((y1,y2))


C = mixture.GaussianMixture(n_components= 2 , covariance_type='full' )
subdata = np.transpose( np.vstack((x,y)) )
C.fit( subdata )

m = C.means_
w = C.weights_
cov = C.covariances_


print('\n')
print( 'estimated sigma 1 : ' , np.sqrt( np.trace( cov[0]) /2 ) )
print( 'estimated sigma 2 : ' , np.sqrt( np.trace( cov[1]) /2 ) )

plt.scatter(x1,y1)
plt.scatter(x2,y2)

plt.scatter( m[0,0], m[0,1])
plt.scatter( m[1,0], m[1,1])
plt.title('Initial data, and found Centroid')
plt.axis('equal')



gmm_sub_sigmas = [ np.sqrt( np.trace(cov[i])/2) for i in range(0,2) ]
xdiff= (np.transpose(np.repeat([x],2 ,axis=0)) - m[:,0]) / gmm_sub_sigmas
ydiff= (np.transpose(np.repeat([y],2 ,axis=0)) - m[:,1]) / gmm_sub_sigmas
# distances = np.hypot(xdiff,ydiff) #not the effective distance for gaussian distributions...
distances = 0.5*np.hypot(xdiff,ydiff) + np.log(gmm_sub_sigmas) # I believe this is a good estimate of closeness to a gaussian distribution
res2 = np.argmin( distances , axis=1)

plt.figure()
plt.scatter(x,y, c=res2, cmap=CustomCmap )
plt.axis('equal')
plt.title('GMM Associated data')
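As an aside, scikit-learn already exposes the posterior-based assignment that the hand-rolled distance heuristic above approximates: `C.predict` labels each point with its most probable component, and `C.predict_proba` returns the soft responsibilities. A self-contained sketch on data generated like the simulation's (the seed and sample counts here are arbitrary):

```python
import numpy as np
from sklearn import mixture

rng = np.random.default_rng(0)
# Two isotropic Gaussian blobs, as in the simulation above
x = np.concatenate([rng.normal(50, 16, 500), rng.normal(20, 4, 500)])
y = np.concatenate([rng.normal(70, 16, 500), rng.normal(50, 4, 500)])
subdata = np.column_stack((x, y))

C = mixture.GaussianMixture(n_components=2, covariance_type='full', random_state=0)
C.fit(subdata)

labels = C.predict(subdata)        # hard assignment: most probable component
probs = C.predict_proba(subdata)   # soft responsibilities, shape (n_points, 2)

# The hard labels are simply the argmax of the responsibilities
print((labels == probs.argmax(axis=1)).all())
```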

This question about getting the standard deviation of each component from an sklearn GMM after fitting comes from a similar question on Stack Overflow: https://stackoverflow.com/questions/40874263/
