
python - Fuzzy clustering in Python with the Iris dataset


I am working on fuzzy c-means clustering of the iris dataset, but I cannot visualize the result because of an error. Using this tutorial I wrote the following for iris, but it raises "AttributeError: shape". Here is my code:

from sklearn import datasets
from sklearn.cluster import KMeans
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.metrics as sm
import skfuzzy as fuzz

iris = datasets.load_iris()

x = pd.DataFrame(iris.data, columns=['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width'])
y = pd.DataFrame(iris.target, columns=['Target'])
plt.figure(figsize=(6, 3))

model =fuzz.cluster.cmeans(iris,3,2,error=0.005,maxiter=1000,init=None,seed=None)
model.fit(x)
plt.show()

I assumed that passing the parameters when creating the model variable would be enough, but it raises the error above. Could you tell me where I went wrong and how to fix it? Any help is greatly appreciated!
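The "AttributeError: shape" most likely comes from passing the whole iris Bunch object to fuzz.cluster.cmeans, which expects a plain NumPy array of shape (features, samples); the function also simply returns a tuple of results, so there is no estimator to call .fit() on. A minimal corrected sketch of the snippet above (assuming the four raw measurements are clustered directly, without the preprocessing used in the answer below):

import numpy as np
import skfuzzy as fuzz
from sklearn import datasets

iris = datasets.load_iris()
data = iris.data.T          # cmeans wants shape (S, N): 4 features x 150 samples

# cmeans returns its results directly as a tuple; there is no .fit() step
cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
    data, c=3, m=2, error=0.005, maxiter=1000, init=None, seed=None)

hard_labels = np.argmax(u, axis=0)   # crisp label per sample from the membership matrix u
print(fpc, hard_labels[:10])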

Best Answer

I started by preprocessing the data, which gave me a clean plot. Following the tutorial, I applied truncated SVD to reduce the data to two dimensions before plotting, since the tutorial works with two-dimensional (x, y) data. You do not need to call model.fit(); I could not find such a method in the documentation. Here is the code:

import numpy as np, pandas as pd, os
import matplotlib
import matplotlib.pyplot as plt
import itertools
from sklearn.metrics import confusion_matrix
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
import skfuzzy as fuzz
from sklearn import datasets
################################################################################
iris = datasets.load_iris()

x = pd.DataFrame(iris.data, columns=['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width'])
y = pd.DataFrame(iris.target, columns=['Target'])
scaler = StandardScaler()
X_std = scaler.fit_transform(x)
lsa = TruncatedSVD(2, algorithm='arpack')
dtm_lsa = lsa.fit_transform(X_std)
dtm_lsa = Normalizer(copy=False).fit_transform(dtm_lsa)
a = pd.DataFrame(dtm_lsa, columns=["component_1", "component_2"])
a['targets'] = y
fig1, axes1 = plt.subplots(3, 3, figsize=(8, 8))
alldata = np.vstack((a['component_1'], a['component_2']))
fpcs = []

colors = ['b', 'orange', 'g', 'r', 'c', 'm', 'y', 'k', 'Brown', 'ForestGreen']

for ncenters, ax in enumerate(axes1.reshape(-1), 2):
    cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
        alldata, ncenters, 2, error=0.005, maxiter=1000, init=None)

    # Store fpc values for later plots
    fpcs.append(fpc)

    # Plot assigned clusters, for each data point in training set
    cluster_membership = np.argmax(u, axis=0)
    for j in range(ncenters):
        ax.plot(a['component_1'][cluster_membership == j],
                a['component_2'][cluster_membership == j], '.', color=colors[j])

    # Mark the center of each fuzzy cluster
    for pt in cntr:
        ax.plot(pt[0], pt[1], 'rs')

    ax.set_title('Centers = {0}; FPC = {1:.2f}'.format(ncenters, fpc))
    ax.axis('off')

fig1.tight_layout()
fig1.savefig('iris_dataset.png')

[Output figure: Iris Data Set]
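As a small follow-up (not part of the original answer): the fpcs list collected in the loop can be used to pick the number of clusters, since the run with the highest fuzzy partition coefficient fits the data best. A short sketch continuing from the script above, reusing its fpcs, np, and plt:

# Plot FPC against the number of cluster centers (2 through 10 in the loop above)
fig2, ax2 = plt.subplots()
ax2.plot(np.arange(2, 11), fpcs)
ax2.set_xlabel('Number of centers')
ax2.set_ylabel('Fuzzy partition coefficient (FPC)')
fig2.savefig('iris_fpc.png')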

Regarding python - fuzzy clustering in Python with the Iris dataset, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42708253/
