
python - MLPClassifier with warm_start=True converges in one iteration


I am using scikit-learn's MLPClassifier with the following parameters:

from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(hidden_layer_sizes=(3, 2), solver='sgd', verbose=True,
                    learning_rate='constant', learning_rate_init=0.001,
                    random_state=rr,  # rr: a seed defined elsewhere in my code
                    warm_start=True, max_iter=400, n_iter_no_change=20)

I want to fit my classifier to different but very similar data sets and see how long the neural network takes to converge on each one.

I generated a very simple data set. It is a data set of 50,000 (x, y) points, and the colours denote how I have classified the points.
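For reference, a comparable data set can be produced along these lines (a minimal sketch only; the labelling rule below is a hypothetical stand-in for my actual classification):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50_000, 2))  # 50,000 (x, y) points
y = (X[:, 0] * X[:, 1] > 0).astype(int)       # hypothetical rule: colour by quadrant sign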

My classifier was initially trained on the first plot, and then for each subsequent plot I did this:

mlp.fit(new_data, new_data_labels)

where, for each plot, new_data = my old data + the new data set (the loop is sketched below).
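Concretely, the refitting loop looks roughly like this (a sketch, assuming datasets is a list of (X_chunk, y_chunk) arrays; the names are placeholders, not my actual code):

import numpy as np

X_all = np.empty((0, 2))
y_all = np.empty((0,), dtype=int)
for X_chunk, y_chunk in datasets:              # each new, similar batch of data
    X_all = np.concatenate([X_all, X_chunk])   # new_data = old data + new data set
    y_all = np.concatenate([y_all, y_chunk])
    mlp.fit(X_all, y_all)                      # warm_start=True reuses the previous weights
    print("Training set score: %f" % mlp.score(X_all, y_all))
    print("Training set loss: %f" % mlp.loss_)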

This ran fine at first. However, when I fit the classifier to my new, larger data set, it converges in a single iteration. No matter how I vary the data, the classifier seems to converge immediately, yet my loss graph looks terrible. I'm not sure where I'm going wrong.

My output looks like this:

Iteration 134, loss = 0.55557070
Iteration 135, loss = 0.55550839
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.663680
Training set loss: 0.555508
Iteration 136, loss = 0.56689723
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.643810
Training set loss: 0.566897
Iteration 137, loss = 0.57723775
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.
Training set score: 0.624447
Training set loss: 0.577238
Iteration 138, loss = 0.58684895
Training loss did not improve more than tol=0.000100 for 20 consecutive epochs. Stopping.

Best Answer

You can use mlp.loss_curve_ to get the loss curve of the model.
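A minimal sketch of inspecting it (assuming matplotlib is available; with warm_start=True the curve accumulates across successive fit calls, so it shows the whole training history):

import matplotlib.pyplot as plt

plt.plot(mlp.loss_curve_)    # loss_curve_ is set by the stochastic solvers ('sgd', 'adam')
plt.xlabel("iteration")
plt.ylabel("training loss")
plt.show()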

Regarding python - MLPClassifier with warm_start=True converges in one iteration, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55174730/
