
neural-network - Training an ANN with Encog using multiple training methods

Reposted · Author: 行者123 · Updated: 2023-12-04 21:40:23

I would like to know whether training a feed-forward neural network with a genetic algorithm, particle swarm optimization, and simulated annealing before training with resilient propagation improves the result.

Here is the code I am using:

CalculateScore score = new TrainingSetScore(trainingSet);
StopTrainingStrategy stop = new StopTrainingStrategy();
StopTrainingStrategy stopGA = new StopTrainingStrategy();
StopTrainingStrategy stopSIM = new StopTrainingStrategy();
StopTrainingStrategy stopPSO = new StopTrainingStrategy();

Randomizer randomizer = new NguyenWidrowRandomizer();
//Backpropagation train = new Backpropagation((BasicNetwork) network, trainingSet, 0.2, 0.1);
//LevenbergMarquardtTraining train = new LevenbergMarquardtTraining((BasicNetwork) network, trainingSet);
int population = 500;
MLTrain trainGA = new MLMethodGeneticAlgorithm(new MethodFactory() {
    @Override
    public MLMethod factor() {
        final BasicNetwork result = createNetwork();
        ((MLResettable) result).reset();
        return result;
    }
}, score, population);

Date dStart = new Date();

int epochGA = 0;
trainGA.addStrategy(stopGA);
do {
    trainGA.iteration();
    if (writeOnStdOut)
        System.out.println("Epoch Genetic #" + epochGA + " Error:" + trainGA.getError());
    epochGA++;
    previousError = trainGA.getError();
    Date dtemp = new Date();
    totsecs = ((double) (dtemp.getTime() - dStart.getTime()) / 1000);
} while (previousError > maximumAcceptedErrorTreshold
        && epochGA < (maxIterations / 5)
        && !stopGA.shouldStop()
        && totsecs < (secs / 3));

NeuralPSO trainPSO = new NeuralPSO((BasicNetwork) network, randomizer, score, 100);

int epochPSO = 0;
trainPSO.addStrategy(stopPSO);
dStart = new Date();
do {
    trainPSO.iteration();
    if (writeOnStdOut)
        System.out.println("Epoch Particle Swarm #" + epochPSO + " Error:" + trainPSO.getError());
    epochPSO++;
    previousError = trainPSO.getError();
    Date dtemp = new Date();
    totsecs = ((double) (dtemp.getTime() - dStart.getTime()) / 1000);
} while (previousError > maximumAcceptedErrorTreshold
        && epochPSO < (maxIterations / 5)
        && !stopPSO.shouldStop()
        && totsecs < (secs / 3));

MLTrain trainSIM = new NeuralSimulatedAnnealing((MLEncodable) network, score, startTemperature, stopTemperature, cycles);

int epochSA = 0;
trainSIM.addStrategy(stopSIM);
dStart = new Date();
do {
    trainSIM.iteration();
    if (writeOnStdOut)
        System.out.println("Epoch Simulated Annealing #" + epochSA + " Error:" + trainSIM.getError());
    epochSA++;
    previousError = trainSIM.getError();
    Date dtemp = new Date();
    totsecs = ((double) (dtemp.getTime() - dStart.getTime()) / 1000);
} while (previousError > maximumAcceptedErrorTreshold
        && epochSA < (maxIterations / 5)
        && !stopSIM.shouldStop()
        && totsecs < (secs / 3));

previousError = 0;
BasicTraining train = getTraining(method, (BasicNetwork) network, trainingSet);

//train.addStrategy(new Greedy());
//trainAlt.addStrategy(new Greedy());
HybridStrategy strAnneal = new HybridStrategy(trainSIM);

train.addStrategy(strAnneal);
//train.addStrategy(strGenetic);
//train.addStrategy(strPSO);

train.addStrategy(stop);
//Backpropagation train = new Backpropagation((ContainsFlat) network, trainingSet, 0.7, 0.3);
dStart = new Date();

int epoch = 1;

do {
    train.iteration();
    if (writeOnStdOut)
        System.out.println("Epoch #" + epoch + " Error:" + train.getError());
    epoch++;
    if (Math.abs(train.getError() - previousError) < 0.0000001)
        iterationWithoutImprovement++;
    else
        iterationWithoutImprovement = 0;
    previousError = train.getError();

    Date dtemp = new Date();
    totsecs = ((double) (dtemp.getTime() - dStart.getTime()) / 1000);
} while (previousError > maximumAcceptedErrorTreshold
        && epoch < maxIterations
        && !stop.shouldStop()
        && totsecs < secs); //&& iterationWithoutImprovement < maxiter);

As you can see, it is a sequence of training algorithms that should improve the overall training.

Please let me know whether this makes sense and whether the code is correct.
It seems to work, but I want to be sure, because sometimes I see the progress made by the GA get reset by the PSO.

Thanks

Best answer

It seems logical, but it won't work.

With RPROP's default parameters, this sequence will likely not work. The reason is that after your earlier training, the weights of the neural network will be near a local optimum. Because they are so close to a local optimum, only small changes to the weights will move them closer to the optimum (i.e., lower the error rate). By default, RPROP uses an initial update value of 0.1 across the weight matrix. That is a huge value for a network this close to an optimum. You are "giving a bull a china shop" at this point. The first iteration will move the network far from the optimum and will essentially begin a new global search.

Lowering the initial update value should help. I am not sure by how much. You might want to look at the average RPROP weight-update values produced by a training run on your data to get an idea. Or try setting the value very small and working your way back up.
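To make the step-size argument concrete, here is a minimal, self-contained sketch of the sign-based RPROP update rule (a simplified illustration, not Encog's actual implementation). It shows that the very first RPROP step moves a weight by the full initial update value, regardless of how small the gradient is, which is exactly what throws away a pre-trained near-optimal weight:

```java
public class RpropStepDemo {
    // One simplified RPROP update step for a single weight.
    // RPROP ignores the gradient's magnitude and steps by 'update',
    // using only the gradient's sign to pick the direction.
    static double step(double weight, double gradient, double update) {
        return weight - Math.signum(gradient) * update;
    }

    public static void main(String[] args) {
        double nearOptimum = 0.503;   // weight already close to a hypothetical optimum at 0.5
        double tinyGradient = 1e-4;   // gradient is tiny because we are almost there
        double initialUpdate = 0.1;   // RPROP's usual default initial update value

        // The first step overshoots: the weight drops by the full 0.1,
        // far past the optimum, even though the gradient asked for a nudge.
        System.out.println(step(nearOptimum, tinyGradient, initialUpdate));

        // With a much smaller initial update, the first step stays local.
        System.out.println(step(nearOptimum, tinyGradient, 0.001));
    }
}
```

In Encog 3 for Java, `ResilientPropagation` has a constructor that accepts the initial update value and maximum step alongside the network and training set; passing a smaller initial update there is one way to apply this advice, but check the API of the version you are using.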

Regarding neural-network - Training an ANN with Encog using multiple training methods, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28030488/
