python - Neural network backpropagation not fully training

I have a neural network that I've trained, shown below. It works, or at least appears to work, but the problem is the training. I'm trying to train it to behave as an OR gate, but it never seems to get there; the output tends to look like this:

prior to training:

[[0.50181624]
[0.50183743]
[0.50180414]
[0.50182533]]

post training:

[[0.69641759]
[0.754652 ]
[0.75447178]
[0.79431198]]

expected output:

[[0]
[1]
[1]
[1]]

I get this loss plot:

(loss plot: "logic gate training", loss vs. epoch)

The strange thing is that it does seem to be training, yet it never quite reaches the expected output. I know it will never literally hit 0 and 1, but I would still expect it to manage something much closer to the expected output.

I've had some trouble figuring out how to backpropagate the error, because I want this network to support an arbitrary number of hidden layers. So I store the local gradient in each layer, alongside the weights, and send the error back from the end.

The main functions I suspect are the culprits are NeuralNetwork.train and the two forward methods.

import sys
import math
import numpy as np
import matplotlib.pyplot as plt
from itertools import product


class NeuralNetwork:
    class __Layer:
        def __init__(self,args):
            self.__epsilon = 1e-6
            self.localGrad = 0
            self.__weights = np.random.randn(
                args["previousLayerHeight"],
                args["height"]
            )*0.01
            self.__biases = np.zeros(
                (args["biasHeight"],1)
            )

        def __str__(self):
            return str(self.__weights)

        def forward(self,X):
            a = np.dot(X, self.__weights) + self.__biases
            self.localGrad = np.dot(X.T,self.__sigmoidPrime(a))
            return self.__sigmoid(a)

        def adjustWeights(self, err):
            self.__weights -= (err * self.__epsilon)

        def __sigmoid(self, z):
            return 1/(1 + np.exp(-z))

        def __sigmoidPrime(self, a):
            return self.__sigmoid(a)*(1 - self.__sigmoid(a))

    def __init__(self,args):
        self.__inputDimensions = args["inputDimensions"]
        self.__outputDimensions = args["outputDimensions"]
        self.__hiddenDimensions = args["hiddenDimensions"]
        self.__layers = []
        self.__constructLayers()

    def __constructLayers(self):
        self.__layers.append(
            self.__Layer(
                {
                    "biasHeight": self.__inputDimensions[0],
                    "previousLayerHeight": self.__inputDimensions[1],
                    "height": self.__hiddenDimensions[0][0]
                    if len(self.__hiddenDimensions) > 0
                    else self.__outputDimensions[0]
                }
            )
        )

        for i in range(len(self.__hiddenDimensions)):
            self.__layers.append(
                self.__Layer(
                    {
                        "biasHeight": self.__hiddenDimensions[i + 1][0]
                        if i + 1 < len(self.__hiddenDimensions)
                        else self.__outputDimensions[0],
                        "previousLayerHeight": self.__hiddenDimensions[i][0],
                        "height": self.__hiddenDimensions[i + 1][0]
                        if i + 1 < len(self.__hiddenDimensions)
                        else self.__outputDimensions[0]
                    }
                )
            )

    def forward(self,X):
        out = self.__layers[0].forward(X)
        for i in range(len(self.__layers) - 1):
            out = self.__layers[i+1].forward(out)
        return out

    def train(self,X,Y,loss,epoch=5000000):
        for i in range(epoch):
            YHat = self.forward(X)
            delta = -(Y-YHat)
            loss.append(sum(Y-YHat))
            err = np.sum(np.dot(self.__layers[-1].localGrad,delta.T), axis=1)
            err.shape = (self.__hiddenDimensions[-1][0],1)
            self.__layers[-1].adjustWeights(err)
            i=0
            for l in reversed(self.__layers[:-1]):
                err = np.dot(l.localGrad, err)
                l.adjustWeights(err)
                i += 1

    def printLayers(self):
        print("Layers:\n")
        for l in self.__layers:
            print(l)
            print("\n")


def main(args):
    X = np.array([[x,y] for x,y in product([0,1],repeat=2)])
    Y = np.array([[0],[1],[1],[1]])
    nn = NeuralNetwork(
        {
            #(height,width)
            "inputDimensions": (4,2),
            "outputDimensions": (1,1),
            "hiddenDimensions":[
                (6,1)
            ]
        }
    )

    print("input:\n\n",X,"\n")
    print("expected output:\n\n",Y,"\n")
    nn.printLayers()
    print("prior to training:\n\n",nn.forward(X), "\n")
    loss = []
    nn.train(X,Y,loss)
    print("post training:\n\n",nn.forward(X), "\n")
    nn.printLayers()
    fig,ax = plt.subplots()

    x = np.array([x for x in range(5000000)])
    loss = np.array(loss)
    ax.plot(x,loss)
    ax.set(xlabel="epoch",ylabel="loss",title="logic gate training")

    plt.show()

if(__name__=="__main__"):
    main(sys.argv[1:])

Can someone point out what I'm doing wrong here? I strongly suspect it has to do with the way I'm handling the matrices, but at the same time I have no idea what's going on.

Thanks for taking the time to read my question, and for taking the time to respond (if applicable).

EDIT: There is actually quite a lot wrong with this, but I'm still confused about how to fix it. Although the loss plot makes it look like it is training, and it sort of is, the math I did above is wrong.

Look at the training function.

def train(self,X,Y,loss,epoch=5000000):
    for i in range(epoch):
        YHat = self.forward(X)
        delta = -(Y-YHat)
        loss.append(sum(Y-YHat))
        err = np.sum(np.dot(self.__layers[-1].localGrad,delta.T), axis=1)
        err.shape = (self.__hiddenDimensions[-1][0],1)
        self.__layers[-1].adjustWeights(err)
        i=0
        for l in reversed(self.__layers[:-1]):
            err = np.dot(l.localGrad, err)
            l.adjustWeights(err)
            i += 1

Note how I take delta = -(Y-Yhat) and then dot it with the "local gradient" of the last layer. The "local gradient" is the local W gradient.

def forward(self,X):
    a = np.dot(X, self.__weights) + self.__biases
    self.localGrad = np.dot(X.T,self.__sigmoidPrime(a))
    return self.__sigmoid(a)

I skipped a step in the chain rule. I really should be multiplying by W * sigprime(XW + b) first, since that is the local gradient of X, and only then by the local W gradient. I tried that, but I'm still running into problems. Here is the new forward method (note that the layer's __init__ needs to initialize the new variables, and that I changed the activation function to tanh):

def forward(self, X):
    a = np.dot(X, self.__weights) + self.__biases
    self.localPartialGrad = self.__tanhPrime(a)
    self.localWGrad = np.dot(X.T, self.localPartialGrad)
    self.localXGrad = np.dot(self.localPartialGrad,self.__weights.T)
    return self.__tanh(a)
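
For reference, here are the textbook gradients for a single dense layer a = XW + b with activation f, which is what this forward pass caches pieces of. This is a sketch of standard backprop under the squared-error loss used above (so the error arriving at the output is \delta = -(Y - \hat{Y})), not code taken from the question:

    \frac{\partial L}{\partial W} = X^{\top}\,\bigl(\delta \odot f'(XW + b)\bigr)
    \frac{\partial L}{\partial X} = \bigl(\delta \odot f'(XW + b)\bigr)\,W^{\top}

Here \odot is element-wise multiplication; localPartialGrad caches the f'(XW + b) factor, while localWGrad and localXGrad cache its products with X^{\top} and W^{\top}.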

and I updated the training method to look like this:

def train(self, X, Y, loss, epoch=5000):
    for e in range(epoch):
        Yhat = self.forward(X)
        err = -(Y-Yhat)
        loss.append(sum(err))
        print("loss:\n",sum(err))
        for l in self.__layers[::-1]:
            l.adjustWeights(err)
            if(l != self.__layers[0]):
                err = np.multiply(err,l.localPartialGrad)
                err = np.multiply(err,l.localXGrad)

The new plot I get is all over the place, and I have no idea what is going on. Here is the last piece of code I changed:

def adjustWeights(self, err):
    perr = np.multiply(err, self.localPartialGrad)
    werr = np.sum(np.dot(self.__weights,perr.T),axis=1)
    werr = werr * self.__epsilon
    werr.shape = (self.__weights.shape[0],1)
    self.__weights = self.__weights - werr

Best Answer

Your network is learning, as can be seen from the loss chart, so the backpropagation implementation is correct (congratulations!). The main problem with this particular architecture is the choice of activation function: sigmoid. I replaced sigmoid with tanh and it immediately works much better.

From this discussion on CV.SE:

There are two reasons for that choice (assuming you have normalized your data, and this is very important):

  • Having stronger gradients: since data is centered around 0, the derivatives are higher. To see this, calculate the derivative of the tanh function and notice that input values are in the range [0,1]. The range of the tanh function is [-1,1] and that of the sigmoid function is [0,1]

  • Avoiding bias in the gradients. This is explained very well in the paper, and it is worth reading it to understand these issues.

While I'm sure a sigmoid-based network can be trained as well, it looks much more sensitive to the input values (note that they are not zero-centered), because the activation itself is not zero-centered. tanh is better than sigmoid in every respect here, so the simpler approach is just to use that activation function.
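
A quick way to see the "stronger gradients" point numerically (a small illustration, not part of the original answer): near zero, where normalized inputs tend to live, the sigmoid derivative peaks at 0.25 while the tanh derivative peaks at 1, so the error signal shrinks far less per layer with tanh.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    z = np.array([-1.0, 0.0, 1.0])
    sigmoid_grad = sigmoid(z) * (1 - sigmoid(z))  # ~[0.197, 0.25, 0.197], peaks at 0.25
    tanh_grad = 1 - np.tanh(z) ** 2               # ~[0.42, 1.0, 0.42], peaks at 1.0
    print(sigmoid_grad, tanh_grad)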

The key change is:

def __tanh(self, z):
    return np.tanh(z)

def __tanhPrime(self, a):
    return 1 - self.__tanh(a) ** 2

... instead of __sigmoid and __sigmoidPrime.
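
As a minimal sketch of where that swap lands in the question's original __Layer.forward (assuming nothing else changes; the answer's full working version is in the gist linked below):

    def forward(self, X):
        a = np.dot(X, self.__weights) + self.__biases
        self.localGrad = np.dot(X.T, self.__tanhPrime(a))  # tanh' in place of sigmoid'
        return self.__tanh(a)                               # tanh in place of sigmoid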

I also tweaked the hyperparameters a little, so that the network now learns in 100k epochs instead of 5M (see the note after the outputs below for the corresponding change to main()):

prior to training:

[[ 0. ]
[-0.00056925]
[-0.00044885]
[-0.00101794]]

post training:

[[0. ]
[0.97335842]
[0.97340917]
[0.98332273]]
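
If you reproduce the shorter run with the question's main(), note that the plot's x axis is hard-coded to the epoch count, so the two have to change together. A minimal sketch (the exact hyperparameter tweaks made in the answer are in the gist below):

    epochs = 100000                           # down from 5000000
    nn.train(X, Y, loss, epoch=epochs)
    x = np.array([x for x in range(epochs)])  # keep the loss plot's x axis in sync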

(loss plot for the tanh version)

The full code is in this gist.

Regarding "python - Neural network backpropagation not fully training", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48970179/
