
python - Computing the gradients of a NN in pure Python


import numpy

# Data and parameters

X = numpy.array([[-1.086, 0.997, 0.283, -1.506]])
T = numpy.array([[-0.579]])
W1 = numpy.array([[-0.339, -0.047, 0.746, -0.319, -0.222, -0.217],
                  [ 1.103, 1.093, 0.502, 0.193, 0.369, 0.745],
                  [-0.468, 0.588, -0.627, -0.319, 0.454, -0.714],
                  [-0.070, -0.431, -0.128, -1.399, -0.886, -0.350]])
W2 = numpy.array([[ 0.379, -0.071, 0.001, 0.281, -0.359, 0.116],
                  [-0.329, -0.705, -0.160, 0.234, 0.138, -0.005],
                  [ 0.977, 0.169, 0.400, 0.914, -0.528, -0.424],
                  [ 0.712, -0.326, 0.012, 0.437, 0.364, 0.716],
                  [ 0.611, 0.437, -0.315, 0.325, 0.128, -0.541],
                  [ 0.579, 0.330, 0.019, -0.095, -0.489, 0.081]])
W3 = numpy.array([[ 0.191, -0.339, 0.474, -0.448, -0.867, 0.424],
                  [-0.165, -0.051, -0.342, -0.656, 0.512, -0.281],
                  [ 0.678, 0.330, -0.128, -0.443, -0.299, -0.495],
                  [ 0.852, 0.067, 0.470, -0.517, 0.074, 0.481],
                  [-0.137, 0.421, -0.443, -0.557, 0.155, -0.155],
                  [ 0.262, -0.807, 0.291, 1.061, -0.010, 0.014]])
W4 = numpy.array([[ 0.073],
                  [-0.760],
                  [ 0.174],
                  [-0.655],
                  [-0.175],
                  [ 0.507]])
B1 = numpy.array([-0.760, 0.174, -0.655, -0.175, 0.507, -0.300])
B2 = numpy.array([ 0.205, 0.413, 0.114, -0.560, -0.136, 0.800])
B3 = numpy.array([-0.827, -0.113, -0.225, 0.049, 0.305, 0.657])
B4 = numpy.array([-0.270])

# Forward pass

Z1 = X.dot(W1) + B1
A1 = numpy.maximum(0, Z1)   # ReLU
Z2 = A1.dot(W2) + B2
A2 = numpy.maximum(0, Z2)   # ReLU
Z3 = A2.dot(W3) + B3
A3 = numpy.maximum(0, Z3)   # ReLU
Y = A3.dot(W4) + B4

# Error

err = ((Y-T)**2).mean()
Given this example, I would like to implement the backward pass and obtain the gradients with respect to the weight and bias parameters. Apparently, the gradients of the last layer are the following:
DY = 2*(Y-T)
DB4 = DY.mean(axis=0)
DW4 = A3.T.dot(DY) / len(X)
DZ3 = DY.dot(W4.T)*(Z3 > 0)
I know that the different derivatives are computed with the chain rule, but I don't quite understand how this solution is arrived at.

Best answer

Let's apply the chain rule for (partial) derivatives together with the rules of matrix calculus. The figure below shows the last layer of the neural network and the backpropagation of the regression (MSE) error:
[Figure: backpropagation through the output layer of the network for the MSE error]

E = err = (Y - T)**2   (taking the mean over the batch gives the MSE)

DY = ∂E/∂Y = 2 * (Y - T)

∂E/∂W4 = (∂E/∂Y) . (∂Y/∂W4)
       = DY . ∂/∂W4 (A3.W4 + B4)
       = A3.T . DY   (the transpose arranges the shapes to match W4; take the mean over all training examples in the batch X, i.e. sum and divide by the batch size |X|)

∂E/∂B4 = (∂E/∂Y) . (∂Y/∂B4)
       = DY . ∂/∂B4 (A3.W4 + B4)
       = DY . 1 = DY   (take the mean over all the examples in the batch)

∂E/∂Z3 = (∂E/∂Y) . (∂Y/∂A3) . (∂A3/∂Z3)
       = DY . ∂/∂A3 (A3.W4 + B4) . (1·𝟙{Z3 > 0} + 0·𝟙{Z3 ≤ 0})
       = DY . W4.T . 𝟙{Z3 > 0}

where 𝟙(·) is the indicator function: by the definition of the non-linear ReLU activation, the derivative is 1 when Z3 > 0 and 0 otherwise.
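
Applying the same chain-rule pattern once per layer gives the gradients of all the remaining parameters. Below is a minimal sketch of the full backward pass in plain NumPy; it assumes the forward-pass variables from the question (X, T, Z1–Z3, A1–A3, Y, W1–W4) are already in scope, and the names DW1–DW4, DB1–DB4, DZ1–DZ3 are my own, not part of the original post:

# Backward pass (sketch): repeat the chain rule at every layer
DY  = 2*(Y - T)                   # dE/dY
DW4 = A3.T.dot(DY) / len(X)       # average over the batch
DB4 = DY.mean(axis=0)

DZ3 = DY.dot(W4.T) * (Z3 > 0)     # ReLU derivative as a 0/1 mask
DW3 = A2.T.dot(DZ3) / len(X)
DB3 = DZ3.mean(axis=0)

DZ2 = DZ3.dot(W3.T) * (Z2 > 0)
DW2 = A1.T.dot(DZ2) / len(X)
DB2 = DZ2.mean(axis=0)

DZ1 = DZ2.dot(W2.T) * (Z1 > 0)
DW1 = X.T.dot(DZ1) / len(X)
DB1 = DZ1.mean(axis=0)

Each DZ step multiplies by the transpose of the weight matrix in front of it and masks out the units whose pre-activation was non-positive, exactly as in the ∂E/∂Z3 expression above.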

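As a quick sanity check (not part of the original answer), the analytical gradients can be compared against a finite-difference estimate. The sketch below perturbs a single entry of W1; forward_err and eps are hypothetical names introduced only for this illustration:

def forward_err(W1, W2, W3, W4, B1, B2, B3, B4):
    # Recompute the loss for a given set of parameters
    A1 = numpy.maximum(0, X.dot(W1) + B1)
    A2 = numpy.maximum(0, A1.dot(W2) + B2)
    A3 = numpy.maximum(0, A2.dot(W3) + B3)
    Y = A3.dot(W4) + B4
    return ((Y - T)**2).mean()

eps = 1e-5
W1p, W1m = W1.copy(), W1.copy()
W1p[0, 0] += eps                  # perturb one weight up and down
W1m[0, 0] -= eps
numeric = (forward_err(W1p, W2, W3, W4, B1, B2, B3, B4)
           - forward_err(W1m, W2, W3, W4, B1, B2, B3, B4)) / (2*eps)
print(numeric, DW1[0, 0])         # the two values should agree closely
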
Regarding "python - Computing the gradients of a NN in pure Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/66628950/
