python - Weights in a NumPy neural network are not updating and the error stays static


I am trying to build a neural network on the MNIST dataset for a homework assignment. I'm not asking anyone to do the assignment for me; I just cannot figure out why the training accuracy and test accuracy seem to be static for every epoch.

It's as if my method of updating the weights is not working.

Epoch: 0, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 1, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 2, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 3, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
.
.
.

However, when I run the actual forward and backprop lines in a loop, without any of the class or method "fluff", the cost goes down. I just can't seem to get it working in my current class setup.

I have tried building my own methods to explicitly pass the weights and biases between the backprop and feedforward methods, but those changes have done nothing to fix this gradient descent issue.

I'm fairly sure it has to do with how the backward method is defined in the NeuralNetwork class below. I've been struggling to find a way to update the weights while still being able to access the weight and bias variables in the main training loop (a rough sketch of the restructuring I have in mind is shown right after the method below).

def backward(self, Y_hat, Y):
    '''
    Backward pass through network. Update parameters

    INPUT
    Y_hat: Network predicted
        shape: (?, 10)

    Y: Correct target
        shape: (?, 10)

    RETURN
    cost: calculate J for errors
        type: (float)

    '''

    #Naked Backprop
    dJ_dZ2 = Y_hat - Y
    dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
    dJ_db2 = Y_hat - Y
    dJ_dX2 = np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
    dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
    inner_mat = np.matmul(Y-Y_hat,np.transpose(NeuralNetwork.W2))
    dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)
    dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

    lr = 0.1

    # weight updates here
    # just line 'em up and do lr * the dJ_.. vars you found above
    NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
    NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
    NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
    NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

    # calculate the cost
    cost = -1 * np.sum(Y * np.log(Y_hat))

    # calc gradients

    # weight updates

    return cost  #, W1, W2, b1, b2
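
For reference, this is a rough sketch of the structure I think I'm aiming for: the forward pass caches its intermediates on self, and the backward pass reads and updates instance attributes instead of class attributes or globals. The class and variable names here are just placeholders for illustration, not my actual code:

import numpy as np

class TwoLayerSketch:
    # placeholder sketch: parameters live on the instance, forward() caches
    # whatever backward() needs, and backward() updates self.W*/self.b* in place
    def __init__(self, n_in=784, n_hidden=200, n_out=10, lr=0.1):
        self.lr = lr
        self.W1 = np.random.uniform(-1e-3, 1e-3, size=(n_in, n_hidden))
        self.b1 = np.zeros((1, n_hidden))
        self.W2 = np.random.uniform(-1e-3, 1e-3, size=(n_hidden, n_out))
        self.b2 = np.zeros((1, n_out))

    def forward(self, X):
        # cache the input and intermediates for the backward pass
        self.X = X
        self.Z1 = X @ self.W1 + self.b1
        self.X2 = 1.0 / (1.0 + np.exp(-self.Z1))      # sigmoid activation
        Z2 = self.X2 @ self.W2 + self.b2
        exps = np.exp(Z2 - np.max(Z2))                 # numerically stable softmax
        self.Y_hat = exps / np.sum(exps)
        return self.Y_hat

    def backward(self, Y):
        # gradients built from the cached forward-pass values
        dZ2 = self.Y_hat - Y                           # (1, n_out)
        dW2 = self.X2.T @ dZ2                          # (n_hidden, n_out)
        db2 = dZ2
        dX2 = dZ2 @ self.W2.T                          # (1, n_hidden)
        dZ1 = dX2 * self.X2 * (1.0 - self.X2)          # sigmoid derivative at Z1
        dW1 = self.X.T @ dZ1                           # (n_in, n_hidden)
        db1 = dZ1

        # update this instance's parameters
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * db1

        # cross-entropy cost (small epsilon guards against log(0))
        return -np.sum(Y * np.log(self.Y_hat + 1e-12))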

I'm really confused, and any help would be greatly appreciated!

The full code is shown here...

import keras
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist

np.random.seed(0)

"""### Load MNIST Dataset"""

(x_train, y_train), (x_test, y_test) = mnist.load_data()

X = x_train[0].reshape(1,-1)/255.; Y = y_train[0]
zeros = np.zeros(10); zeros[Y] = 1
Y = zeros

# Here we implement the forward pass for the network using the single example, $X$, from above

"""### Initialize Weights and Biases"""

num_hidden_nodes = 200
num_classes = 10

# init weights
# first set of weights (these are what the input matrix is multiplied by)
W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
# this is the first bias layer and i think it's a 200 dimensional vector of the biases that go into each neuron before the sigmoid function.
b1 = np.zeros((1,num_hidden_nodes))

# again these are the weights for the 2nd layer that are multiplied by the activation output of the 1st layer
W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
# these are the biases that are added to each neuron before the final softmax activation.
b2 = np.zeros((1,num_classes))


# multiply input with weights
Z1 = np.add(np.matmul(X,W1), b1)

def sigmoid(z):
    return 1 / (1 + np.exp(- z))

def d_sigmoid(g):
    return sigmoid(g) * (1. - sigmoid(g))

# activation function of Z1
X2 = sigmoid(Z1)


Z2 = np.add(np.matmul(X2,W2), b2)

# softmax
def softmax(z):
    # subtracting the max adds numerical stability
    shiftx = z - np.max(z)
    exps = np.exp(shiftx)
    return exps / np.sum(exps)

def d_softmax(Y_hat, Y):
    return Y_hat - Y

# the hypothesis
Y_hat = softmax(Z2)

"""Initially the network guesses all categories equally. As we perform backprop the network will get better at discerning images and their categories."""


"""### Calculate Cost"""

cost = -1 * np.sum(Y * np.log(Y_hat))


# so i think the main thing here is like a nested chain rule thing, where we find the change in the cost with respect to each
# set of matrix weights and biases?

# here is probably the order of how we do things based on what's in the math below...
'''
1. find the partial deriv of the cost function with respect to the output of the second layer, without the softmax it looks like for some reason?
2. find the partial deriv of the cost function with respect to the weights of the second layer, which is dope cause we can re-use the partial deriv from step 1
3. this one I know intuitively we're looking for the partial deriv of cost with respect to the bias term of the second layer, but how TF does that math translate into
   numpy? is that the same y_hat - Y from the first step? where is there another Y_hat - y?
4. This is also confusing cause I know where to get the weights for layer 2 from and how to transpose them, but again, where is the Y_hat - Y?
5. Here we take the missing partial deriv from step 4 and multiply it by the d_sigmoid function of the first layer outputs before activations.
6. In this step we multiply the first layer weights (transposed) by the var from 5
7. And this is weird too, this just seems like the same step as number 5 repeated for some reason but with y-y_hat instead of y_hat-y
'''
# look at tutorials like this https://www.youtube.com/watch?v=7qYtIveJ6hU
# I think most of the backprop layer steps are fine without biases, but how do we find the bias derivatives?

# maybe just the hypothesis matrix minus the actual y matrix?
dJ_dZ2 = Y_hat - Y


# find partial deriv of cost w respect to 2nd layer weights
dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)


# finding the partial deriv of cost with respect to the 2nd layer biases
# I'm still not 100% sure why this is here and why it works out to Y_hat - Y
dJ_db2 = Y_hat - Y


# finding the partial deriv of cost with respect to 2nd layer inputs
dJ_dX2 = np.matmul(dJ_db2, np.transpose(W2))


# finding the partial deriv of cost with respect to Activation of layer 1
dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)


# y-yhat matmul 2nd layer weights
# I added the transpose to the W2 var because the matrices were not compatible sizes without it
inner_mat = np.matmul(Y-Y_hat,np.transpose(W2))
dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)


class NeuralNetwork:
    # set learning rate
    lr = 0.01

    # init weights
    W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
    b1 = np.zeros((1,num_hidden_nodes))

    W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
    b2 = np.zeros((1,num_classes))


    def __init__(self, num_hidden_nodes, num_classes, lr=0.01):
        '''
        # set learning rate
        lr = lr

        # init weights
        W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
        b1 = np.zeros((1,num_hidden_nodes))

        W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
        b2 = np.zeros((1,num_classes))
        '''

    def forward(self, X1):
        '''
        Forward pass through the network

        INPUT
        X: input to network
            shape: (?, 784)

        RETURN
        Y_hat: prediction from output of network
            shape: (?, 10)
        '''
        Z1 = np.add(np.matmul(X,W1), b1)
        X2 = sigmoid(Z1)  # activation function of Z1
        Z2 = np.add(np.matmul(X2,W2), b2)
        Y_hat = softmax(Z2)

        # return the hypothesis
        return Y_hat

        # store input for backward pass

        # you can basically copy and paste what you did in the forward pass above here

        # think about what you need to store for the backward pass

        return

    def backward(self, Y_hat, Y):
        '''
        Backward pass through network. Update parameters

        INPUT
        Y_hat: Network predicted
            shape: (?, 10)

        Y: Correct target
            shape: (?, 10)

        RETURN
        cost: calculate J for errors
            type: (float)

        '''

        #Naked Backprop
        dJ_dZ2 = Y_hat - Y
        dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
        dJ_db2 = Y_hat - Y
        dJ_dX2 = np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
        dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
        inner_mat = np.matmul(Y-Y_hat,np.transpose(NeuralNetwork.W2))
        dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)
        dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

        lr = 0.1

        # weight updates here
        # just line 'em up and do lr * the dJ_.. vars you found above
        NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
        NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
        NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
        NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

        # calculate the cost
        cost = -1 * np.sum(Y * np.log(Y_hat))

        # calc gradients

        # weight updates

        return cost  #, W1, W2, b1, b2


nn = NeuralNetwork(200,10,lr=.01)
num_train = float(len(x_train))
num_test = float(len(x_test))

for epoch in range(10):
    train_correct = 0; train_cost = 0
    # training loop
    for i in range(len(x_train)):
        x = x_train[i]; y = y_train[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass through network
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was accurate
        if pred_num == y:
            train_correct += 1

        # make a one hot categorical vector; same as keras.utils.to_categorical()
        zeros = np.zeros(10); zeros[y] = 1
        Y = zeros

        # compute gradients and update weights
        train_cost += nn.backward(Y_hat, Y)

    test_correct = 0
    # validation loop
    for i in range(len(x_test)):
        x = x_test[i]; y = y_test[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was correct
        if pred_num == y:
            test_correct += 1

        # no backward pass here!

    # compute average metrics for train and test
    train_correct = round(100*(train_correct/num_train), 2)
    test_correct = round(100*(test_correct/num_test ), 2)
    train_cost = round( train_cost/num_train, 2)

    # print status message every epoch
    log_message = 'Epoch: {epoch}, Train Accuracy: {train_acc}%, Train Cost: {train_cost}, Test Accuracy: {test_acc}%'.format(
        epoch=epoch,
        train_acc=train_correct,
        train_cost=train_cost,
        test_acc=test_correct
    )
    print(log_message)



The project is also available in this Colab and ipynb notebook.

Best Answer

I believe this is made quite clear in this part of the loop:

for epoch in range(10):
    train_correct = 0; train_cost = 0
    # training loop
    for i in range(len(x_train)):
        x = x_train[i]; y = y_train[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass through network
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was accurate
        if pred_num == y:
            train_correct += 1

        # make a one hot categorical vector; same as keras.utils.to_categorical()
        zeros = np.zeros(10); zeros[y] = 1
        Y = zeros

        # compute gradients and update weights
        train_cost += nn.backward(Y_hat, Y)

    test_correct = 0
    # validation loop
    for i in range(len(x_test)):
        x = x_test[i]; y = y_test[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was correct
        if pred_num == y:
            test_correct += 1

        # no backward pass here!

    # compute average metrics for train and test
    train_correct = round(100*(train_correct/num_train), 2)
    test_correct = round(100*(test_correct/num_test ), 2)
    train_cost = round( train_cost/num_train, 2)

    # print status message every epoch
    log_message = 'Epoch: {epoch}, Train Accuracy: {train_acc}%, Train Cost: {train_cost}, Test Accuracy: {test_acc}%'.format(
        epoch=epoch,
        train_acc=train_correct,
        train_cost=train_cost,
        test_acc=test_correct
    )
    print(log_message)

For each of the 10 epochs in the loop, you set train_correct and train_cost to 0, so nothing is carried over after each epoch.
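
For illustration, here is a rough sketch of that adjustment only: the running totals are created once, before the epoch loop, instead of being zeroed at the top of every epoch, and the reported numbers are cumulative averages over everything seen so far. It reuses nn, x_train, y_train, num_train and np from the question's code.

# sketch only: initialize the accumulators once, outside the epoch loop
train_correct = 0
train_cost = 0.0

for epoch in range(10):
    for i in range(len(x_train)):
        X = x_train[i].reshape(1, 784) / 255.    # scale pixels to [0, 1]
        y = y_train[i]

        Y_hat = nn.forward(X)                    # forward pass
        if np.argmax(Y_hat) == y:                # count correct predictions
            train_correct += 1

        Y = np.zeros(10); Y[y] = 1               # one-hot target
        train_cost += nn.backward(Y_hat, Y)      # gradients and weight update

    seen = (epoch + 1) * num_train               # examples processed so far
    print('Epoch: {}, Train Accuracy: {}%, Train Cost: {}'.format(
        epoch, round(100 * train_correct / seen, 2), round(train_cost / seen, 2)))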

Regarding "python - Weights in a NumPy neural network are not updating and the error stays static", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57939318/
