
python - Why is my neural network getting the wrong output?


I have written this code for a neural network, but I'm not sure why the output I'm getting is incorrect.

I created a network with two 1x1 layers (one neuron each). The input is a random number between 0 and 1, and that same number is used as the network's desired output. Here are examples of the input (left) and the value actually received (right):

[0.11631148733527708] [0.52613976]

[0.19471305546308992] [0.54367643]

[0.38620499751234083] [0.58595699]

[0.507207377588539] [0.61203927]

[0.9552623183688456] [0.70232115]

Here is my code:

main.py

from NeuralNetwork import NeuralNetwork
from random import random

net = NeuralNetwork((1, 1))
net.learning_rate = 0.01

while True:
    v1 = [random() for i in range(0, 1)]
    actual = v1

    net.input(v1)
    net.actual(actual)

    net.calculate()
    net.backpropagate()

    print(f"{v1} {net.output()}")

NeuralNetwork.py

import numpy as np
from math import e

def sigmoid(x):
    sig_x = 1 / (1 + e**-x)
    return sig_x

def d_sigmoid(x):
    sig_x = 1 / (1 + e**-x)
    d_sig_x = np.dot(sig_x.transpose(), (1 - sig_x))
    return d_sig_x

class NeuralNetwork():
    def __init__(self, sizes):
        self.activations = [np.zeros((size, 1)) for size in sizes]
        self.values = [np.zeros((size, 1)) for size in sizes[1:]]
        self.biases = [np.zeros((size, 1)) for size in sizes[1:]]

        self.weights = [np.zeros((sizes[i + 1], sizes[i])) for i in range(0, len(sizes) - 1)]
        self.activation_functions = [(sigmoid, d_sigmoid) for i in range(0, len(sizes) - 1)]

        self.last_layer_actual = np.zeros((sizes[-1], 1))
        self.learning_rate = 0.01

    def calculate(self):
        for i, activations in enumerate(self.activations[:-1]):
            activation_function = self.activation_functions[i][0]

            self.values[i] = np.dot(self.weights[i], activations) + self.biases[i]
            self.activations[i + 1] = activation_function(self.values[i])

    def backpropagate(self):
        current = 2 * (self.activations[-1] - self.last_layer_actual)
        last_weights = 1

        for i, weights in enumerate(self.weights[::-1]):
            d_activation_func = self.activation_functions[-i - 1][1]

            current = np.dot(last_weights, current)
            current = np.dot(current, d_activation_func(self.values[-i - 1]))

            weights_change = np.dot(current, self.activations[-i - 2].transpose())
            weights -= weights_change * self.learning_rate

            self.biases[-i - 1] -= current * self.learning_rate

            last_weights = weights.transpose()

    def input(self, network_input):
        self.activations[0] = np.array(network_input).reshape(-1, 1)

    def output(self):
        return self.activations[-1].ravel()

    def actual(self, last_layer_actual):
        self.last_layer_actual = np.array(last_layer_actual).reshape(-1, 1)

Best Answer

I just realized that the sigmoid function is not linear.

Therefore, for every output to equal its input, the value required of the single weight cannot be constant: the layer computes sigmoid(w * x + b), and making that equal x for all x would need a different w for each input.

It's as simple as that.
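To make that concrete, here is a minimal sketch (separate from the network code above, with the bias assumed to stay at zero): solving sigmoid(w * x) = x for the single weight w at a few inputs shows that the required weight is different for every input, so no constant weight can map each input onto itself.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# sigmoid(w * x) == x would require w == logit(x) / x,
# and that quantity changes with x.
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    w = np.log(x / (1 - x)) / x  # weight needed at this particular input
    print(f"x={x:.1f}  required w={w:+.3f}  check: sigmoid(w*x)={sigmoid(w * x):.3f}")

Running this prints required weights ranging from roughly -22.0 at x=0.1 to about +2.4 at x=0.9, which is why a single fixed weight can only approximate the identity mapping rather than reproduce it exactly.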

Regarding "python - Why is my neural network getting the wrong output?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57004145/
