I've created my own very simple 1-layer neural network, specialised in binary classification problems. The input data points are multiplied by the weights and a bias is added. The whole thing is summed (the weighted sum) and fed through an activation function such as relu or sigmoid. That is the prediction output. There are no other layers (i.e. hidden layers) involved.
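To make the setup concrete, here is a minimal sketch of the forward pass I mean (a standalone illustration with made-up values, not taken from the full code below):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One observation with 2 features, one weight per feature, plus a scalar bias:
x = np.array([3.0, 1.5])
w = np.array([0.1, 0.2])
b = 0.05

weighted_sum = np.dot(x, w) + b   # w1*x1 + w2*x2 + b
pred = sigmoid(weighted_sum)      # the activation output is the prediction
print(pred)                       # a value in (0, 1), read as the class score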
Purely for my own understanding of the mathematical side, I didn't want to use an existing library/package (e.g. Keras, PyTorch, Scikit-learn, etc.), but just wanted to create a neural network with simple Python code. The model is built in a method (simple_1_layer_classification_NN) that takes the necessary parameters to make a prediction. However, I ran into some problems, which I've listed below along with my code.
P.s. I sincerely apologise for including this much code, but I didn't see how else to ask the questions without referring to the relevant code.
Questions:
1 - When I train the network on a training dataset, the final average accuracy differs completely for different numbers of epochs, with absolutely no clear pattern pointing to some optimal number of epochs. I kept the other parameters the same: learning rate = 0.5, activation = sigmoid (since it is a single layer, being both the input layer and the output layer, with no hidden layers involved; I've read that sigmoid is more suitable for the output layer than relu), cost function = squared error. Here are the results for different numbers of epochs:
Epochs = 100,000. Average Accuracy: 50.10541638874056
Epochs = 500,000. Average Accuracy: 50.08965597645948
Epochs = 1,000,000. Average Accuracy: 97.56879179064482
Epochs = 7,500,000. Average Accuracy: 49.994692515332524
Epochs = 750,000. Average Accuracy: 77.0028368954157
Epochs = 100. Average Accuracy: 48.96967591507596
Epochs = 500. Average Accuracy: 48.20721972881673
Epochs = 10,000. Average Accuracy: 71.58066454336122
Epochs = 50,000. Average Accuracy: 62.52998222597177
Epochs = 100,000. Average Accuracy: 49.813675726563424
Epochs = 1,000,000. Average Accuracy: 49.993141329926374
As you can see, there seems to be no clear pattern. I tried 1 million epochs and got 97.6% accuracy. Then I tried 7.5 million epochs and got 50% accuracy; 500,000 epochs also gave 50% accuracy. 100 epochs resulted in 49% accuracy. And then the really strange one: trying 1 million epochs again gave 50%.
So I'm sharing my code below, because I don't believe the network is doing any learning. It seems to be just making random guesses. I applied the concepts of backpropagation and partial derivatives to optimise the weights and bias, so I'm not sure where my code is going wrong.
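For reference, this is the gradient I'm trying to implement, written out for one weight with sigmoid activation and squared-error cost (just the chain rule, matching the derivative functions in the code below):

# z = w1*x1 + w2*x2 + b,  pred = sigmoid(z),  cost = (pred - target)**2
# dCost/dw_i = dCost/dPred * dPred/dz * dz/dw_i
#            = 2*(pred - target) * sigmoid(z)*(1 - sigmoid(z)) * x_i
# dCost/db   = 2*(pred - target) * sigmoid(z)*(1 - sigmoid(z)) * 1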
2 - One of the parameters in the simple_1_layer_classification_NN method's parameter list is input_dimension. At first I thought it was needed to work out the number of weights required for the input layer. Then I realised that, as long as the dataset_input_matrix (the feature matrix) argument is passed to the method, I can pick a random index of the matrix to get a random observation vector (input_observation_vector = dataset_input_matrix[ri]), and then iterate over that observation to access each feature. The number of iterations over (i.e. the length of) the observation vector tells me exactly how many weights are needed, since each feature needs one weight as its coefficient. So len(input_observation_vector) gives the number of weights needed for the input layer, and therefore I don't need the user to pass an input_dimension argument to the method.
So my question is simply: is there ever any need/reason for an input_dimension parameter, when it can be worked out just by evaluating the length of an observation vector from the input matrix?
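For illustration, this is the kind of inference I mean; assuming dataset_input_matrix is a 2-D numpy array, the line below makes the explicit parameter redundant (variable values are just for this sketch):

import numpy as np

dataset_input_matrix = np.array([[3, 1.5], [2, 1], [4, 1.5]])

input_dimension = dataset_input_matrix.shape[1]   # number of feature columns
# equivalently: input_dimension = len(dataset_input_matrix[0])
print(input_dimension)                            # 2 -> one weight per feature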
3 - When I try to plot the array of cost values, nothing shows up - plt.plot(y_costs). A cost value (produced each epoch) is only appended to the costs array every 50th epoch, to avoid adding too many cost elements to the array when the number of epochs is really high. At the lines:

if i % 50 == 0:
    costs.append(cost)

the costs array comes out empty. I'm not sure why that is, when it should append a cost value every 50th epoch. It's probably something really silly that I'm overlooking.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import sys
# import os

class NN_classification:

    def __init__(self):
        self.bias = float()
        self.weights = []
        self.chosen_activation_func = None
        self.chosen_cost_func = None
        self.train_average_accuracy = int()
        self.test_average_accuracy = int()

    # -- Activation functions --:
    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def relu(x):
        return np.maximum(0.0, x)

    # -- Derivative of activation functions --:
    def sigmoid_derivation(x):
        return NN_classification.sigmoid(x) * (1 - NN_classification.sigmoid(x))

    def relu_derivation(x):
        if x <= 0:
            return 0
        else:
            return 1

    # -- Squared-error cost function --:
    def squared_error(pred, target):
        return np.square(pred - target)

    # -- Derivative of squared-error cost function --:
    def squared_error_derivation(pred, target):
        return 2 * (pred - target)

    # --- neural network structure diagram ---
    #       O       output prediction
    #      / \      w1, w2, b
    #     O   O     datapoint 1, datapoint 2
    def simple_1_layer_classification_NN(self, dataset_input_matrix, output_data_labels, input_dimension, epochs, activation_func='sigmoid', learning_rate=0.2, cost_func='squared_error'):
        weights = []
        bias = int()
        cost = float()
        costs = []
        dCost_dWeights = []
        chosen_activation_func_derivation = None
        chosen_cost_func = None
        chosen_cost_func_derivation = None
        correct_pred = int()
        incorrect_pred = int()
        # Store the chosen activation function to use later on in the activation calculation section and in the 'predict' method.
        # The same goes for the derivation section.
        if activation_func == 'sigmoid':
            self.chosen_activation_func = NN_classification.sigmoid
            chosen_activation_func_derivation = NN_classification.sigmoid_derivation
        elif activation_func == 'relu':
            self.chosen_activation_func = NN_classification.relu
            chosen_activation_func_derivation = NN_classification.relu_derivation
        else:
            print("Exception error - no activation function utilised, in training method", file=sys.stderr)
            return
        # Store the chosen cost function to use later on in the cost calculation section.
        # The same goes for the cost derivation section.
        if cost_func == 'squared_error':
            chosen_cost_func = NN_classification.squared_error
            chosen_cost_func_derivation = NN_classification.squared_error_derivation
        else:
            print("Exception error - no cost function utilised, in training method", file=sys.stderr)
            return
        # Set initial network parameters (weights & bias):
        # Initialise the weights from a uniform distribution, keeping the numbers small and close to 0.
        # Loop through all the weights to set each one to a random value initially.
        for i in range(input_dimension):
            # Create random numbers for our initial weights (connections). 'rand' creates small random numbers.
            w = np.random.rand()
            weights.append(w)
        # Create a random number for our initial bias.
        bias = np.random.rand()
        # We perform the training based on the number of epochs specified.
        for i in range(epochs):
            # Create a random index.
            ri = np.random.randint(len(dataset_input_matrix))
            # Pick a random observation vector of independent variables (x) from the dataset matrix.
            input_observation_vector = dataset_input_matrix[ri]
            # Reset the weighted sum at the beginning of every epoch to avoid accumulating previous observations' weighted sums.
            weighted_sum = 0
            # Loop through all the independent variables (x) in the observation.
            for i in range(len(input_observation_vector)):
                # Weighted sum: multiply each independent variable in the observation by its weight and add it to the subtotal.
                weighted_sum += input_observation_vector[i] * weights[i]
            # Add the bias to the weighted sum.
            weighted_sum += bias
            # Activation: pass weighted_sum through the activation function.
            activation_func_output = self.chosen_activation_func(weighted_sum)
            # Prediction: because this is a single-layer neural network, the activation output is the prediction.
            pred = activation_func_output
            # Cost: the cost function calculates the prediction error margin.
            cost = chosen_cost_func(pred, output_data_labels[ri])
            # Also calculate the derivative of the cost function with respect to the prediction.
            dCost_dPred = chosen_cost_func_derivation(pred, output_data_labels[ri])
            # Derivative of the prediction with respect to the weighted sum, via the activation function used.
            dPred_dWeightSum = chosen_activation_func_derivation(weighted_sum)
            # The bias is just a number added to the weighted sum, so its derivative is 1.
            dWeightSum_dB = 1
            # The derivative of the weighted sum with respect to each weight is the input data point / independent variable it's multiplied by.
            # Therefore I simply assigned the input data array to another variable called 'dWeightedSum_dWeights'
            # to represent the array of derivatives for all the weights involved. I could have used the
            # observation vector itself, but for readability I created a separate variable for the derivatives.
            dWeightedSum_dWeights = input_observation_vector
            # Chain rule: chain all the derivative functions together.
            # Loop through all the weights to work out the derivative of the cost with respect to each weight:
            for dWeightedSum_dWeight in dWeightedSum_dWeights:
                dCost_dWeight = dCost_dPred * dPred_dWeightSum * dWeightedSum_dWeight
                dCost_dWeights.append(dCost_dWeight)
            dCost_dB = dCost_dPred * dPred_dWeightSum * dWeightSum_dB
            # Backpropagation: update the weights and bias according to the derivatives calculated above.
            # In other words we update the parameters of the neural network towards more correct values,
            # optimising the prediction to be as close to the real output as possible.
            # Loop through each weight and update it with the derivative of the cost with respect to that weight.
            for i in range(len(weights)):
                weights[i] = weights[i] - learning_rate * dCost_dWeights[i]
            bias = bias - learning_rate * dCost_dB
            # Every 50th loop, record the cost and summarise the prediction against the actual output
            # to see if the prediction is as expected. Any prediction above 0.5 should match an actual
            # output of 1; any prediction below 0.5 should match an actual output of 0.
            if i % 50 == 0:
                costs.append(cost)
            # Compare prediction to target.
            error_margin = np.sqrt(np.square(pred - output_data_labels[ri]))
            accuracy = (1 - error_margin) * 100
            self.train_average_accuracy += accuracy
            # Evaluate whether the guess was correct, based on the binary 0/1 classification outcome.
            # An error margin below 0.5 counts as a correct guess; 0.5 or above counts as incorrect.
            # Exactly 0.5 is treated as incorrect, since it's not a good guess for either 0 or 1.
            # We need to set a good standard for the neural net model.
            if (error_margin < 0.5) and (error_margin >= 0):
                correct_pred += 1
            elif (error_margin >= 0.5) and (error_margin <= 1):
                incorrect_pred += 1
            else:
                print("Exception error - 'margin error' for 'predict' method is out of range. Must be between 0 and 1, in training method", file=sys.stderr)
                return
        # Store the final optimised weights in the instance variable so they can be used in the predict method.
        self.weights = weights
        # Store the final optimised bias in the instance variable so it can be used in the predict method.
        self.bias = bias
        # Calculate the average accuracy from the predictions over all observations in the training dataset.
        self.train_average_accuracy /= epochs
        # Print out results.
        print('Average Accuracy: {}'.format(self.train_average_accuracy))
        print('Correct predictions: {}, Incorrect Predictions: {}'.format(correct_pred, incorrect_pred))
        print('costs = {}'.format(costs))
        y_costs = np.array(costs)
        plt.plot(y_costs)
        plt.show()

from numpy import array
# Define the dataset array.
# Each observation vector has 3 datapoints or 3 columns: length, width, and outcome label
# (0 and 1 represent a blue flower and a red flower respectively).
data = array([[3,   1.5, 1],
              [2,   1,   0],
              [4,   1.5, 1],
              [3,   1,   0],
              [3.5, 0.5, 1],
              [2,   0.5, 0],
              [5.5, 1,   1],
              [1,   1,   0]])
# Separate the data: split input, output, train and test data.
X_train, y_train, X_test, y_test = data[:6, :-1], data[:6, -1], data[6:, :-1], data[6:, -1]
nn_model = NN_classification()
nn_model.simple_1_layer_classification_NN(X_train, y_train, 2, 1000000, learning_rate=0.5)
Best Answer
Have you tried decreasing the learning rate? If it is too high, your network may be skipping over local minima.
Here's an article that goes deeper into learning rates: https://towardsdatascience.com/understanding-learning-rates-and-how-it-improves-performance-in-deep-learning-d0d4059c1c10
The reason the cost is never appended is that you are using the same variable 'i' within the nested for loops.
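A minimal standalone sketch of that shadowing problem (not your exact code): by the time the if check runs, i no longer holds the epoch number but the final index of the last inner loop, so the condition almost never fires:

costs = []
for i in range(1000):        # outer i: the epoch counter
    for i in range(2):       # inner i overwrites it; each epoch ends with i == 1
        pass
    if i % 50 == 0:          # tests the leftover inner i (always 1 here)
        costs.append(i)
print(costs)                 # [] -> this is why your costs array stays empty

Renaming the inner loop variables (e.g. to j and k) restores the intended every-50th-epoch behaviour. Below is your training loop again, with a learning-rate adjustment added at the end: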
# We perform the training based on the number of epochs specified
for i in range(epochs):
    # Create a random index
    ri = np.random.randint(len(dataset_input_matrix))
    # Pick a random observation vector of independent variables (x) from the dataset matrix
    input_observation_vector = dataset_input_matrix[ri]
    # Reset the weighted sum at the beginning of every epoch to avoid accumulating previous observations' weighted sums
    weighted_sum = 0
    # Loop through all the independent variables (x) in the observation
    for i in range(len(input_observation_vector)):
        # Weighted sum: multiply each independent variable by its weight and add it to the subtotal
        weighted_sum += input_observation_vector[i] * weights[i]
    # Add the bias to the weighted sum
    weighted_sum += bias
    # Activation: pass weighted_sum through the activation function
    activation_func_output = self.chosen_activation_func(weighted_sum)
    # Prediction: for a single-layer network the activation output is the prediction
    pred = activation_func_output
    # Cost: calculate the prediction error margin
    cost = chosen_cost_func(pred, output_data_labels[ri])
    # Derivative of the cost function with respect to the prediction
    dCost_dPred = chosen_cost_func_derivation(pred, output_data_labels[ri])
    # Derivative of the prediction with respect to the weighted sum
    dPred_dWeightSum = chosen_activation_func_derivation(weighted_sum)
    # The bias is just a number added to the weighted sum, so its derivative is 1
    dWeightSum_dB = 1
    # The derivative of the weighted sum with respect to each weight is the input value it multiplies
    dWeightedSum_dWeights = input_observation_vector
    # Chain rule: work out the derivative of the cost with respect to each weight
    for dWeightedSum_dWeight in dWeightedSum_dWeights:
        dCost_dWeight = dCost_dPred * dPred_dWeightSum * dWeightedSum_dWeight
        dCost_dWeights.append(dCost_dWeight)
    dCost_dB = dCost_dPred * dPred_dWeightSum * dWeightSum_dB
    # Backpropagation: update the weights and bias according to the derivatives calculated above
    for i in range(len(weights)):
        weights[i] = weights[i] - learning_rate * dCost_dWeights[i]
    bias = bias - learning_rate * dCost_dB
    # Every 50th loop, record the cost and compare the prediction to the actual output
    if i % 50 == 0:
        costs.append(cost)
    # Compare prediction to target
    error_margin = np.sqrt(np.square(pred - output_data_labels[ri]))
    accuracy = (1 - error_margin) * 100
    self.train_average_accuracy += accuracy
    # Modify the learning rate based on the cost
    # Placed just before the bias is calculated
    learning_rate = 0.999 * learning_rate + 0.1 * cost
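As a toy, standalone demonstration of how that schedule behaves (the cost values here are made up): the update has a fixed point at learning_rate == 100 * cost (since 0.001 * learning_rate == 0.1 * cost), so a large cost pushes the rate up, and as the cost falls towards 0 the rate decays by a factor of 0.999 per epoch:

learning_rate = 0.5
for cost in [0.25, 0.20, 0.10, 0.05, 0.01, 0.0]:
    learning_rate = 0.999 * learning_rate + 0.1 * cost
    print(round(learning_rate, 4))   # rises while the cost is large, then decays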
This question about python - improving a simple 1-layer neural network comes from Stack Overflow: https://stackoverflow.com/questions/55620438/