I built a model in Keras to train a neural network to imitate the behavior of a system I developed in MATLAB. I exported the input and output data from MATLAB to feed into Keras. Whenever I train, the accuracy is 0.00% and the loss is always 382.9722....
I have tried everything (more hidden layers, different activation functions, batch sizes, epochs, etc.), but nothing seems to fix it. I would appreciate it if someone could tell me whether there is a problem with my code or my data.
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

data = pd.read_csv('testkeras.txt')
print(data.head())
Y = data.output
X = data.drop('output', axis=1)
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.5)

model = Sequential()
# 'init' from Keras 1 is 'kernel_initializer' in Keras 2
model.add(Dense(units=64, input_dim=6, kernel_initializer='uniform',
                activation='relu'))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=16, activation='relu'))
model.add(Dense(1, activation='sigmoid')) #output layer

model.compile(optimizer='rmsprop', loss='mean_absolute_error',
              metrics=['acc'])
history = model.fit(xtrain, ytrain, batch_size=2048, epochs=20,
                    validation_split=0.2, verbose=1)
score = model.evaluate(xtest, ytest, batch_size=2048)
print(score)
Sample raw data from MATLAB (the first 6 columns are inputs, the last column is the output):
2,2,2,2,2,2,2.5404e+05
2,2,2,2,2,2,2.5404e+05
2,2,1.9998,1.9998,1.9998,1.9998,2.5404e+05
2,2,1.9988,1.9988,1.9988,1.9988,2.5404e+05
2,2,1.9938,1.9938,1.9938,1.9938,2.5404e+05
2,2,1.9687,1.9687,1.9687,1.9687,2.5403e+05
2,2,1.8431,1.8431,1.8431,1.8431,2.5401e+05
2,2,1.2153,1.2153,1.2153,1.2153,2.5388e+05
2,2,-1.9186,-1.9186,-1.9186,-1.9186,2.5324e+05
2,2,-17.469,-17.469,-17.469,-17.469,2.5007e+05
2,1.9997,-92.331,-92.331,-92.331,-92.331,2.3481e+05
2,1.9936,-402.94,-402.94,-402.94,-402.94,1.7135e+05
2,1.9724,-723.02,-723.02,-723.02,-723.02,1.0558e+05
2,1.9373,-938.65,-938.65,-938.65,-938.65,60759
1.9999,1.8683,-1105.7,-1105.7,-1105.7,-1105.7,24988
1.9999,1.8212,-1152.8,-1152.8,-1152.8,-1152.8,14210
1.9997,1.7097,-1190.6,-1190.6,-1190.6,-1190.6,3712
1.9996,1.6936,-1192.1,-1192.1,-1192.1,-1192.1,3012.4
1.9994,1.6126,-1192.5,-1192.5,-1192.5,-1192.5,898.37
1.9992,1.5645,-1189.5,-1189.5,-1189.5,-1189.5,291.6
1.9987,1.4363,-1176.9,-1176.9,-1176.9,-1176.9,-362.02
1.9981,1.3097,-1161.9,-1161.9,-1161.9,-1161.9,-523.72
1.9974,1.1848,-1146.5,-1146.5,-1146.5,-1146.5,-564.79
1.9965,1.0615,-1131.1,-1131.1,-1131.1,-1131.1,-576.24
1.9955,0.93983,-1115.8,-1115.8,-1115.8,-1115.8,-580.39
1.9944,0.81985,-1100.6,-1100.6,-1100.6,-1100.6,-582.7
1.9931,0.70149,-1085.6,-1085.6,-1085.6,-1085.6,-584.53
1.9918,0.58475,-1070.7,-1070.7,-1070.7,-1070.7,-586.19
1.9903,0.46962,-1055.9,-1055.9,-1055.9,-1055.9,-587.78
1.9887,0.35607,-1041.3,-1041.3,-1041.3,-1041.3,-589.31
1.987,0.2441,-1026.8,-1026.8,-1026.8,-1026.8,-590.78
1.9852,0.13368,-1012.4,-1012.4,-1012.4,-1012.4,-592.21
1.9833,0.024813,-998.22,-998.22,-998.22,-998.22,-593.58
1.9813,-0.082527,-984.13,-984.13,-984.13,-984.13,-594.9
1.9791,-0.18835,-970.17,-970.17,-970.17,-970.17,-596.17
1.9769,-0.29267,-956.34,-956.34,-956.34,-956.34,-597.4
1.9745,-0.39551,-942.64,-942.64,-942.64,-942.64,-598.57
1.9721,-0.49687,-929.07,-929.07,-929.07,-929.07,-599.7
1.9695,-0.59677,-915.62,-915.62,-915.62,-915.62,-600.78
The X-train data is:
3492 -0.49055 2.0 2.0 2.0 2.0 2.0
9730 -0.49055 2.0 2.0 2.0 2.0 2.0
3027 -0.49055 2.0 2.0 2.0 2.0 2.0
4307 -0.49055 2.0 2.0 2.0 2.0 2.0
3364 -0.49055 2.0 2.0 2.0 2.0 2.0
(5008, 6)
The Y column data is:
3492 -1.333700e-06
9730 5.215400e-08
3027 4.209600e-06
4307 5.215400e-08
3364 5.215400e-08
Name: output, dtype: float64
(5008,)
Best Answer
As mentioned in the comments, this is a regression problem, so accuracy is meaningless here.
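For a regression model it makes more sense to monitor error metrics. A minimal sketch (assuming the same model object as in the question), compiling with MAE/MSE instead of accuracy:

model.compile(optimizer='rmsprop',
              loss='mean_absolute_error',
              metrics=['mae', 'mse'])  # regression metrics instead of 'acc'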
But there is another problem in your code. The activation function of your last layer is sigmoid:
model.add(Dense(1, activation='sigmoid')) #output layer
The Sigmoid Function is bounded between 0 and 1, which means the output of the network can never be smaller than 0 or larger than 1. Your targets, however, are large and partly negative, so the network can never get close to them. I see two options to fix this: either change the output activation to a linear one, or scale your targets into the (0, 1) range before training.
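A minimal sketch of the first option, assuming the rest of the model stays as in the question: a linear output activation is unbounded and can therefore reach the large negative targets.

model = Sequential()
model.add(Dense(64, input_dim=6, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='linear'))  # unbounded output, can go negative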
Scaling your input data may also improve performance (lower error, faster learning). It is usually scaled to zero mean and unit variance, which is called standardization. You can do this, for example, with sklearn's StandardScaler:
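A minimal sketch with StandardScaler, assuming the xtrain/xtest splits from the question (fit on the training split only, so no test-set statistics leak into training):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
xtrain = scaler.fit_transform(xtrain)  # learn mean/std on the training data
xtest = scaler.transform(xtest)        # apply the same statistics to the test data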
Also, something seems to be wrong with your training data:
3492 -0.49055 2.0 2.0 2.0 2.0 2.0
9730 -0.49055 2.0 2.0 2.0 2.0 2.0
3027 -0.49055 2.0 2.0 2.0 2.0 2.0
4307 -0.49055 2.0 2.0 2.0 2.0 2.0
3364 -0.49055 2.0 2.0 2.0 2.0 2.0
Every row here is identical, while your labels (y) differ. You cannot build a network that maps identical inputs to different outputs.
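As a quick sanity check (a sketch, assuming the DataFrame from the question), you can count how many distinct labels each unique input row is mapped to:

feature_cols = [c for c in data.columns if c != 'output']
conflicts = data.groupby(feature_cols)['output'].nunique()
print((conflicts > 1).sum(), 'input rows map to more than one output value')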
For python - Why is the accuracy always 0.00% and the loss high in deep learning with Keras, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55536102/