
machine-learning - Setting up an MLP for binary classification with TensorFlow


I'm having some trouble setting up a multilayer perceptron for binary classification using TensorFlow.

I have a very large dataset (about 1.5×10^6 examples), each with a binary (0/1) label and 100 features. What I need to do is set up a simple MLP and then try varying the learning rate and the initialization pattern, documenting the results (it's an assignment). I'm getting strange results, though: my MLP seems to get stuck early at a cost that is low but not as low as it should be, and it never gets off of it. With fairly low learning rate values the cost goes to NaN almost right away. I don't know whether the problem lies in how I built the MLP (I made several attempts; I'm posting the code of the last one) or whether I'm missing something in my TensorFlow implementation.

Code

import tensorflow as tf
import numpy as np
import scipy.io

# Import and transform dataset
print("Importing dataset.")
dataset = scipy.io.mmread('tfidf_tsvd.mtx')

with open('labels.txt') as f:
    all_labels = f.readlines()

all_labels = np.asarray(all_labels)
all_labels = all_labels.reshape((1498271,1))

# Split dataset into training (66%) and test (33%) set
training_set = dataset[0:1000000]
training_labels = all_labels[0:1000000]
test_set = dataset[1000000:1498272]
test_labels = all_labels[1000000:1498272]

print("Dataset ready.")

# Parameters
learning_rate = 0.01 #argv
mini_batch_size = 100
training_epochs = 10000
display_step = 500

# Network Parameters
n_hidden_1 = 64 # 1st hidden layer of neurons
n_hidden_2 = 32 # 2nd hidden layer of neurons
n_hidden_3 = 16 # 3rd hidden layer of neurons
n_input = 100 # number of features after LSA

# Tensorflow Graph input
x = tf.placeholder(tf.float64, shape=[None, n_input], name="x-data")
y = tf.placeholder(tf.float64, shape=[None, 1], name="y-labels")

print("Creating model.")

# Create model
def multilayer_perceptron(x, weights):
    # First hidden layer with SIGMOID activation
    layer_1 = tf.matmul(x, weights['h1'])
    layer_1 = tf.nn.sigmoid(layer_1)
    # Second hidden layer with SIGMOID activation
    layer_2 = tf.matmul(layer_1, weights['h2'])
    layer_2 = tf.nn.sigmoid(layer_2)
    # Third hidden layer with SIGMOID activation
    layer_3 = tf.matmul(layer_2, weights['h3'])
    layer_3 = tf.nn.sigmoid(layer_3)
    # Output layer (note: uses layer_2 and has no activation; both points come up in the edits below)
    out_layer = tf.matmul(layer_2, weights['out'])
    return out_layer

# Layer weights, should change them to see results
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], dtype=np.float64)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], dtype=np.float64)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], dtype=np.float64)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64))
}

# Construct model
pred = multilayer_perceptron(x, weights)

# Define loss and optimizer
cost = tf.nn.l2_loss(pred - y, name="squared_error_cost")
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

print("Model ready.")

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    print("Starting Training.")

    # Training cycle
    for epoch in range(training_epochs):
        #avg_cost = 0.
        # minibatch loading
        minibatch_x = training_set[mini_batch_size*epoch:mini_batch_size*(epoch+1)]
        minibatch_y = training_labels[mini_batch_size*epoch:mini_batch_size*(epoch+1)]
        # Run optimization op (backprop) and cost op
        _, c = sess.run([optimizer, cost], feed_dict={x: minibatch_x, y: minibatch_y})

        # Compute average loss
        avg_cost = c / (minibatch_x.shape[0])

        # Display logs per epoch
        if (epoch) % display_step == 0:
            print("Epoch:", '%05d' % (epoch), "Training error=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    # Calculate accuracy
    test_error = tf.nn.l2_loss(pred - y, name="squared_error_test_cost") / test_set.shape[0]
    print("Test Error:", test_error.eval({x: test_set, y: test_labels}))

Output

python nn.py
Importing dataset.
Dataset ready.
Creating model.
Model ready.
Starting Training.
Epoch: 00000 Training error= 0.331874878
Epoch: 00500 Training error= 0.121587482
Epoch: 01000 Training error= 0.112870921
Epoch: 01500 Training error= 0.110293652
Epoch: 02000 Training error= 0.122655269
Epoch: 02500 Training error= 0.124971940
Epoch: 03000 Training error= 0.125407845
Epoch: 03500 Training error= 0.131942481
Epoch: 04000 Training error= 0.121696954
Epoch: 04500 Training error= 0.116669835
Epoch: 05000 Training error= 0.129558477
Epoch: 05500 Training error= 0.122952110
Epoch: 06000 Training error= 0.124655344
Epoch: 06500 Training error= 0.119827300
Epoch: 07000 Training error= 0.125183779
Epoch: 07500 Training error= 0.156429254
Epoch: 08000 Training error= 0.085632880
Epoch: 08500 Training error= 0.133913128
Epoch: 09000 Training error= 0.114762624
Epoch: 09500 Training error= 0.115107805
Optimization Finished!
Test Error: 0.116647016708

This is what MMN suggested:

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0, dtype=np.float64)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01, dtype=np.float64)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], stddev=0.01, dtype=np.float64)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1], dtype=np.float64))
}
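(Note that with stddev=0, tf.random_normal returns all zeros, so 'h1' starts out as an all-zero matrix.)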

And this is the output:

Epoch: 00000 Training error= 0.107566668
Epoch: 00500 Training error= 0.289380907
Epoch: 01000 Training error= 0.339091784
Epoch: 01500 Training error= 0.358559815
Epoch: 02000 Training error= 0.122639698
Epoch: 02500 Training error= 0.125160135
Epoch: 03000 Training error= 0.126219718
Epoch: 03500 Training error= 0.132500418
Epoch: 04000 Training error= 0.121795254
Epoch: 04500 Training error= 0.116499476
Epoch: 05000 Training error= 0.124532673
Epoch: 05500 Training error= 0.124484790
Epoch: 06000 Training error= 0.118491177
Epoch: 06500 Training error= 0.119977633
Epoch: 07000 Training error= 0.127532511
Epoch: 07500 Training error= 0.159053519
Epoch: 08000 Training error= 0.083876224
Epoch: 08500 Training error= 0.131488483
Epoch: 09000 Training error= 0.123161189
Epoch: 09500 Training error= 0.125011362
Optimization Finished!
Test Error: 0.129284643093

Connecting the third hidden layer, thanks to MMN

There was a bug in my code: I had two hidden layers instead of three. I corrected it by changing:

'out': tf.Variable(tf.random_normal([n_hidden_3, 1], dtype=np.float64))

out_layer = tf.matmul(layer_3, weights['out'])

However, I went back to the old values for stddev, since they seem to produce smaller fluctuations in the cost function.

The output is still troubling:

Epoch: 00000 Training error= 0.477673073
Epoch: 00500 Training error= 0.121848744
Epoch: 01000 Training error= 0.112854530
Epoch: 01500 Training error= 0.110597624
Epoch: 02000 Training error= 0.122603499
Epoch: 02500 Training error= 0.125051472
Epoch: 03000 Training error= 0.125400717
Epoch: 03500 Training error= 0.131999354
Epoch: 04000 Training error= 0.121850889
Epoch: 04500 Training error= 0.116551533
Epoch: 05000 Training error= 0.129749704
Epoch: 05500 Training error= 0.124600464
Epoch: 06000 Training error= 0.121600218
Epoch: 06500 Training error= 0.121249676
Epoch: 07000 Training error= 0.132656938
Epoch: 07500 Training error= 0.161801757
Epoch: 08000 Training error= 0.084197352
Epoch: 08500 Training error= 0.132197409
Epoch: 09000 Training error= 0.123249055
Epoch: 09500 Training error= 0.126602369
Optimization Finished!
Test Error: 0.129230736355

Two more changes, thanks to Steven

Steven proposed changing the sigmoid activation functions to ReLU, so I tried it. In the meantime, I noticed that I hadn't set an activation function for the output node, so I did that too (it should be easy to see what I changed).
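The updated code isn't shown in the post; a plausible reconstruction of the model function after these two changes (ReLU hidden layers plus a sigmoid on the output node) might look like this sketch, which is not the poster's exact code:

def multilayer_perceptron(x, weights):
    # Hidden layers with ReLU activation
    layer_1 = tf.nn.relu(tf.matmul(x, weights['h1']))
    layer_2 = tf.nn.relu(tf.matmul(layer_1, weights['h2']))
    layer_3 = tf.nn.relu(tf.matmul(layer_2, weights['h3']))
    # Output node with sigmoid activation (for the 0/1 labels)
    out_layer = tf.nn.sigmoid(tf.matmul(layer_3, weights['out']))
    return out_layer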

Starting Training.
Epoch: 00000 Training error= 293.245977809
Epoch: 00500 Training error= 0.290000000
Epoch: 01000 Training error= 0.340000000
Epoch: 01500 Training error= 0.360000000
Epoch: 02000 Training error= 0.285000000
Epoch: 02500 Training error= 0.250000000
Epoch: 03000 Training error= 0.245000000
Epoch: 03500 Training error= 0.260000000
Epoch: 04000 Training error= 0.290000000
Epoch: 04500 Training error= 0.315000000
Epoch: 05000 Training error= 0.285000000
Epoch: 05500 Training error= 0.265000000
Epoch: 06000 Training error= 0.340000000
Epoch: 06500 Training error= 0.180000000
Epoch: 07000 Training error= 0.370000000
Epoch: 07500 Training error= 0.175000000
Epoch: 08000 Training error= 0.105000000
Epoch: 08500 Training error= 0.295000000
Epoch: 09000 Training error= 0.280000000
Epoch: 09500 Training error= 0.285000000
Optimization Finished!
Test Error: 0.220196439287

And this is what it does with sigmoid activation on every node, including the output:

Epoch: 00000 Training error= 0.110878121
Epoch: 00500 Training error= 0.119393080
Epoch: 01000 Training error= 0.109229532
Epoch: 01500 Training error= 0.100436962
Epoch: 02000 Training error= 0.113160662
Epoch: 02500 Training error= 0.114200962
Epoch: 03000 Training error= 0.109777990
Epoch: 03500 Training error= 0.108218725
Epoch: 04000 Training error= 0.103001394
Epoch: 04500 Training error= 0.084145737
Epoch: 05000 Training error= 0.119173495
Epoch: 05500 Training error= 0.095796251
Epoch: 06000 Training error= 0.093336573
Epoch: 06500 Training error= 0.085062860
Epoch: 07000 Training error= 0.104251661
Epoch: 07500 Training error= 0.105910949
Epoch: 08000 Training error= 0.090347288
Epoch: 08500 Training error= 0.124480612
Epoch: 09000 Training error= 0.109250224
Epoch: 09500 Training error= 0.100245836
Optimization Finished!
Test Error: 0.110234139674

I find these numbers strange. In the first case, it gets stuck at a higher cost than with sigmoid, even though sigmoid is the one that should saturate very early. In the second case, it starts from a training error that is almost the final one... so it basically converges within a single mini-batch. I'm starting to think I'm not computing the cost correctly, in this line: avg_cost = c / (minibatch_x.shape[0])
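For reference, tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so c / minibatch_x.shape[0] is the mean half-squared error per example. A minimal equivalent that computes the same quantity directly in the graph, assuming pred and y both have shape [batch, 1]:

# Per-example cost: mean of the squared errors, halved (matches l2_loss / batch_size)
avg_cost_op = tf.reduce_mean(tf.square(pred - y)) / 2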

Best Answer

So it could be a few things:

  1. You could be saturating the sigmoid units (as MMN mentioned); relu units don't saturate on the positive side, so I would suggest trying those.

Replace:

tf.nn.sigmoid(layer_n)

with:

tf.nn.relu(layer_n)
  2. Your model may not have the expressive power to actually learn your data, i.e. it may need to be deeper.
  3. You could also try a different optimizer, like Adam().

    Replace:

    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    with:

    optimizer = tf.train.AdamOptimizer().minimize(cost)
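    (With no arguments, AdamOptimizer uses its default learning rate of 0.001, a much smaller step than the 0.01 used above.)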

A couple of other points:

  • You should add bias terms to your weights, like so:

    biases = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1], dtype=np.float64)),
        'b2': tf.Variable(tf.random_normal([n_hidden_2], dtype=np.float64)),
        'b3': tf.Variable(tf.random_normal([n_hidden_3], dtype=np.float64)),
        'bout': tf.Variable(tf.random_normal([1], dtype=np.float64))
    }

    def multilayer_perceptron(x, weights, biases):
        # First hidden layer with SIGMOID activation
        layer_1 = tf.matmul(x, weights['h1']) + biases['b1']
        layer_1 = tf.nn.sigmoid(layer_1)
        # Second hidden layer with SIGMOID activation
        layer_2 = tf.matmul(layer_1, weights['h2']) + biases['b2']
        layer_2 = tf.nn.sigmoid(layer_2)
        # Third hidden layer with SIGMOID activation
        layer_3 = tf.matmul(layer_2, weights['h3']) + biases['b3']
        layer_3 = tf.nn.sigmoid(layer_3)
        # Output layer, linear; uses layer_3, so weights['out'] must have shape [n_hidden_3, 1]
        out_layer = tf.matmul(layer_3, weights['out']) + biases['bout']
        return out_layer
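    Without the bias terms, every unit's pre-activation is a pure linear combination of its inputs, so the network cannot shift its activation thresholds away from zero.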
  • And you can decay the learning rate over time, like so:

    learning_rate = tf.train.exponential_decay(INITIAL_LEARNING_RATE,
                                               global_step,
                                               decay_steps,
                                               LEARNING_RATE_DECAY_FACTOR,
                                               staircase=True)

You just need to define the decay steps, i.e. when to decay, and LEARNING_RATE_DECAY_FACTOR, i.e. by how much to decay.
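For completeness, exponential_decay also needs a global_step variable that the optimizer increments at each training step. A minimal sketch of the wiring, with placeholder values standing in for the decay parameters:

    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(0.01,   # INITIAL_LEARNING_RATE
                                               global_step,
                                               1000,    # decay_steps
                                               0.95,    # LEARNING_RATE_DECAY_FACTOR
                                               staircase=True)
    # Passing global_step to minimize() makes the optimizer increment it each step
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        cost, global_step=global_step)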

Regarding machine-learning - setting up an MLP for binary classification with TensorFlow, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39817949/
