
python - How to replace the Softmax output layer with a Logistic layer in TensorFlow?


I need some help with my work. Right now I use a Softmax layer as the output layer of my neural network for classification scores. However, I need to replace it with a Logistic (sigmoid) layer, because some of my inputs belong to multiple classes. Softmax produces a probability distribution over all classes and assigns the single class with the highest probability, which makes it hard to pick a threshold for predicting several classes at once. With a logistic function, each output neuron produces a number in (0, 1) independently, so I can choose a threshold myself. Here is my code:
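To make the contrast concrete, here is a small numeric sketch (illustrative values, not from the post): softmax forces the classes to share a total probability of 1, so two strong classes suppress each other, while sigmoid scores each class independently, which is what makes per-class thresholding workable.

import numpy as np

logits = np.array([2.0, 1.9, -1.0])  # hypothetical scores for 3 classes

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1.0 / (1.0 + np.exp(-logits))

print(softmax)  # ~[0.51, 0.46, 0.03] -- sums to 1, classes compete
print(sigmoid)  # ~[0.88, 0.87, 0.27] -- each class scored independently in (0, 1)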

2-layer network initialization

import tensorflow as tf  # TF 0.x-era API; `model`, `w2v_utils` and `X_train` come from the asker's own code

# Parameters
training_epochs = 10  # 100
batch_size = 64
display_step = 1
batch = tf.Variable(0, trainable=False)  # global step, incremented by the optimizer
regularization = 0.009  # L2 penalty weight

# Network Parameters
n_hidden_1 = 250 # 1st layer num features
n_hidden_2 = 250 # 2nd layer num features

n_input = model.layer1_size # Vector input (sentence shape: 30*10)
n_classes = 12 # Sentence Category detection total classes (0-11 categories)

#History storing variables for plots
loss_history = []
train_acc_history = []
val_acc_history = []


# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

#Strings
trainingString = "\n\nTraining Accuracy and Confusion Matrix:"
validationString = "\n\nValidation set Accuracy and Confusion Matrix:"
testString = "\n\nTest set Accuracy and Confusion Matrix:"
goldString = "\n\nGold set Accuracy and Confusion Matrix:"

# Create model
def multilayer_perceptron(_X, _weights, _biases):
    #Single Layer
    #layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']))
    #return tf.matmul(layer_1, _weights['out']) + _biases['out']

    ##2 layer
    #Hidden layer with RELU activation
    layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']))
    #Hidden layer with RELU activation
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    return tf.matmul(layer_2, _weights['out']) + _biases['out']

# Store layers weight & bias
weights = {
    ##1 Layer
    #'h1': w2v_utils.weight_variable(n_input, n_hidden_1),
    #'out': w2v_utils.weight_variable(n_hidden_1, n_classes)

    ##2 Layer
    'h1': w2v_utils.weight_variable(n_input, n_hidden_1),
    'h2': w2v_utils.weight_variable(n_hidden_1, n_hidden_2),
    'out': w2v_utils.weight_variable(n_hidden_2, n_classes)
}

biases = {
    ##1 Layer
    #'b1': w2v_utils.bias_variable([n_hidden_1]),
    #'out': w2v_utils.bias_variable([n_classes])

    ##2 Layer
    'b1': w2v_utils.bias_variable([n_hidden_1]),
    'b2': w2v_utils.bias_variable([n_hidden_2]),
    'out': w2v_utils.bias_variable([n_classes])
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
#learning rate
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
learning_rate = tf.train.exponential_decay(
    0.02 * 0.01,        # Base learning rate (0.0002).
    batch * batch_size, # Current index into the dataset.
    X_train.shape[0],   # Decay step: one full pass over the training data.
    0.96,               # Decay rate.
    staircase=True)
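# Sketch of the schedule this produces (my reading of exponential_decay, not
# stated in the original post): with staircase=True the effective rate is
#   0.0002 * 0.96 ** floor((batch * batch_size) / X_train.shape[0])
# i.e. the learning rate is multiplied by 0.96 after every full epoch.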

#L2 regularization
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])

#Softmax loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

# Total cost = softmax loss + L2 penalty
cost = cost + (regularization * 0.5 * l2_loss)

# Adam Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost, global_step=batch)

# Initializing the variables
init = tf.initialize_all_variables()

print "Network Initialized!"

How can we modify this network so that each output neuron produces a probability in (0, 1)?

Best Answer

Simply change the lines:

# Construct model
pred = multilayer_perceptron(x, weights, biases)

to:

# Construct model
pred = tf.nn.sigmoid(multilayer_perceptron(x, weights, biases))
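A small follow-up sketch (my addition, not part of the accepted answer): once the output is a sigmoid, the natural companion changes for multi-label training are a sigmoid cross-entropy loss on the raw logits and a per-class threshold at prediction time. The threshold of 0.5 below is an assumption to be tuned on the validation set.

logits = multilayer_perceptron(x, weights, biases)
pred = tf.nn.sigmoid(logits)  # each class scored independently in (0, 1)

# One independent binary cross-entropy per class; y holds 0/1 labels per class
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits, y))

# Assign every class whose probability clears the threshold
threshold = 0.5  # assumed value; tune on the validation set
predicted_labels = tf.cast(pred > threshold, tf.float32)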

Regarding python - How to replace the Softmax output layer with a Logistic layer in TensorFlow?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36120302/
