
python - Verifying the validity of a feedforward network


I am new to TensorFlow, and my task is to design a feedforward neural network consisting of an input layer, a hidden perceptron layer of 10 neurons, and a softmax output layer. Assume a learning rate of 0.01, an L2 regularization weight-decay parameter of 0.000001, and a batch size of 32.

I would like to know whether there is a way to check that the network I created is the one I intended to create, for example a diagram showing the nodes.

Below is my attempt at the task, but I am not sure whether it is correct.

import math
import tensorflow as tf
import numpy as np
import pylab as plt


# scale each feature to the [0, 1] range
def scale(X, X_min, X_max):
    return (X - X_min) / (X_max - X_min)


# create weight and bias variables for one fully connected layer
def tfvariables(start_nodes, end_nodes):
    W = tf.Variable(tf.truncated_normal([start_nodes, end_nodes], stddev=1.0/math.sqrt(float(start_nodes))))
    b = tf.Variable(tf.zeros([end_nodes]))
    return W, b


NUM_FEATURES = 36
NUM_CLASSES = 6

learning_rate = 0.01
beta = 10 ** -6
epochs = 10000
batch_size = 32
num_neurons = 10
seed = 10
np.random.seed(seed)

# read train data
train_input = np.loadtxt('sat_train.txt', delimiter=' ')
trainX, train_Y = train_input[:, :36], train_input[:, -1].astype(int)
trainX = scale(trainX, np.min(trainX, axis=0), np.max(trainX, axis=0))
# There are 6 class labels: 1, 2, 3, 4, 5, 7
train_Y[train_Y == 7] = 6

# one-hot encode the labels
trainY = np.zeros((train_Y.shape[0], NUM_CLASSES))
trainY[np.arange(train_Y.shape[0]), train_Y - 1] = 1

# experiment with a small dataset
trainX = trainX[:1000]
trainY = trainY[:1000]

n = trainX.shape[0]

# Create the model
x = tf.placeholder(tf.float32, [None, NUM_FEATURES])
y_ = tf.placeholder(tf.float32, [None, NUM_CLASSES])

# Build the graph: input -> hidden sigmoid layer -> softmax output
W1, b1 = tfvariables(NUM_FEATURES, num_neurons)
W2, b2 = tfvariables(num_neurons, NUM_CLASSES)

logits_1 = tf.matmul(x, W1) + b1
perceptron_layer = tf.nn.sigmoid(logits_1)
logits_2 = tf.matmul(perceptron_layer, W2) + b2

cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits_2)
# Standard loss
loss = tf.reduce_mean(cross_entropy)
# Loss function with L2 regularization weighted by beta
regularizers = tf.nn.l2_loss(W1) + tf.nn.l2_loss(W2)
loss = tf.reduce_mean(loss + beta * regularizers)

# Create the gradient descent optimizer with the given learning rate.
# Minimize the regularized loss; minimizing cross_entropy alone would ignore the L2 term.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)

correct_prediction = tf.cast(tf.equal(tf.argmax(logits_2, 1), tf.argmax(y_, 1)), tf.float32)
accuracy = tf.reduce_mean(correct_prediction)

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    train_acc = []
    train_loss = []
    # Note: each step trains on the whole (truncated) training set;
    # mini-batches of size batch_size are not used yet.
    for i in range(epochs):
        train_op.run(feed_dict={x: trainX, y_: trainY})
        train_acc.append(accuracy.eval(feed_dict={x: trainX, y_: trainY}))
        train_loss.append(loss.eval(feed_dict={x: trainX, y_: trainY}))

        if i % 500 == 0:
            print('iter %d: accuracy %g loss %g' % (i, train_acc[i], train_loss[i]))

# plot learning curves
plt.figure(1)
plt.plot(range(epochs), train_acc)
plt.xlabel(str(epochs) + ' iterations')
plt.ylabel('Train accuracy')

plt.figure(2)
plt.plot(range(epochs), train_loss)
plt.xlabel(str(epochs) + ' iterations')
plt.ylabel('Train loss')
plt.show()

Best Answer

You can use TensorBoard to visualize the graph you have created. Basically, you have to follow a few steps to do this:

  1. Declare a writer as writer = tf.summary.FileWriter('PATH/TO/A/LOGDIR')
  2. Add the graph to the writer with writer.add_graph(sess.graph), where sess is the current tf.Session() in which you execute the graph
  3. You may have to call writer.flush() to write the events to disk immediately

Note that you have to add these lines after building the graph; a minimal sketch is shown below.
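For reference, here is a minimal sketch of how these steps could fit into the training script above. The log directory './logs/ffn' is an assumption; any writable path works.

# Assumed log directory; replace with any writable path
logdir = './logs/ffn'

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())

    # Write the graph definition so TensorBoard can render it
    writer = tf.summary.FileWriter(logdir)
    writer.add_graph(sess.graph)
    writer.flush()

    # ... training loop as above ...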

You can then view the graph by executing the following command in a shell:

tensorboard --logdir=PATH/TO/A/LOGDIR

You will then see an address (usually something like localhost:6006) at which you can view the graph in a browser (Chrome and Firefox are guaranteed to work).

Regarding python - Verifying the validity of a feedforward network, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/52516101/
