
python - TensorFlow newbie - trying to repurpose an MNIST multilayer network as a calculator


Can someone help or guide me on how to do this better?

I changed the number of inputs to 2 and generated some random data, "x1" and "x2" (one number to be added to the other). The idea is to use the "add" and "mul" variables as the actual outputs and compute the cost (the "Y" variable) against them, but I'm having trouble manipulating the data so that it feeds in correctly.

I tried using another variable

x = tf.Variable([100 * np.random.random_sample([100]), 100 * np.random.random_sample([100])])

as well as a few other alternatives, but they all lead to errors. Also, if anything else in my code is wrong, please point it out! Anything helps.
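To make concrete the shape I think I'm after, here is a rough sketch of the data layout (x_data and y_data are just names for this sketch, they don't appear in my code below): stack x1 and x2 column-wise so each row is one input pair, and compute the target sum directly in NumPy.

import numpy as np
import tensorflow as tf

n_observations = 100

# Stack the two random inputs column-wise: each row is one (num1, num2) pair.
x_data = np.column_stack([
    100 * np.random.random_sample([n_observations]),
    100 * np.random.random_sample([n_observations]),
]).astype(np.float32)                       # shape (100, 2)

# The target is just the row-wise sum, computed in NumPy,
# so no extra TensorFlow op is needed to build it.
y_data = x_data.sum(axis=1, keepdims=True)  # shape (100, 1)

X = tf.placeholder(tf.float32, [None, 2])   # matches n_input = 2
Y = tf.placeholder(tf.float32, [None, 1])   # matches n_classes = 1

# ... build the network on X, then:
# sess.run(optimizer, feed_dict={X: x_data, Y: y_data})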

Thanks.

'''
A Recurrent Neural Network implementation example using TensorFlow Library.

Author: *********
'''

import numpy as np
import tensorflow as tf
from tensorflow.models.rnn import rnn, rnn_cell
# import matplotlib.pyplot as plt
# from mpl_toolkits.mplot3d import Axes3D

# Parameters
training_iters = 1000
n_epochs = 1000
batch_size = 128
display_step = 100
learning_rate = 0.001

n_observations = 100
n_input = 2 # Input data (Num + Num)
n_steps = 28 # timesteps
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_classes = 1 # Output

X = tf.placeholder("float", [None, n_input])
X1 = tf.placeholder(tf.float32)
X2 = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Random input data
x1 = 100 * np.random.random_sample([100,])
x2 = 100 * np.random.random_sample([100,])

add = tf.add(x1, x2)
mul = tf.mul(X1, X2)

weights = {
    'hidden1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    #'hidden2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}

biases = {
    'hidden1': tf.Variable(tf.random_normal([n_hidden_1])),
    #'hidden2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

def RNN(_X1, _weights, _biases):

    # Layer 1.1
    layer_1 = tf.add(tf.matmul(_X1, weights['hidden1']), biases['hidden1'])
    layer_1 = tf.nn.relu(layer_1)
    # Layer 1.2
    # layer_1_2 = tf.add(tf.matmul(_X2, weights['hidden2']), biases['hidden2'])
    # layer_1_2 = tf.nn.relu(layer_1_2)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])

    output = tf.nn.relu(layer_2)

    return output

pred = RNN(X1, weights, biases)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (n_observations - 1)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(Y,1))

init = tf.initialize_all_variables()
# initData = tf.initialize_variables(x1.all(), x2.all())

with tf.Session() as sess:
    # Here we tell tensorflow that we want to initialize all
    # the variables in the graph so we can use them
    sess.run(init)

    # Fit all training data
    prev_training_cost = 0.0

    for epoch_i in range(n_epochs):
        for (_x1) in x1:
            for (_x2) in x2:
                print("Input 1:")
                print(_x1)
                print("Input 2:")
                print(_x2)
                print("Add function: ")
                print(sess.run(add, feed_dict={X1: x1, X2: x2}))
                y = sess.run(add, feed_dict={X1: x1, X2: x2})
                print(y)
                sess.run(optimizer, feed_dict={X: x, Y: y})

        training_cost = sess.run(
            cost, feed_dict={X: xs, Y: ys})
        print(training_cost)

        if epoch_i % 20 == 0:
            ax.plot(X1, X2, pred.eval(
                feed_dict={X1: x1, X2: x2}, session=sess),
                'k', alpha=epoch_i / n_epochs)
            fig.show()
            plt.draw()

        # Allow the training to quit if we've reached a minimum
        if np.abs(prev_training_cost - training_cost) < 0.000001:
            break
        prev_training_cost = training_cost

Best answer

So, are you training a feed-forward network or a recurrent neural network?

The code you wrote in RNN() looks to me like a plain (feed-forward) neural network, yet your title says you are working on an RNN.
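For contrast, a layer is only recurrent when its state is carried from one timestep to the next; in graph-mode TensorFlow that is usually written with an RNN cell rather than plain matmuls. A rough sketch (using the later tf.nn.rnn_cell / tf.nn.dynamic_rnn names rather than the tensorflow.models.rnn import in your code; the sizes are just placeholders):

# Rough sketch only -- TF 1.x names, not the tensorflow.models.rnn module above.
import tensorflow as tf

n_steps, n_features, n_hidden = 28, 2, 128   # illustrative sizes

# inputs: [batch, time, features] -- the time dimension is what makes it recurrent
inputs = tf.placeholder(tf.float32, [None, n_steps, n_features])

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_hidden)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# 'state' (or outputs[:, -1, :]) would then go through a dense layer to make a
# prediction, whereas the RNN() function above is just two dense layers with ReLU.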

You may find this implementation interesting. Like yours, it generates vectors of integers and uses an RNN to add them.
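If the goal is simply a network that learns to add two numbers, a plain feed-forward regression already does the job. A stripped-down sketch along the lines of your code (illustrative variable names and layer sizes, mean-squared-error cost, targets computed in NumPy rather than through tf.add, and the TF 1.x initializer name):

import numpy as np
import tensorflow as tf

# Toy data: rows of (a, b) pairs and their sums as targets.
x_data = np.random.uniform(0, 100, size=(1000, 2)).astype(np.float32)
y_data = x_data.sum(axis=1, keepdims=True)

X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])

W1 = tf.Variable(tf.random_normal([2, 16], stddev=0.1))
b1 = tf.Variable(tf.zeros([16]))
W2 = tf.Variable(tf.random_normal([16, 1], stddev=0.1))
b2 = tf.Variable(tf.zeros([1]))

hidden = tf.nn.relu(tf.matmul(X, W1) + b1)
pred = tf.matmul(hidden, W2) + b2            # linear output for regression

cost = tf.reduce_mean(tf.square(pred - Y))
train_op = tf.train.AdamOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        _, c = sess.run([train_op, cost], feed_dict={X: x_data, Y: y_data})
        if step % 500 == 0:
            print(step, c)
    # Should move toward 7 as training converges.
    print(sess.run(pred, feed_dict={X: [[3.0, 4.0]]}))

The two main differences from your version: there is no ReLU on the output (a linear output is easier to train for regression), and the whole dataset is fed at once instead of looping over x1 and x2 element by element.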

Regarding "python - TensorFlow newbie - trying to repurpose an MNIST multilayer network as a calculator", there is a similar question on Stack Overflow: https://stackoverflow.com/questions/37625511/
