
machine-learning - TensorFlow saving a model: GraphDef cannot be larger than 2GB


I am getting the following error - apparently while saving my model:

Step = 1799  |  Tensorflow Accuracy = 1.0
Step = 1799 | My Accuracy = 0.0363355780022
Step = 1800 | Tensorflow Accuracy = 1.0
Step = 1800 | My Accuracy = 0.0364694929089
Traceback (most recent call last):
  File "CNN-LSTM-seg-reg-sigmoid.py", line 290, in <module>
    saver.save(sess, save_path)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1085, in save
    self.export_meta_graph(meta_graph_filename)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1103, in export_meta_graph
    add_shapes=True),
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2175, in as_graph_def
    result, _ = self._as_graph_def(from_version, add_shapes)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2138, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

Here it is suggested to keep an eye on tf.constant ops, but my program has zero constants. However, my weights and biases look like this: tf.Variable(tf.random_normal([32]), name="bc1"). Could that be a problem?

If that is not it, then this tells me that I am adding to the graph after every loop iteration, but I am not sure where that happens.

My first guess is where I make a prediction. I make the prediction with the following code...

# Make prediction
im = Image.open('/home/volcart/Documents/Data/input_crops/temp data0001.tif')
batch_x = np.array(im)
batch_x = batch_x.reshape((1, n_input_x, n_input_y))
batch_x = batch_x.astype(float)
prediction = sess.run(pred, feed_dict={x: batch_x})
prediction = tf.sigmoid(prediction.reshape((n_input_x * n_input_y, n_classes)))
prediction = prediction.eval().reshape((n_input_x, n_input_y, n_classes))

My second guess is where I compute the loss and accuracy with: loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y})

My entire session code looks like this:

# Initializing the variables
init = tf.initialize_all_variables()
saver = tf.train.Saver()

gpu_options = tf.GPUOptions()
config = tf.ConfigProto(gpu_options=gpu_options)
config.gpu_options.allow_growth = True

# Launch the graph
with tf.Session(config=config) as sess:
    sess.run(init)
    summary = tf.train.SummaryWriter('/tmp/logdir/', sess.graph) # initialize graph for tensorboard
    step = 1
    # Import data
    data = scroll_data.read_data('/home/volcart/Documents/Data/')
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input_x, n_input_y))
        batch_y = batch_y.reshape((batch_size, n_input_x, n_input_y))
        batch_y = convert_to_2_channel(batch_y, batch_size)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        step = step + 1

        loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                          y: batch_y})

        # Make prediction
        im = Image.open('/home/volcart/Documents/Data/input_crops/temp data0001.tif')
        batch_x = np.array(im)
        batch_x = batch_x.reshape((1, n_input_x, n_input_y))
        batch_x = batch_x.astype(float)
        prediction = sess.run(pred, feed_dict={x: batch_x})
        prediction = tf.sigmoid(prediction.reshape((n_input_x * n_input_y, n_classes)))
        prediction = prediction.eval().reshape((n_input_x, n_input_y, n_classes))

        # Temp arrays are to splice the prediction n_input_x x n_input_y x 2
        # into 2 matrices n_input_x x n_input_y
        temp_arr1 = np.empty((n_input_x, n_input_y))
        for i in xrange(n_input_x):
            for j in xrange(n_input_x):
                for k in xrange(n_classes):
                    if k == 0:
                        temp_arr1[i][j] = 1 - prediction[i][j][k]

        my_acc = accuracy_custom(temp_arr1, batch_y[0,:,:,0])

        print "Step = " + str(step) + " | Tensorflow Accuracy = " + str(acc)
        print "Step = " + str(step) + " | My Accuracy = " + str(my_acc)

        if step % 100 == 0:
            save_path = "/home/volcart/Documents/CNN-LSTM-reg-model/CNN-LSTM-seg-step-" + str(step) + "-model.ckpt"
            saver.save(sess, save_path)
            csv_file = "/home/volcart/Documents/CNN-LSTM-reg/CNNLSTMreg-step-" + str(step) + "-accuracy-" + str(my_acc) + ".csv"
            np.savetxt(csv_file, temp_arr1, delimiter=",")

Best Answer

You are growing the graph on this line:

prediction = tf.sigmoid(prediction.reshape((n_input_x * n_input_y, n_classes)))

This converts your prediction numpy array into a TensorFlow constant node, inlines it into the graph, and adds a sigmoid node on top of it. Doing this every iteration is what pushes the GraphDef toward the 2GB limit.
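One way around it (a minimal sketch, reusing the names from the question's code: pred, x, batch_x, n_input_x, n_input_y, n_classes) is to apply the sigmoid in NumPy, so the prediction step never creates new graph nodes; alternatively, a single tf.sigmoid(pred) op could be built next to pred before the loop and evaluated with sess.run.

import numpy as np

def np_sigmoid(z):
    # plain NumPy sigmoid - no TensorFlow node is created here
    return 1.0 / (1.0 + np.exp(-z))

# inside the training loop, replacing the tf.sigmoid(...) line
prediction = sess.run(pred, feed_dict={x: batch_x})
prediction = np_sigmoid(prediction.reshape((n_input_x * n_input_y, n_classes)))
prediction = prediction.reshape((n_input_x, n_input_y, n_classes))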

You can catch these kinds of problems by adding a call to tf.get_default_graph().finalize() before starting your training loop.
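A minimal sketch of how finalize() can be used (the graph below is a made-up stand-in, not the question's model):

import tensorflow as tf

# tiny stand-in graph
x = tf.placeholder(tf.float32, [None, 4])
w = tf.Variable(tf.random_normal([4, 1]), name="w")
pred = tf.matmul(x, w)
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    tf.get_default_graph().finalize()  # freeze the graph before the loop

    # Any later attempt to add a node, e.g. tf.sigmoid(pred), now raises
    # RuntimeError("Graph is finalized and cannot be modified.") immediately
    # instead of silently growing the GraphDef toward the 2GB limit.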

Regarding machine-learning - TensorFlow saving a model: GraphDef cannot be larger than 2GB, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38858385/
