
python - Tensorflow: why must `saver = tf.train.Saver()` be declared after the variables are declared?


Important note: I am only running this part, the graph definition, in a notebook environment. I have not run the actual session yet.

When I run this code:

with graph.as_default(): #took out " , tf.device('/cpu:0')"

    saver = tf.train.Saver()
    valid_examples = np.array(random.sample(range(1, valid_window), valid_size)) #put inside graph to get new words each time

    train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

    embeddings = tf.get_variable('embeddings',
        initializer=tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))

    softmax_weights = tf.get_variable('softmax_weights',
        initializer=tf.truncated_normal([vocabulary_size, embedding_size],
                                        stddev=1.0 / math.sqrt(embedding_size)))

    softmax_biases = tf.get_variable('softmax_biases',
        initializer=tf.zeros([vocabulary_size]), trainable=False)

    embed = tf.nn.embedding_lookup(embeddings, train_dataset) #train data set is

    embed_reshaped = tf.reshape(embed, [batch_size*cbow_window*2, embedding_size])

    segments = np.arange(batch_size).repeat(cbow_window*2)

    averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

    #return tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
    #labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases,
                                   inputs=averaged_embeds, labels=train_labels,
                                   num_sampled=num_sampled, num_classes=vocabulary_size))

    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
    normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

    normalized_embeddings = embeddings / norm
    normalized_embeddingsSM = softmax_weights / normSM

    valid_embeddings = tf.nn.embedding_lookup(
        normalized_embeddings, valid_dataset)
    valid_embeddingsSM = tf.nn.embedding_lookup(
        normalized_embeddingsSM, valid_datasetSM)

    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
    similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))

I ran into this error:

ValueError: No variables to save

which points to this line:

saver = tf.train.Saver()

I searched Stack Overflow and found this answer:

Tensorflow ValueError: No variables to save from

So I simply moved that line to the bottom of the graph definition, like this:

with graph.as_default(): #took out " , tf.device('/cpu:0')"

    valid_examples = np.array(random.sample(range(1, valid_window), valid_size)) #put inside graph to get new words each time

    train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

    embeddings = tf.get_variable('embeddings',
        initializer=tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))

    softmax_weights = tf.get_variable('softmax_weights',
        initializer=tf.truncated_normal([vocabulary_size, embedding_size],
                                        stddev=1.0 / math.sqrt(embedding_size)))

    softmax_biases = tf.get_variable('softmax_biases',
        initializer=tf.zeros([vocabulary_size]), trainable=False)

    embed = tf.nn.embedding_lookup(embeddings, train_dataset) #train data set is

    embed_reshaped = tf.reshape(embed, [batch_size*cbow_window*2, embedding_size])

    segments = np.arange(batch_size).repeat(cbow_window*2)

    averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases,
                                   inputs=averaged_embeds, labels=train_labels,
                                   num_sampled=num_sampled, num_classes=vocabulary_size))

    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
    normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

    normalized_embeddings = embeddings / norm
    normalized_embeddingsSM = softmax_weights / normSM

    valid_embeddings = tf.nn.embedding_lookup(
        normalized_embeddings, valid_dataset)
    valid_embeddingsSM = tf.nn.embedding_lookup(
        normalized_embeddingsSM, valid_datasetSM)

    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
    similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))

    saver = tf.train.Saver()

And the error was gone!

Why is that? The graph definition only defines the graph; it doesn't run anything. Is this perhaps an error-prevention measure?
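For reference, the same error reproduces with nothing but a Saver in a fresh graph (a minimal sketch, assuming the TF 1.x API):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # No variables exist in this graph yet, so the Saver has nothing
    # to build save/restore ops for:
    saver = tf.train.Saver()  # raises ValueError: No variables to save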

Best answer

It doesn't have to be. tf.train.Saver has a defer_build argument which, when set to True, lets you define variables after the Saver has been constructed. You do, however, then have to call build explicitly:

saver = tf.train.Saver(defer_build=True)
# construct your graph, create variables...
...
saver.build()
graph.finalize()
# go on with training
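
The reason the ordering matters in the first place: unless defer_build=True is passed, the Saver collects the variables that exist in the graph at the moment it is constructed (essentially what tf.global_variables() returns) and adds save/restore ops for them to the graph. Constructed before any variable, it finds an empty list and raises. If you would rather be explicit than rely on ordering, you can also hand it the variables directly, as in this sketch that reuses the variable names from the question's graph:

# Build the Saver for named variables instead of scanning the graph;
# embeddings, softmax_weights and softmax_biases are the variables
# defined in the question's graph above.
saver = tf.train.Saver(var_list=[embeddings, softmax_weights, softmax_biases])

Note also that graph.finalize() in the snippet above marks the graph as read-only, so any op accidentally added during training fails immediately.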

Original question on Stack Overflow: https://stackoverflow.com/questions/50974976/
