python - Adapting a Tensorflow RNN Seq2Seq model for Tensorflow 2.0

I am very new to Tensorflow and have been working on a simple chatbot-building project from this link.

There were lots of warnings saying that things would be deprecated in Tensorflow 2.0 and that I should upgrade, so I did. I then used the automatic Tensorflow code upgrader to update all the necessary files to 2.0. This produced a few errors.

When processing the model.py file, it returned the following warnings:

133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check. `partition_strategy` has been removed from tf.nn.sampled_softmax_loss.  The 'div' strategy will be used by default.
148:31: WARNING: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
148:31: ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
171:33: ERROR: Using member tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
197:27: ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.

The main issue I am having is the code that uses the now-removed contrib module. How can I adapt the following three code blocks so that they work in Tensorflow 2.0?
# Define the network
# Here we use an embedding model, it takes integers as input and converts them into word vectors for
# better word representation
decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    self.encoderInputs,  # List<[batch=?, inputDim=1]>, list of size args.maxLength
    self.decoderInputs,  # For training, we force the correct output (feed_previous=False)
    encoDecoCell,
    self.textData.getVocabularySize(),
    self.textData.getVocabularySize(),  # Both encoder and decoder have the same number of classes
    embedding_size=self.args.embeddingSize,  # Dimension of each word
    output_projection=outputProjection.getWeights() if outputProjection else None,
    feed_previous=bool(self.args.test)  # When we test (self.args.test), we use previous output as next input (feed_previous)
)
# Finally, we define the loss function
self.lossFct = tf.contrib.legacy_seq2seq.sequence_loss(
    decoderOutputs,
    self.decoderTargets,
    self.decoderWeights,
    self.textData.getVocabularySize(),
    softmax_loss_function=sampledSoftmax if outputProjection else None  # If None, use default SoftMax
)
encoDecoCell = tf.contrib.rnn.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=self.args.dropout
)

Best Answer

tf.contrib basically consists of contributions from the TensorFlow community, and it works as follows.

  • Community members can submit code that is then distributed with the standard TensorFlow package. Their code is
    reviewed by the TensorFlow team and tested as part of TensorFlow's test suite.

  • Now, in Tensorflow 2, TensorFlow has removed contrib, and each project in contrib has one of three options for its future: move into core; move to a separate repository; or be removed.

    You can see the full list of projects and which category each falls into at this link.

    Coming to the alternative solution, migrating code from Tensorflow 1 to Tensorflow 2 does not happen automatically; you have to make the changes manually.
    You can use the following alternatives instead.
    You can change tf.contrib.rnn.DropoutWrapper to tf.compat.v1.nn.rnn_cell.DropoutWrapper, as in the sketch below.
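    As a minimal sketch (not from the original answer; the LSTMCell and the 0.9 keep probability are placeholders standing in for the original model's cell and self.args.dropout):

    import tensorflow as tf

    # Wrap an RNN cell with dropout via the v1 compatibility API; keyword arguments are unchanged.
    cell = tf.compat.v1.nn.rnn_cell.LSTMCell(512)
    encoDecoCell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
        cell,
        input_keep_prob=1.0,
        output_keep_prob=0.9  # e.g. self.args.dropout in the original model
    )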
    For sequence-to-sequence, you can use TensorFlow Addons.

    The TensorFlow Addons project includes many sequence-to-sequence tools that let you easily build production-ready encoder-decoders.

    For example, you can use something like the following.
    import numpy as np
    import tensorflow as tf
    import tensorflow_addons as tfa
    from tensorflow import keras

    vocab_size = 10000  # size of the vocabulary
    embed_size = 128    # dimension of each word embedding

    # Encoder and decoder token ids plus the target sequence lengths
    encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
    decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
    sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)

    # Shared embedding layer for encoder and decoder inputs
    embeddings = keras.layers.Embedding(vocab_size, embed_size)
    encoder_embeddings = embeddings(encoder_inputs)
    decoder_embeddings = embeddings(decoder_inputs)

    # Encoder LSTM; its final state initializes the decoder
    encoder = keras.layers.LSTM(512, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
    encoder_state = [state_h, state_c]

    # Training-time decoder built from TensorFlow Addons
    sampler = tfa.seq2seq.sampler.TrainingSampler()
    decoder_cell = keras.layers.LSTMCell(512)
    output_layer = keras.layers.Dense(vocab_size)
    decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,
                                                     output_layer=output_layer)
    final_outputs, final_state, final_sequence_lengths = decoder(
        decoder_embeddings, initial_state=encoder_state,
        sequence_length=sequence_lengths)
    Y_proba = tf.nn.softmax(final_outputs.rnn_output)

    model = keras.Model(inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
                        outputs=[Y_proba])
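    As a possible next step (not part of the original answer), you might compile and train the resulting model with sparse targets; the data below are random placeholders just to show the expected shapes:

    # Dummy data: 1000 samples, source length 20, target length 25
    X_encoder = np.random.randint(0, vocab_size, size=(1000, 20))
    X_decoder = np.random.randint(0, vocab_size, size=(1000, 25))
    Y_target = np.random.randint(0, vocab_size, size=(1000, 25))
    seq_lengths = np.full((1000,), 25, dtype=np.int32)

    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    model.fit([X_encoder, X_decoder, seq_lengths], Y_target, epochs=2)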

    Similarly, you need to change every method that uses tf.contrib to a compatible alternative.
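    For instance (a hedged sketch, not from the original answer), tf.contrib.legacy_seq2seq.sequence_loss has a close counterpart in tfa.seq2seq.sequence_loss; note that the Addons version expects batch-major tensors rather than the per-timestep lists the legacy API used:

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Illustrative sizes only
    batch_size, seq_len, vocab_size = 4, 10, 5000
    logits = tf.random.normal([batch_size, seq_len, vocab_size])
    targets = tf.random.uniform([batch_size, seq_len], maxval=vocab_size, dtype=tf.int32)
    weights = tf.ones([batch_size, seq_len])  # set to 0 at padding positions in real data

    # Averages the cross-entropy over batch and time steps by default
    loss = tfa.seq2seq.sequence_loss(logits, targets, weights)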

    I hope this answers your question.

Regarding python - Adapting a Tensorflow RNN Seq2Seq model for Tensorflow 2.0, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58326767/
