
tensorflow - Checkpointing a Keras model: TypeError: can't pickle _thread.lock objects


This error seems to have come up in a different context here, but I'm not dumping the model directly; I'm using the ModelCheckpoint callback. Any idea what could be going wrong?

Information:

  • Keras version 2.0.8
  • TensorFlow version 1.3.0
  • Python 3.6

  • Minimal example that reproduces the error:
    from keras.layers import Input, Lambda, Dense
    from keras.models import Model
    from keras.callbacks import ModelCheckpoint
    from keras.optimizers import Adam
    import tensorflow as tf
    import numpy as np

    x = Input(shape=(30,3))
    low = tf.constant(np.random.rand(30, 3).astype('float32'))
    high = tf.constant(1 + np.random.rand(30, 3).astype('float32'))
    clipped_out_position = Lambda(lambda x, low, high: tf.clip_by_value(x, low, high),
                                  arguments={'low': low, 'high': high})(x)

    model = Model(inputs=x, outputs=[clipped_out_position])
    optimizer = Adam(lr=.1)
    model.compile(optimizer=optimizer, loss="mean_squared_error")
    checkpoint = ModelCheckpoint("debug.hdf", monitor="val_loss", verbose=1, save_best_only=True, mode="min")
    training_callbacks = [checkpoint]
    model.fit(np.random.rand(100, 30, 3), [np.random.rand(100, 30, 3)], callbacks=training_callbacks, epochs=50, batch_size=10, validation_split=0.33)

    Error output:
    Train on 67 samples, validate on 33 samples
    Epoch 1/50
    10/67 [===>..........................] - ETA: 0s - loss: 0.1627Epoch 00001: val_loss improved from inf to 0.17002, saving model to debug.hdf
    Traceback (most recent call last):
    File "debug_multitask_inverter.py", line 19, in <module>
    model.fit(np.random.rand(100, 30, 3), [np.random.rand(100, 30, 3)], callbacks=training_callbacks, epochs=50, batch_size=10, validation_split=0.33)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/training.py", line 1631, in fit


    validation_steps=validation_steps)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/training.py", line 1233, in _fit_loop
    callbacks.on_epoch_end(epoch, epoch_logs)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/callbacks.py", line 73, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/callbacks.py", line 414, in on_epoch_end
    self.model.save(filepath, overwrite=True)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/topology.py", line 2556, in save
    save_model(self, filepath, overwrite, include_optimizer)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/models.py", line 107, in save_model
    'config': model.get_config()
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/topology.py", line 2397, in get_config
    return copy.deepcopy(config)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
    File "/om/user/lnj/openmind_env/tensorflow-gpu/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
    TypeError: can't pickle _thread.lock objects

    Best Answer

    When a Lambda layer is saved, the arguments passed to it are saved along with it. In this case they contain two tf.Tensors, and Keras does not currently support serializing tf.Tensor objects in the model config.
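    The traceback shows the failure happening inside model.get_config(), at the copy.deepcopy(config) call: in TF 1.x a tensor holds a reference to the graph and its internal threading lock, which cannot be pickled. A minimal sketch of that underlying issue (illustrative only, not part of the original question):

    import copy

    import numpy as np
    import tensorflow as tf

    # In TF 1.x a tensor references the default graph, whose internal lock
    # cannot be pickled; deepcopy falls back to pickling and fails.
    t = tf.constant(np.random.rand(30, 3).astype('float32'))
    copy.deepcopy(t)  # TypeError: can't pickle _thread.lock objects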

    However, numpy arrays can be serialized without any problem. So instead of passing tf.Tensors in arguments, you can pass in numpy arrays and convert them into tf.Tensors inside the lambda function:

    x = Input(shape=(30,3))
    low = np.random.rand(30, 3)
    high = 1 + np.random.rand(30, 3)
    clipped_out_position = Lambda(lambda x, low, high: tf.clip_by_value(x, tf.constant(low, dtype='float32'), tf.constant(high, dtype='float32')),
                                  arguments={'low': low, 'high': high})(x)

    One problem with the lines above is that when you try to load the saved model, you may see NameError: name 'tf' is not defined. That's because TensorFlow is not imported in the file where the Lambda layer is reconstructed (core.py).

    Changing tf into K.tf fixes this. You can also replace tf.constant() with K.constant(), which casts low and high into float32 tensors automatically:

    from keras import backend as K
    x = Input(shape=(30,3))
    low = np.random.rand(30, 3)
    high = 1 + np.random.rand(30, 3)
    clipped_out_position = Lambda(lambda x, low, high: K.tf.clip_by_value(x, K.constant(low), K.constant(high)),
                                  arguments={'low': low, 'high': high})(x)
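
    For completeness, a minimal sketch (an addition, not part of the original answer) of reloading the checkpoint written by ModelCheckpoint: since the lambda above only references the Keras backend K, which core.py already imports, no extra custom_objects should be needed.

    from keras.models import load_model

    # Assumes the checkpoint file "debug.hdf" from the question was written
    # by the ModelCheckpoint callback during training.
    restored = load_model("debug.hdf")
    restored.summary()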

    Regarding tensorflow - Checkpointing a Keras model: TypeError: can't pickle _thread.lock objects, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47066635/
