tensorflow - How to use TensorFlow eager execution only in specific parts of the application?

I have several different files:

  • main.py
  • watch.py
  • read.py
  • detect.py <-- uses the TensorFlow-based library darkflow, which relies on graph mode
  • translate.py <-- uses tf eager execution

During darkflow's TFNet initialization, I get this error:

    Traceback (most recent call last):
      File "/home/justin/Projects/comp3931/main.py", line 6, in <module>
        watcher = Watcher('res/vid/planet_earth_s01e01/video.mp4', 'res/vid/planet_earth_s01e01/english.srt')
      File "/home/justin/Projects/comp3931/watch.py", line 9, in __init__
        self.detector = Detector()
      File "/home/justin/Projects/comp3931/detect.py", line 6, in __init__
        self.tfnet = TFNet(self.options)
      File "/usr/local/lib64/python3.6/site-packages/darkflow/net/build.py", line 75, in __init__
        self.build_forward()
      File "/usr/local/lib64/python3.6/site-packages/darkflow/net/build.py", line 105, in build_forward
        self.inp = tf.placeholder(tf.float32, inp_size, 'input')
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1677, in placeholder
        raise RuntimeError("tf.placeholder() is not compatible with "
    RuntimeError: tf.placeholder() is not compatible with eager execution.

So, I assume that when I instantiate the Translator class from the translate.py file, it enables eager execution for the whole program, which is then incompatible with the call to darkflow's TFNet class used in the Detector class from detect.py.
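In other words, the assumption is that translate.py does something along these lines (a hypothetical sketch for illustration only; the actual file is not shown here), and that the call applies to the whole process:

    import tensorflow as tf

    class Translator:
        def __init__(self):
            # Hypothetical: tf.enable_eager_execution() is process-wide and cannot be
            # undone, so any later graph-mode code (such as darkflow's tf.placeholder
            # call) will then fail.
            tf.enable_eager_execution()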

If I run translate.py on its own it works fine, and the other modules also work fine when run without translate.py being involved.

I guess they use different contexts (graph/eager) and the whole thing can't run together in the same program. I've looked through the documentation, but I can't find a way to switch back to graph mode when needed.

Is there any way to run both eager and graph mode in the same application, in different places?

Best Answer

It is best to write code that is compatible with both graph mode and eager execution (a small sketch of such code follows the quoted list below). From the documentation:

    • Use tf.data for input processing instead of queues. It's faster and easier.
    • Use object-oriented layer APIs—like tf.keras.layers and tf.keras.Model—since they have explicit storage for variables.
    • Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)
    • Once eager execution is enabled with tf.enable_eager_execution, it cannot be turned off. Start a new Python session to return to graph execution.
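
As an illustration of that advice (my own sketch, not code from the original answer or the documentation), here is model code built with tf.keras layers that runs unchanged whether eager execution is enabled or not:

    import tensorflow as tf

    # tf.enable_eager_execution()  # uncomment to run the very same code eagerly

    def build_model():
        # tf.keras layers have explicit storage for their variables, so the same
        # code works under both eager and graph execution.
        return tf.keras.Sequential([
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dense(1),
        ])

    model = build_model()
    x = tf.constant([[1.0, 2.0, 3.0]])
    y = model(x)

    if tf.executing_eagerly():
        print(y.numpy())                  # eager: y already holds concrete values
    else:
        with tf.Session() as sess:        # graph: evaluate y in a session
            sess.run(tf.global_variables_initializer())
            print(sess.run(y))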


That said, it is possible to use eager execution while in graph mode, by using tfe.py_func(). Here is a code example from the documentation (I just added the imports and assertions):
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    def my_py_func(x):
        assert tf.executing_eagerly()
        x = tf.matmul(x, x)  # You can use tf ops
        print(x)             # but it's eager!
        return x

    assert not tf.executing_eagerly()
    with tf.Session() as sess:
        x = tf.placeholder(dtype=tf.float32)
        # Call eager function in graph!
        pf = tfe.py_func(my_py_func, [x], tf.float32)
        sess.run(pf, feed_dict={x: [[2.0]]})  # [[4.0]]

The opposite is also possible, as Alex Passos explains in this video. Here is an example inspired by that video:
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tf.enable_eager_execution()

    def my_graph_func(x):
        assert not tf.executing_eagerly()
        w = tfe.Variable(2.0)
        b = tfe.Variable(4.0)
        return x * w + b

    assert tf.executing_eagerly()
    # make_template(..., create_graph_function_=True) wraps my_graph_func so that its
    # body runs as a graph function, even though eager execution is enabled.
    g = tfe.make_template("g", my_graph_func, create_graph_function_=True)
    print(g(3))

There is also an unofficial way to switch modes, using the eager_mode and graph_mode contexts defined in tensorflow.python.eager.context, like this:
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe
    from tensorflow.python.eager.context import eager_mode, graph_mode

    with eager_mode():
        print("Eager mode")
        assert tf.executing_eagerly()
        x1 = tfe.Variable(5.0)
        print(x1.numpy())

    print()
    with graph_mode():
        print("Graph mode")
        assert not tf.executing_eagerly()
        x2 = tfe.Variable(5.0)
        with tf.Session():
            x2.initializer.run()
            print(x2.eval())

Since this is not official, you should probably avoid it in production code, but it may come in handy when debugging, or in a Jupyter notebook. One last option is to use this switch_to() function:
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe
    from tensorflow.python.eager.context import context, EAGER_MODE, GRAPH_MODE

    def switch_to(mode):
        ctx = context()._eager_context
        ctx.mode = mode
        ctx.is_eager = mode == EAGER_MODE

    switch_to(EAGER_MODE)
    assert tf.executing_eagerly()
    v = tfe.Variable(3.0)
    print(v.numpy())
    assert tf.get_default_graph().get_operations() == []

    switch_to(GRAPH_MODE)
    assert not tf.executing_eagerly()
    v = tfe.Variable(3.0)
    init = tf.global_variables_initializer()
    assert len(tf.get_default_graph().get_operations()) > 0
    with tf.Session():
        init.run()
        print(v.eval())

It really is a hack, but it can be useful in a Jupyter notebook if you don't like nesting all your code inside with blocks.

Regarding "tensorflow - How to use TensorFlow eager execution only in specific parts of the application?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49265723/
