memory - TensorFlow, Flask and TFLearn memory leak

I am running the program below, and every time I hit the "build" API call I see the process holding on to roughly another 1 GB of memory after the call completes. I have tried to release everything from memory, but I am not sure what is still being kept alive.

import tensorflow as tf
import tflearn
from flask import Flask, jsonify
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression

app = Flask(__name__)

keep_prob = .8
num_labels = 3
batch_size = 64


class AlexNet():

    def __init__(self):

        @app.route('/build')
        def build():
            # The whole graph and session are created inside the request handler
            g = tf.Graph()
            with g.as_default():
                sess = tf.Session()

                # Building 'AlexNet'
                network = input_data(shape=[None, 227, 227, 3])
                network = conv_2d(network, 96, 11, strides=4, activation='relu')
                network = max_pool_2d(network, 3, strides=2)
                network = local_response_normalization(network)
                network = conv_2d(network, 256, 5, activation='relu')
                network = max_pool_2d(network, 3, strides=2)
                network = local_response_normalization(network)
                network = conv_2d(network, 384, 3, activation='relu')
                network = conv_2d(network, 384, 3, activation='relu')
                network = conv_2d(network, 256, 3, activation='relu')
                network = max_pool_2d(network, 3, strides=2)
                network = local_response_normalization(network)
                network = fully_connected(network, 4096, activation='tanh')
                network = dropout(network, keep_prob)
                network = fully_connected(network, 4096, activation='tanh')
                network = dropout(network, keep_prob)
                network = fully_connected(network, num_labels, activation='softmax')
                network = regression(network, optimizer="adam",
                                     loss='categorical_crossentropy',
                                     learning_rate=0.001, batch_size=batch_size)

                model = tflearn.DNN(network, tensorboard_dir="./tflearn_logs/",
                                    checkpoint_path=None, tensorboard_verbose=0,
                                    session=sess)

                sess.run(tf.initialize_all_variables())
                sess.close()

            tf.reset_default_graph()

            # Drop every reference so (hopefully) everything can be garbage collected
            del g
            del sess
            del model
            del network
            return jsonify(status=200)


if __name__ == "__main__":
    AlexNet()
    app.run(host='0.0.0.0', port=5000, threaded=True)
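
A quick way to confirm the per-request growth described above (this snippet is not part of the original question and assumes the psutil package is installed) is to log the resident set size of the Flask process around the handler:

import os
import psutil
from flask import Flask, jsonify

app = Flask(__name__)

def rss_mb():
    # Resident set size of the current process, in MB
    return psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)

@app.route('/build')
def build():
    before = rss_mb()
    # ... build and tear down the graph here, exactly as in the code above ...
    after = rss_mb()
    print("RSS before %.1f MB, after %.1f MB, delta %.1f MB"
          % (before, after, after - before))
    return jsonify(status=200)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)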

Best Answer

I am not sure whether you ever found an answer, but in my humble opinion you should not put long-running work inside an HTTP request handler. HTTP is stateless and a call is expected to return almost immediately; that is exactly why concepts like task queues and asynchronous tasks exist. The rule of thumb in server-side development is to respond to the request as quickly as possible. So it is not surprising that building a convolutional deep neural network inside an HTTP request does not really work: an ideal HTTP request should respond within seconds, and your DNN classifier session run will likely take far longer than that (you would need to measure it).

The hackiest workaround is to spawn a Python thread from the request handler and let the handler respond to the HTTP call without blocking. Meanwhile the thread keeps building your model; when it finishes you can write the model to disk somewhere, send a notification email, and so on. A sketch of this pattern follows the link below.

Here you go:

How can I add a background thread to flask?
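
For illustration, a minimal sketch of that pattern (function names here are placeholders, not from the original post): the handler starts a worker thread and returns immediately, while the thread does the slow graph building.

import threading
from flask import Flask, jsonify

app = Flask(__name__)

def build_model():
    # Long-running work goes here: build the graph, run the session,
    # save the model to disk, send a notification, etc.
    pass

@app.route('/build')
def build():
    # Start the heavy work in a daemon thread and respond right away,
    # so the HTTP call is not blocked for the whole build.
    worker = threading.Thread(target=build_model, daemon=True)
    worker.start()
    # 202 Accepted: the build continues in the background
    return jsonify(status="building"), 202

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, threaded=True)

For anything beyond a prototype, a proper task queue (e.g. Celery or RQ) is the more robust version of the same idea, as the answer notes.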

Regarding "memory - TensorFlow, Flask and TFLearn memory leak", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38686701/
