
python - Model inference runtime increases with repeated inference

Reposted. Author: 行者123. Updated: 2023-12-01 01:46:55

I'm writing a tensorflow project in which I edit every weight and bias manually, so I set up the weights and biases with dictionaries, old-tensorflow style, rather than using tf.layers.dense and letting tensorflow take care of updating the weights. (This is the cleanest approach I could come up with, though it may not be ideal.)

I feed the same data to a fixed model in every iteration, yet the runtime keeps increasing over the course of the program's execution.

I stripped almost everything out of my code so I could see where the problem lies, but I can't work out what is causing the runtime increase.

---Games took   2.6591222286224365 seconds ---
---Games took 3.290001153945923 seconds ---
---Games took 4.250034332275391 seconds ---
---Games took 5.190149307250977 seconds ---

Edit: I've managed to reduce the runtime growth by using a placeholder, so that I don't add extra nodes to the graph, but the runtime still increases, just at a slower rate. I'd like to eliminate this runtime growth entirely. (After a while it goes from 0.1 s to over 1 s.)
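The slowdown pattern described above is characteristic of TF1-style graph growth: every call that builds ops (such as constructing the model's output tensor) appends new nodes to the default graph, and things get slower as the graph grows. A minimal sketch of this effect, not taken from the question's code (it uses the `tf.compat.v1` API and counts nodes via `graph.get_operations()`):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()


def build_model(x):
    # Each call creates brand-new const/matmul ops in the default graph.
    w = tf.ones([9, 9])
    return tf.matmul(x, w)


x = tf.placeholder(tf.float32, shape=[1, 9])

counts = []
for _ in range(3):
    build_model(x)  # simulates calling model() once per game
    counts.append(len(tf.get_default_graph().get_operations()))

print(counts)  # strictly increasing: the graph keeps accumulating nodes
```

Running `sess.run` on an ever-larger graph is what produces the steadily growing timings shown below.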

Here is my full code:

import numpy as np
import tensorflow as tf
import time

n_inputs = 9
n_class = 9

n_hidden_1 = 20

population_size = 10
weights = []
biases = []
game_steps = 20  # so we can see performance loss faster

# 2 games per individual
games_in_generation = population_size / 2


def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []

    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases


weights, biases = generate_initial_population(population_size)
data = tf.placeholder(dtype=tf.float32)  # will add shape later


def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']), biases[1]['out'])
    return out_layer


def play_game():
    model_input = [0] * 9
    model_out = model(data)

    for game_step in range(game_steps):
        move = sess.run(model_out, feed_dict={data: model_input})[0]


sess = tf.Session()
sess.run(tf.global_variables_initializer())
while True:
    start_time = time.time()
    for _ in range(int(games_in_generation)):
        play_game()
    print("---Games took %s seconds ---" % (time.time() - start_time))

Best Answer

I'm adding another answer because the latest edit to the question changed things substantially. You are still seeing runtime growth because you are still calling model multiple times in sess; you've merely reduced the frequency at which you add nodes to the graph. What you need to do is create a new session for each model you build, and close each session when you are done with it. I've modified your code to do this, here:

import numpy as np
import tensorflow as tf
import time


n_inputs = 9
n_class = 9

n_hidden_1 = 20

population_size = 10
weights = []
biases = []
game_steps = 20  # so we can see performance loss faster

# 2 games per individual
games_in_generation = population_size / 2


def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []

    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases


def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']), biases[1]['out'])
    return out_layer


def play_game(sess):
    model_input = [0] * 9
    model_out = model(data)

    for game_step in range(game_steps):
        move = sess.run(model_out, feed_dict={data: model_input})[0]


while True:
    for _ in range(int(games_in_generation)):

        # Reset the graph.
        tf.reset_default_graph()

        weights, biases = generate_initial_population(population_size)
        data = tf.placeholder(dtype=tf.float32)  # will add shape later

        # Create session.
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())

            start_time = time.time()

            play_game(sess)

            print("---Games took %s seconds ---" % (time.time() - start_time))

            sess.close()

What I've done here is wrap the call to play_game in a session defined in a with scope, and exit that session with sess.close after the call to play_game. I've also reset the default graph. I've run this for several hundred iterations and saw no increase in runtime.
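If the model genuinely stays fixed between games, an alternative to resetting the graph is to build the output tensor exactly once and reuse it across all sess.run calls, so the graph never grows. This is only a sketch of that "build once, run many" pattern, not the original answer's approach (it uses `tf.compat.v1` and reuses the question's variable names):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

n_inputs, n_class, n_hidden_1 = 9, 9, 20

data = tf.placeholder(tf.float32, shape=[1, n_inputs])
w = {'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=0)),
     'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=0))}
b = {'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=0)),
     'out': tf.Variable(tf.truncated_normal([n_class], seed=0))}

# Build the graph exactly once, outside any game loop.
hidden = tf.nn.relu(tf.matmul(data, w['h1']) + b['b1'])
model_out = tf.matmul(hidden, w['out']) + b['out']
init = tf.global_variables_initializer()

n_ops_before = len(tf.get_default_graph().get_operations())

with tf.Session() as sess:
    sess.run(init)
    for _ in range(100):  # many "games", zero new nodes added
        move = sess.run(model_out, feed_dict={data: [[0.0] * n_inputs]})[0]

n_ops_after = len(tf.get_default_graph().get_operations())
print(n_ops_before == n_ops_after)  # True: graph size stays constant
```

Because sess.run only executes existing ops and never adds new ones, the per-game cost stays flat no matter how many games are played.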

Regarding python - model inference runtime increases with repeated inference, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51228131/
