python - GradientTape converges much more slowly than Keras.Model.fit

I am currently trying to get the hang of the TF 2.0 API, but when I compare GradientTape to the regular keras.Model.fit I notice two things:

  1. It runs slower (probably due to eager execution; see the timing sketch after the table below).

  2. It converges much more slowly (and I am not sure why).

+-------+--------------+-------------------------+-----------------+
| Epoch | GradientTape | GradientTape (shuffled) | keras.Model.fit |
+-------+--------------+-------------------------+-----------------+
|   1   |    0.905     |          0.918          |     0.8793      |
+-------+--------------+-------------------------+-----------------+
|   2   |    0.352     |          0.634          |     0.2226      |
+-------+--------------+-------------------------+-----------------+
|   3   |    0.285     |          0.518          |     0.1192      |
+-------+--------------+-------------------------+-----------------+
|   4   |    0.282     |          0.458          |     0.1029      |
+-------+--------------+-------------------------+-----------------+
|   5   |    0.275     |          0.421          |     0.0940      |
+-------+--------------+-------------------------+-----------------+

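Regarding point 1, a quick way to see how much of the speed gap comes from eager execution is to time one training step with and without tf.function. The sketch below is purely illustrative and assumes glove_model, optimizer and train_ds are defined as in the code further down; eager_step is a hypothetical undecorated copy of train_step:

import time
import tensorflow as tf

def eager_step(examples, labels):
    # Same body as train_step, but without @tf.function, so it runs op by op.
    with tf.GradientTape() as tape:
        predictions = glove_model(examples)
        loss = glove_model.glove_loss(labels, predictions)
    gradients = tape.gradient(loss, glove_model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, glove_model.trainable_variables))

examples, labels = next(iter(train_ds))
compiled_step = tf.function(eager_step)
compiled_step(examples, labels)  # first call traces and compiles the graph

for name, fn in [("eager", eager_step), ("tf.function", compiled_step)]:
    start = time.perf_counter()
    for _ in range(100):
        fn(examples, labels)
    print(name, time.perf_counter() - start)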
Here is my training loop using GradientTape:


import numpy as np
import tensorflow as tf
from tensorflow import data, keras
from tensorflow.keras import layers
from tqdm import tqdm

optimizer = keras.optimizers.Adam()
glove_model = GloveModel(vocab_size=len(labels))
train_loss = keras.metrics.Mean(name='train_loss')

@tf.function
def train_step(examples, labels):
    with tf.GradientTape() as tape:
        predictions = glove_model(examples)
        loss = glove_model.glove_loss(labels, predictions)

    gradients = tape.gradient(loss, glove_model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, glove_model.trainable_variables))

    train_loss(loss)


total_step = 0
for epoch in range(epochs_number):

    pbar = tqdm(train_ds.enumerate(), total=int(len(index_data) / batch_size) + 1)

    for ix, (examples, labels) in pbar:
        train_step(examples, labels)

    print(f"Epoch {epoch + 1}, Loss {train_loss.result()}")

    # Reset the metrics for the next epoch
    train_loss.reset_states()

Here is the Keras.Model.fit training:

glove_model.compile(optimizer, glove_model.glove_loss)
glove_model.fit(train_ds, epochs=epochs_number)

Here is the tf.data.Dataset:

train_ds = data.Dataset.from_tensor_slices(
    (np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1)]), index_data)
).shuffle(100000).batch(batch_size, drop_remainder=True)

And here is the model:

class GloveModel(keras.Model):

    def __init__(self, vocab_size, dim=100, a=3/4, x_max=100):
        super(GloveModel, self).__init__()

        self.vocab_size = vocab_size
        self.dim = dim
        self.a = a
        self.x_max = x_max

        self.target_embedding = layers.Embedding(
            input_dim=self.vocab_size, output_dim=self.dim, input_length=1, name="target_embedding"
        )
        self.target_bias = layers.Embedding(
            input_dim=self.vocab_size, output_dim=1, input_length=1, name="target_bias"
        )

        self.context_embedding = layers.Embedding(
            input_dim=self.vocab_size, output_dim=self.dim, input_length=1, name="context_embedding"
        )
        self.context_bias = layers.Embedding(
            input_dim=self.vocab_size, output_dim=1, input_length=1, name="context_bias"
        )

        self.dot_product = layers.Dot(axes=-1, name="dot")

        self.prediction = layers.Add(name="add")
        self.step = 0

    def call(self, inputs):
        target_ix = inputs[:, 0]
        context_ix = inputs[:, 1]

        target_embedding = self.target_embedding(target_ix)
        target_bias = self.target_bias(target_ix)

        context_embedding = self.context_embedding(context_ix)
        context_bias = self.context_bias(context_ix)

        dot_product = self.dot_product([target_embedding, context_embedding])
        prediction = self.prediction([dot_product, target_bias, context_bias])

        return prediction

    def glove_loss(self, y_true, y_pred):
        # GloVe weighting function: f(x) = min((x / x_max)^a, 1)
        weight = tf.math.minimum(
            tf.math.pow(y_true / self.x_max, self.a), 1.0
        )
        loss_value = tf.math.reduce_mean(weight * tf.math.pow(y_pred - tf.math.log(y_true), 2.0))

        return loss_value

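As a sanity check on glove_loss: it implements the weighted least-squares objective from the GloVe paper, f(X_ij) * (prediction - log X_ij)^2 with weighting f(x) = min((x / x_max)^a, 1). Below is a minimal sketch that mirrors the TF computation in NumPy; the co-occurrence counts are made-up values purely for illustration:

import numpy as np
import tensorflow as tf

y_true = tf.constant([10.0, 200.0])  # hypothetical co-occurrence counts X_ij
y_pred = tf.constant([2.0, 5.0])     # hypothetical dot products plus biases

x_max, a = 100.0, 3 / 4
weight = np.minimum((y_true.numpy() / x_max) ** a, 1.0)  # f(X_ij)
expected = np.mean(weight * (y_pred.numpy() - np.log(y_true.numpy())) ** 2)

model = GloveModel(vocab_size=10)
print(float(model.glove_loss(y_true, y_pred)), expected)  # the two should agree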


I have tried several configurations and optimizers, but nothing seems to change the convergence rate.

Best Answer

Dataset.shuffle() only shuffles within each minibatch, so every epoch sees the same order. Keras's .fit() uses some magic to shuffle the whole dataset before each epoch. To do this in TF, you need to use the Dataset's .repeat(epochs_number) together with .shuffle(..., reshuffle_each_iteration=True):

train_ds = data.Dataset.from_tensor_slices(
    (np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1)]), index_data)
).shuffle(100000, reshuffle_each_iteration=True
).batch(batch_size, drop_remainder=True
).repeat(epochs_number)

for ix, (examples, labels) in train_ds.enumerate():
    train_step(examples, labels)
    current_epoch = ix // (len(index_data) // batch_size)

This workaround is neither elegant nor natural, but for now you can use it to shuffle each epoch. It is a known issue that will be fixed; in the future you will be able to use for epoch in range(epochs_number) instead of .repeat().

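For what it's worth, in recent TF 2.x releases re-iterating a dataset built with reshuffle_each_iteration=True (which is the default) already reshuffles it on every pass, so a plain epoch loop works without .repeat(). A minimal sketch, assuming the same train_step, train_loss and variables as above:

train_ds = data.Dataset.from_tensor_slices(
    (np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1)]), index_data)
).shuffle(100000, reshuffle_each_iteration=True
).batch(batch_size, drop_remainder=True)

for epoch in range(epochs_number):
    for examples, labels in train_ds:  # each pass over the dataset reshuffles
        train_step(examples, labels)
    print(f"Epoch {epoch + 1}, Loss {train_loss.result()}")
    train_loss.reset_states()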
Regarding "python - GradientTape converges much more slowly than Keras.Model.fit", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58584359/
