
python - TensorFlow Keras CuDNNGRU to GRU conversion


I have a trained model, built in TensorFlow 1.14, that uses the (now deprecated) tf.keras.layers.CuDNNGRU layer (available in TensorFlow 2.0 under tf.compat.v1), and I am trying to port the old layer's weights into a new TensorFlow 2.0 model built with tf.keras.layers.GRU, in order to obtain an equivalent model.
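For concreteness, a minimal sketch of the two layer definitions involved (the unit count and other constructor arguments here are illustrative, not from my actual model):

import tensorflow as tf

# Old TF 1.14-style layer; GPU-only, exposed in TF 2.0 under tf.compat.v1.
cudnn_gru = tf.compat.v1.keras.layers.CuDNNGRU(units=128, return_sequences=True)

# New TF 2.0 layer that should end up with equivalent weights.
gru = tf.keras.layers.GRU(units=128, return_sequences=True, reset_after=True)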

One motivation for doing this is to be able to run inference on a CPU (the tf.compat.v1.keras.layers.CuDNNGRU layer only runs on a GPU). Another is to future-proof the model.

Question

How do I convert a trained tf.compat.v1.keras.layers.CuDNNGRU layer into an equivalent tf.keras.layers.GRU layer?

Best answer

The following private helper function from tensorflow.python.keras.saving.hdf5_format seems to do the trick. The function handles the more general task of converting weights between the CuDNNGRU/GRU and CuDNNLSTM/LSTM formats, so it works beyond just my use case. The function appears to originate from this pull request in standalone Keras.

import numpy as np


def _convert_rnn_weights(layer, weights):
    """Converts weights for RNN layers between native and CuDNN format.

    Input kernels for each gate are transposed and converted between Fortran
    and C layout, recurrent kernels are transposed. For LSTM biases are summed/
    split in half, for GRU biases are reshaped.

    Weights can be converted in both directions between `LSTM` and `CuDNNLSTM`
    and between `CuDNNGRU` and `GRU(reset_after=True)`. Default `GRU` is not
    compatible with `CuDNNGRU`.

    For missing biases in `LSTM`/`GRU` (`use_bias=False`) no conversion is made.

    Arguments:
        layer: Target layer instance.
        weights: List of source weights values (input kernels, recurrent
            kernels, [biases]) (Numpy arrays).

    Returns:
        A list of converted weights values (Numpy arrays).

    Raises:
        ValueError: for incompatible GRU layer/weights or incompatible biases
    """

    def transform_kernels(kernels, func, n_gates):
        """Transforms kernel for each gate separately using given function.

        Arguments:
            kernels: Stacked array of kernels for individual gates.
            func: Function applied to kernel of each gate.
            n_gates: Number of gates (4 for LSTM, 3 for GRU).

        Returns:
            Stacked array of transformed kernels.
        """
        return np.hstack([func(k) for k in np.hsplit(kernels, n_gates)])

    def transpose_input(from_cudnn):
        """Makes a function that transforms input kernels from/to CuDNN format.

        It keeps the shape, but changes between the layout (Fortran/C). Eg.:

        ```
        Keras                 CuDNN
        [[0, 1, 2],  <--->  [[0, 2, 4],
         [3, 4, 5]]          [1, 3, 5]]
        ```

        It can be passed to `transform_kernels()`.

        Arguments:
            from_cudnn: `True` if source weights are in CuDNN format, `False`
                if they're in plain Keras format.

        Returns:
            Function that converts input kernel to the other format.
        """
        order = 'F' if from_cudnn else 'C'

        def transform(kernel):
            return kernel.T.reshape(kernel.shape, order=order)

        return transform

    target_class = layer.__class__.__name__

    # convert the weights between CuDNNLSTM and LSTM
    if target_class in ['LSTM', 'CuDNNLSTM'] and len(weights) == 3:
        # determine if we're loading a CuDNNLSTM layer
        # from the number of bias weights:
        # CuDNNLSTM has (units * 8) weights; while LSTM has (units * 4)
        # if there's no bias weight in the file, skip this conversion
        units = weights[1].shape[0]
        bias_shape = weights[2].shape
        n_gates = 4

        if bias_shape == (2 * units * n_gates,):
            source = 'CuDNNLSTM'
        elif bias_shape == (units * n_gates,):
            source = 'LSTM'
        else:
            raise ValueError('Invalid bias shape: ' + str(bias_shape))

        def convert_lstm_weights(weights, from_cudnn=True):
            """Converts the weights between CuDNNLSTM and LSTM.

            Arguments:
                weights: Original weights.
                from_cudnn: Indicates whether original weights are from CuDNN layer.

            Returns:
                Updated weights compatible with LSTM.
            """

            # Transpose (and reshape) input and recurrent kernels
            kernels = transform_kernels(weights[0], transpose_input(from_cudnn),
                                        n_gates)
            recurrent_kernels = transform_kernels(weights[1], lambda k: k.T, n_gates)
            if from_cudnn:
                # merge input and recurrent biases into a single set
                biases = np.sum(np.split(weights[2], 2, axis=0), axis=0)
            else:
                # Split single set of biases evenly to two sets. The way of
                # splitting doesn't matter as long as the two sets sum is kept.
                biases = np.tile(0.5 * weights[2], 2)
            return [kernels, recurrent_kernels, biases]

        if source != target_class:
            weights = convert_lstm_weights(weights, from_cudnn=source == 'CuDNNLSTM')

    # convert the weights between CuDNNGRU and GRU(reset_after=True)
    if target_class in ['GRU', 'CuDNNGRU'] and len(weights) == 3:
        # We can determine the source of the weights from the shape of the bias.
        # If there is no bias we skip the conversion since
        # CuDNNGRU always has biases.

        units = weights[1].shape[0]
        bias_shape = weights[2].shape
        n_gates = 3

        def convert_gru_weights(weights, from_cudnn=True):
            """Converts the weights between CuDNNGRU and GRU.

            Arguments:
                weights: Original weights.
                from_cudnn: Indicates whether original weights are from CuDNN layer.

            Returns:
                Updated weights compatible with GRU.
            """
            kernels = transform_kernels(weights[0], transpose_input(from_cudnn),
                                        n_gates)
            recurrent_kernels = transform_kernels(weights[1], lambda k: k.T, n_gates)
            biases = np.array(weights[2]).reshape((2, -1) if from_cudnn else -1)
            return [kernels, recurrent_kernels, biases]

        if bias_shape == (2 * units * n_gates,):
            source = 'CuDNNGRU'
        elif bias_shape == (2, units * n_gates):
            source = 'GRU(reset_after=True)'
        elif bias_shape == (units * n_gates,):
            source = 'GRU(reset_after=False)'
        else:
            raise ValueError('Invalid bias shape: ' + str(bias_shape))

        if target_class == 'CuDNNGRU':
            target = 'CuDNNGRU'
        elif layer.reset_after:
            target = 'GRU(reset_after=True)'
        else:
            target = 'GRU(reset_after=False)'

        # only convert between different types
        if source != target:
            types = (source, target)
            if 'GRU(reset_after=False)' in types:
                raise ValueError('%s is not compatible with %s' % types)
            if source == 'CuDNNGRU':
                weights = convert_gru_weights(weights, from_cudnn=True)
            elif source == 'GRU(reset_after=True)':
                weights = convert_gru_weights(weights, from_cudnn=False)

    return weights

For my use case (loading CuDNNGRU weights into a GRU), the solution using this function looks like this:

# cudnn_gru and gru are built CuDNNGRU and GRU layers, respectively
kernel, recurrent_kernel, bias = _convert_rnn_weights(
    layer=gru,
    weights=[
        cudnn_gru.kernel.numpy(),
        cudnn_gru.recurrent_kernel.numpy(),
        cudnn_gru.bias.numpy(),
    ],
)
gru.cell.kernel.assign(kernel)
gru.cell.recurrent_kernel.assign(recurrent_kernel)
gru.cell.bias.assign(bias)
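As a rough sanity check (a sketch, assuming cudnn_gru and gru from above are built on the same input shape and the CuDNN layer is run on a GPU), the two layers can be compared on random data after the assignment:

x = np.random.normal(size=(4, 10, 32)).astype(np.float32)  # (batch, time, features); shapes are illustrative
out_old = cudnn_gru(x)  # CuDNNGRU only runs on GPU
out_new = gru(x)        # GRU runs on CPU or GPU
print(np.max(np.abs(out_old.numpy() - out_new.numpy())))  # expect a value near zero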

Note that to get the cuDNN-compatible implementation of tf.keras.layers.GRU, you must use a specific combination of parameters (in particular use_bias=True).
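For reference, a sketch of a GRU constructed with the parameter combination that, per the TF 2.0 documentation, allows the cuDNN kernel to be selected on GPU and keeps the weight layout compatible with CuDNNGRU (the unit count and return_sequences are illustrative):

gru = tf.keras.layers.GRU(
    units=128,
    activation='tanh',
    recurrent_activation='sigmoid',
    recurrent_dropout=0,
    unroll=False,
    use_bias=True,
    reset_after=True,  # required to match the CuDNNGRU bias layout
    return_sequences=True,
)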

Regarding python - TensorFlow Keras CuDNNGRU to GRU conversion, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58807467/
