
python - Hidden size vs. input size in an RNN


Premise 1:

Regarding the neurons in an RNN layer - my understanding is that, "at each time step, every neuron receives both the input vector x(t) and the output vector from the previous time step y(t-1)" [1]:

https://github.com/ebolotin6/ebolotin6.github.io/blob/master/images/rnn.png
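
To make this premise concrete, here is a minimal sketch of a single time step of such a layer. This is my own illustration (not from the book), assuming a tanh activation and randomly initialized weights:

import torch

input_size, num_neurons = 5, 3
W_x = torch.randn(input_size, num_neurons)   # input-to-neuron weights
W_y = torch.randn(num_neurons, num_neurons)  # recurrent (output-to-neuron) weights
b = torch.zeros(num_neurons)

def rnn_step(x_t, y_prev):
    # y(t) = tanh(x(t)·W_x + y(t-1)·W_y + b): each neuron sees the current
    # input vector and the previous time step's output vector.
    return torch.tanh(x_t @ W_x + y_prev @ W_y + b)

x_t = torch.randn(1, input_size)       # input vector x(t), batch of 1
y_prev = torch.zeros(1, num_neurons)   # previous output y(t-1)
print(rnn_step(x_t, y_prev).shape)     # torch.Size([1, 3])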

Premise 2:

As I understand it, in PyTorch's GRU layer, input_size and hidden_size mean the following:

  • input_size – The number of expected features in the input x
  • hidden_size – The number of features in the hidden state h

Naturally, hidden_size should then represent the number of neurons in the GRU layer.
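
To illustrate what the two arguments control (this is my own minimal sketch, not part of the original question), the last dimension of the input must be input_size, while the last dimension of the output and hidden state is hidden_size:

import torch
import torch.nn as nn

gru = nn.GRU(input_size=3, hidden_size=3)

seq_len, batch_size = 4, 2
x = torch.randn(seq_len, batch_size, 3)  # (seq_len, batch, input_size)
output, h_n = gru(x)

print(output.shape)  # torch.Size([4, 2, 3]) -> (seq_len, batch, hidden_size)
print(h_n.shape)     # torch.Size([1, 2, 3]) -> (num_layers, batch, hidden_size)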

My question:

Given the following GRU layer:

# assume that hidden_size = 3

class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, hidden_size):
        super(Encoder, self).__init__()
        self.embedding = nn.Embedding(src_dictionary_size, hidden_size)
        self.gru = nn.GRU(input_size=hidden_size, hidden_size=hidden_size)

Assuming hidden_size = 3, my understanding is that the GRU layer above has 3 neurons, each of which accepts an input vector of size 3 at every time step.

My question is: why must the hidden_size and input_size arguments be equal? That is, why can't each of the 3 neurons accept an input vector of, say, size 5?

For example, both of the following lead to a size mismatch:

self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size-1)
self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size+1)

[1] Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn and TensorFlow (p. 388). O'Reilly Media, Kindle Edition.

[3] https://pytorch.org/docs/stable/nn.html#torch.nn.GRU


Adding the full code for reproducibility:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, hidden_size):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(src_dictionary_size, hidden_size)
        self.gru = nn.GRU(input_size=hidden_size, hidden_size=hidden_size-1)

    def forward(self, pad_seqs, seq_lengths, hidden):
        """
        Args:
          pad_seqs of shape (max_seq_length, batch_size, 1): Padded source sequences.
          seq_lengths: List of sequence lengths.
          hidden of shape (1, batch_size, hidden_size): Initial states of the GRU.

        Returns:
          outputs of shape (max_seq_length, batch_size, hidden_size): Padded outputs of GRU at every step.
          hidden of shape (1, batch_size, hidden_size): Updated states of the GRU.
        """
        embedded_sqs = self.embedding(pad_seqs).squeeze(2)
        packed_sqs = pack_padded_sequence(embedded_sqs, seq_lengths)
        packed_output, h_n = self.gru(packed_sqs, hidden)
        output, input_sizes = pad_packed_sequence(packed_output)

        return output, h_n

    def init_hidden(self, batch_size=1):
        return torch.zeros(1, batch_size, self.hidden_size)

def test_Encoder_shapes():
    hidden_size = 5
    encoder = Encoder(src_dictionary_size=5, hidden_size=hidden_size)

    # maximum word count
    max_seq_length = 4

    # num sentences
    batch_size = 2
    hidden = encoder.init_hidden(batch_size=batch_size)

    # these are padded sequences (sentences of words): a batch of 2 sentences with at most 4 words each
    pad_seqs = torch.tensor([
        [1, 2],
        [2, 3],
        [3, 0],
        [4, 0]
    ]).view(max_seq_length, batch_size, 1)

    outputs, new_hidden = encoder.forward(pad_seqs=pad_seqs, seq_lengths=[4, 2], hidden=hidden)
    assert outputs.shape == torch.Size([4, batch_size, hidden_size]), f"Bad outputs.shape: {outputs.shape}"
    assert new_hidden.shape == torch.Size([1, batch_size, hidden_size]), f"Bad new_hidden.shape: {new_hidden.shape}"
    print('Success')

test_Encoder_shapes()

Best Answer

I just figured this out, and the mistake was self-inflicted.

Conclusion: input_size and hidden_size can differ in size, and there is no inherent problem with that. The premises stated in the question are correct.

The problem with the (full) code above was that the initial hidden state of the GRU did not have the correct dimensions. The initial hidden state must have the same dimensions as the subsequent hidden states. In my case, the initial hidden state had shape (1, 2, 5) instead of (1, 2, 4): the 5 is the dimensionality of the embedding vector, while the 4 is the hidden_size (number of neurons) of the GRU. The corrected code is below:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, input_size, hidden_size):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(src_dictionary_size, input_size)
        self.gru = nn.GRU(input_size=input_size, hidden_size=hidden_size)

    def forward(self, pad_seqs, seq_lengths, hidden):
        """
        Args:
          pad_seqs of shape (max_seq_length, batch_size, 1): Padded source sequences.
          seq_lengths: List of sequence lengths.
          hidden of shape (1, batch_size, hidden_size): Initial states of the GRU.

        Returns:
          outputs of shape (max_seq_length, batch_size, hidden_size): Padded outputs of GRU at every step.
          hidden of shape (1, batch_size, hidden_size): Updated states of the GRU.
        """
        embedded_sqs = self.embedding(pad_seqs).squeeze(2)
        packed_sqs = pack_padded_sequence(embedded_sqs, seq_lengths)
        packed_output, h_n = self.gru(packed_sqs, hidden)
        output, input_sizes = pad_packed_sequence(packed_output)

        return output, h_n

    def init_hidden(self, batch_size=1):
        return torch.zeros(1, batch_size, self.hidden_size)

def test_Encoder_shapes():
    hidden_size = 4
    embedding_size = 5
    encoder = Encoder(src_dictionary_size=5, input_size=embedding_size, hidden_size=hidden_size)
    print(encoder)

    max_seq_length = 4
    batch_size = 2
    hidden = encoder.init_hidden(batch_size=batch_size)
    pad_seqs = torch.tensor([
        [1, 2],
        [2, 3],
        [3, 0],
        [4, 0]
    ]).view(max_seq_length, batch_size, 1)

    outputs, new_hidden = encoder.forward(pad_seqs=pad_seqs, seq_lengths=[4, 2], hidden=hidden)
    assert outputs.shape == torch.Size([4, batch_size, hidden_size]), f"Bad outputs.shape: {outputs.shape}"
    assert new_hidden.shape == torch.Size([1, batch_size, hidden_size]), f"Bad new_hidden.shape: {new_hidden.shape}"
    print('Success')

test_Encoder_shapes()
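
For reference, the same shape rule stripped down to a few lines (my own minimal sketch, independent of the encoder above): the input's last dimension must be input_size, while the initial hidden state's last dimension must be hidden_size.

import torch
import torch.nn as nn

gru = nn.GRU(input_size=5, hidden_size=4)

seq_len, batch_size = 4, 2
x = torch.randn(seq_len, batch_size, 5)  # last dim = input_size
h0 = torch.zeros(1, batch_size, 4)       # (num_layers, batch, hidden_size)

output, h_n = gru(x, h0)
print(output.shape)  # torch.Size([4, 2, 4])
print(h_n.shape)     # torch.Size([1, 2, 4])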

On python - Hidden size vs. input size in an RNN, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59182518/
