
python - Size (shape) change caused by np.hstack on the output of tf.keras.preprocessing.text.Tokenizer.texts_to_sequences

Reposted · Author: 行者123 · Updated: 2023-12-01 06:24:21

I have applied np.hstack to the output of tensorflow.keras.preprocessing.text.Tokenizer.texts_to_sequences, for both the training labels and the validation (test) labels.

Surprisingly and mysteriously, for the training labels the size of the output after applying np.hstack differs from the size before. However, for the validation labels the shape is unchanged before and after applying np.hstack to the tensorflow.keras.preprocessing.text.Tokenizer.texts_to_sequences output.

Here is a link to a Google Colab that makes it easy to reproduce the error.

The complete code to reproduce the error is given below (in case the link does not work):

!pip install tensorflow==2.1

# For Preprocessing the Text => To Tokenize the Text
from tensorflow.keras.preprocessing.text import Tokenizer
# If the Two Articles are of different length, pad_sequences will make the length equal
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Package for performing Numerical Operations
import numpy as np

Unique_Labels_List = ['India', 'USA', 'Australia', 'Germany', 'Bhutan', 'Nepal', 'New Zealand', 'Israel', 'Canada', 'France', 'Ireland', 'Poland', 'Egypt', 'Greece', 'China', 'Spain', 'Mexico']


Train_Labels = Unique_Labels_List[0:14]
#print('Train Labels = {}'.format(Train_Labels))

Val_Labels = Unique_Labels_List[14:]
#print('Val_Labels = {}'.format(Val_Labels))

No_Of_Train_Items = [248, 200, 200, 218, 248, 248, 249, 247, 220, 200, 200, 211, 224, 209]
No_Val_Items = [212, 200, 219]

T_L = []
for Each_Label, Item in zip(Train_Labels, No_Of_Train_Items):
T_L.append([Each_Label] * Item)

T_L = [item for sublist in T_L for item in sublist]

V_L = []
for Each_Label, Item in zip(Val_Labels, No_Val_Items):
V_L.append([Each_Label] * Item)

V_L = [item for sublist in V_L for item in sublist]


len(T_L)

len(V_L)

label_tokenizer = Tokenizer()

label_tokenizer.fit_on_texts(Unique_Labels_List)

# Since it should be a Numpy Array, we should Convert the Sequences to Numpy Array, for both Training and
# Test Labels

training_label_list = label_tokenizer.texts_to_sequences(T_L)

validation_label_list = label_tokenizer.texts_to_sequences(V_L)

training_label_seq = np.hstack(training_label_list)

validation_label_seq = np.hstack(validation_label_list)

print('Actual Number of Train Labels before np.hstack are {}'.format(len(training_label_list)))
print('Change in the Number of Train Labels because of np.hstack are {}'.format(len(training_label_seq)))

print('-------------------------------------------------------------------------------------------------------')

print('Actual Number of Validation Labels before np.hstack are {}'.format(len(validation_label_list)))
print('However, there is no change in the Number of Validation Labels because of np.hstack {}'.format(len(validation_label_seq)))

Thank you in advance.

Best Answer

This is because training_label_list contains lists with more than one value. You can verify this with sorted(training_label_list, key=lambda x: len(x), reverse=True).
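The discrepancy can also be checked arithmetically, without TensorFlow. A minimal sketch, reusing the item counts from the question's code (where index 6 corresponds to 'New Zealand'):

```python
# Pure-Python sketch of why the lengths differ: texts_to_sequences maps each
# label string to a *list* of word indices, so the two-word label
# 'New Zealand' yields a two-element list, and np.hstack then flattens those
# extra elements into the final array.
No_Of_Train_Items = [248, 200, 200, 218, 248, 248, 249, 247, 220, 200, 200, 211, 224, 209]

n_labels = sum(No_Of_Train_Items)     # 3122 label strings in T_L
n_new_zealand = No_Of_Train_Items[6]  # 249 copies of 'New Zealand'

# Single-word labels contribute 1 index each; 'New Zealand' contributes 2.
hstack_length = n_labels + n_new_zealand
print(n_labels, hstack_length)  # 3122 3371
```

The validation labels ('China', 'Spain', 'Mexico') are all single words, which is why their count is unchanged by np.hstack.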

This happens because label_tokenizer treats 'New Zealand' as two words:

>>> label_tokenizer.index_word
{1: 'india',
2: 'usa',
3: 'australia',
4: 'germany',
5: 'bhutan',
6: 'nepal',
7: 'new',
8: 'zealand',
9: 'israel',
10: 'canada',
11: 'france',
12: 'ireland',
13: 'poland',
14: 'egypt',
15: 'greece',
16: 'china',
17: 'spain',
18: 'mexico'}

Check indices 7 and 8.
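One possible fix, sketched below without TensorFlow, is to skip the word tokenizer for labels entirely and map each label string to a single integer id, so a multi-word label such as 'New Zealand' stays one token instead of being split into two:

```python
# Minimal sketch of a fix: build a label -> id mapping directly, so that
# every label, including the two-word 'New Zealand', maps to exactly one id.
Unique_Labels_List = ['India', 'USA', 'Australia', 'Germany', 'Bhutan',
                      'Nepal', 'New Zealand', 'Israel', 'Canada', 'France',
                      'Ireland', 'Poland', 'Egypt', 'Greece', 'China',
                      'Spain', 'Mexico']

# 1-based ids, mirroring the Keras Tokenizer convention of starting at 1
label_to_id = {label: i + 1 for i, label in enumerate(Unique_Labels_List)}

# Tiny stand-in for the real T_L list from the question
T_L = ['India'] * 3 + ['New Zealand'] * 2
training_label_seq = [label_to_id[label] for label in T_L]

# The number of ids now always equals the number of labels.
assert len(training_label_seq) == len(T_L)
print(training_label_seq)  # [1, 1, 1, 7, 7]
```

Alternatively, you could replace the spaces inside multi-word labels (e.g. 'New Zealand' → 'New_Zealand') before calling fit_on_texts, so the tokenizer sees each label as a single word.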

Regarding "python - Size (shape) change caused by np.hstack on the output of tf.keras.preprocessing.text.Tokenizer.texts_to_sequences", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60237754/
