
python - Why does Keras Tokenizer texts_to_sequences return the same value for all texts?


I am trying to build a Keras LSTM that classifies words as 0 or 1. However, no matter what text I enter, the network returns a value close to zero. I have narrowed the problem down to the Keras Tokenizer. I added debug print statements and commented out the model.predict() code to test this; every word I enter comes back as the same array, [[208]].

The code is below:

from builtins import len

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras import layers
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import enchant
import re

d = enchant.Dict("en_US")

df = pd.read_csv('sentiments.csv')
df.columns = ["label", "text"]
x = df['text'].values
y = df['label'].values

x_train, x_test, y_train, y_test = \
    train_test_split(x, y, test_size=0.1, random_state=123)

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(x)
xtrain = tokenizer.texts_to_sequences(x_train)
xtest = tokenizer.texts_to_sequences(x_test)

vocab_size = len(tokenizer.word_index) + 1

maxlen = 10
xtrain = pad_sequences(xtrain, padding='post', maxlen=maxlen)
xtest = pad_sequences(xtest, padding='post', maxlen=maxlen)

print(x_train[3])
print(xtrain[3])

embedding_dim = 50
model = Sequential()
model.add(layers.Embedding(input_dim=(vocab_size + 1),
                           output_dim=embedding_dim,
                           input_length=maxlen))
model.add(layers.LSTM(units=50, return_sequences=True))
model.add(layers.LSTM(units=10))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(8))
model.add(layers.Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=['accuracy'])
model.summary()
model.fit(xtrain, y_train, epochs=20, batch_size=16, verbose=False)

loss, acc = model.evaluate(xtrain, y_train, verbose=False)
print("Training Accuracy: ", acc)
loss, acc = model.evaluate(xtest, y_test, verbose=False)
print("Test Accuracy: ", acc)

text_input = str(input("Enter a word for analysis: "))

if d.check(text_input):
    word_Arr = []
    word_Arr.append(text_input)
    tokenizer.fit_on_texts(word_Arr)
    word_final = tokenizer.texts_to_sequences(word_Arr)
    word_final_final = np.asarray(word_final)

    print(word_final_final)

    # newArr = np.zeros(shape=(6, 10))
    # newArr[0] = word_final_final

    # print(model.predict(newArr))

How should I proceed?

Best Answer

You re-fit your Tokenizer instance, originally fitted here:

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(x)

with the newly entered word itself:

tokenizer.fit_on_texts(word_Arr)

As a result, the word index you built when training the model no longer applies: the re-fitted Tokenizer rebuilds its vocabulary ranking to include the word you just entered, so the sequence it produces does not match the indices the model was trained on.

Example:

tokenizer = Tokenizer(num_words=100)
tokenizer.fit_on_texts(["dog, cat, horse"])
text_input = str(input("Enter a word for analysis: "))

word_Arr = []
word_Arr.append(text_input)

# here is your problem!!!
tokenizer.fit_on_texts(word_Arr)

word_final = tokenizer.texts_to_sequences(word_Arr)
word_final_final = np.asarray(word_final)

print(word_final_final)

Output:

Enter a word for analysis: dog
[[1]]
Enter a word for analysis: cat
[[1]]
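
Both words come back as [[1]] because the second fit_on_texts call increments that word's frequency count, and the Tokenizer rebuilds word_index ranked by frequency, pushing the entered word to index 1. You can observe this directly by printing word_index; a minimal sketch using the same toy corpus:

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=100)
tokenizer.fit_on_texts(["dog, cat, horse"])
print(tokenizer.word_index)   # {'dog': 1, 'cat': 2, 'horse': 3}

# Re-fitting on "cat" raises cat's count to 2; word_index is rebuilt
# sorted by frequency, so "cat" jumps to index 1.
tokenizer.fit_on_texts(["cat"])
print(tokenizer.word_index)   # {'cat': 1, 'dog': 2, 'horse': 3}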

Commenting out the problematic part of the code:

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(["dog, cat, horse"])
text_input = str(input("Enter a word for analysis: "))

word_Arr = []
word_Arr.append(text_input)

# commenting out your problem!!!
# tokenizer.fit_on_texts(word_Arr)

word_final = tokenizer.texts_to_sequences(word_Arr)
word_final_final = np.asarray(word_final)

print(word_final_final)

Output:

Enter a word for analysis: cat
[[2]]
Enter a word for analysis: dog
[[1]]
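
With the extra fit_on_texts call removed, the Tokenizer keeps the vocabulary the model was trained on, and the commented-out prediction code from the question can be completed by padding the new sequence the same way the training data was padded. A minimal sketch, assuming the tokenizer and model defined in the question (word_seq and word_padded are illustrative names):

from keras.preprocessing.sequence import pad_sequences

text_input = str(input("Enter a word for analysis: "))

# Transform only; do NOT call fit_on_texts on the new input.
word_seq = tokenizer.texts_to_sequences([text_input])

# Pad exactly as during training: maxlen=10, post-padding.
word_padded = pad_sequences(word_seq, padding='post', maxlen=10)

print(model.predict(word_padded))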

Regarding python - Why does Keras Tokenizer texts_to_sequences return the same value for all texts?, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/59513102/
