
python - Text classification with an LSTM network and Keras gives 0.0% accuracy


I have a CSV file with two columns:

category, description

There are 1030 categories in the file and only about 12,600 rows.

I need to build a text classification model and train it on this data. I am using Keras with an LSTM model.

I found an article describing how to do binary classification and modified it slightly to handle multiple classes.

My code:

import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
from numpy import array
from keras.preprocessing.text import one_hot
from sklearn.preprocessing import LabelEncoder
from keras.preprocessing import sequence
import keras

df = pd.read_csv('/tmp/input_data.csv')

#one hot encode your documents

# integer encode the documents
vocab_size = 2000
encoded_docs = [one_hot(d, vocab_size) for d in df['description']]

def load_data_from_arrays(strings, labels, train_test_split=0.9):
    data_size = len(strings)
    test_size = int(data_size - round(data_size * train_test_split))
    print("Test size: {}".format(test_size))

    print("\nTraining set:")
    x_train = strings[test_size:]
    print("\t - x_train: {}".format(len(x_train)))
    y_train = labels[test_size:]
    print("\t - y_train: {}".format(len(y_train)))

    print("\nTesting set:")
    x_test = strings[:test_size]
    print("\t - x_test: {}".format(len(x_test)))
    y_test = labels[:test_size]
    print("\t - y_test: {}".format(len(y_test)))

    return x_train, y_train, x_test, y_test


encoder = LabelEncoder()
categories = encoder.fit_transform(df['category'])
num_classes = np.max(categories) + 1
print('Categories count: {}'.format(num_classes))
#Categories count: 1030

X_train, y_train, x_test, y_test = load_data_from_arrays(encoded_docs, categories, train_test_split=0.8)

# Truncate and pad the review sequences

max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
x_test = sequence.pad_sequences(x_test, maxlen=max_review_length)

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)

# Build the model
embedding_vector_length = 32
top_words = 10000

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy'])
print(model.summary())

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_8 (Embedding)      (None, 500, 32)           320000
_________________________________________________________________
lstm_8 (LSTM)                (None, 100)                53200
_________________________________________________________________
dense_8 (Dense)              (None, 1030)               104030
=================================================================
Total params: 477,230
Trainable params: 477,230
Non-trainable params: 0
_________________________________________________________________
None

#Train the model
model.fit(X_train, y_train, validation_data=(x_test, y_test), epochs=5, batch_size=64)

Train on 10118 samples, validate on 2530 samples
Epoch 1/5
10118/10118 [==============================] - 60s 6ms/step - loss: 6.5086 - acc: 0.0019 - val_loss: 10.0911 - val_acc: 0.0000e+00
Epoch 2/5
10118/10118 [==============================] - 63s 6ms/step - loss: 6.3281 - acc: 0.0028 - val_loss: 10.8270 - val_acc: 0.0000e+00
Epoch 3/5
10118/10118 [==============================] - 63s 6ms/step - loss: 6.3120 - acc: 0.0024 - val_loss: 11.0078 - val_acc: 0.0000e+00
Epoch 4/5
10118/10118 [==============================] - 64s 6ms/step - loss: 6.2891 - acc: 0.0030 - val_loss: 11.8264 - val_acc: 0.0000e+00
Epoch 5/5
10118/10118 [==============================] - 69s 7ms/step - loss: 6.2559 - acc: 0.0032 - val_loss: 12.1625 - val_acc: 0.0000e+00

#Evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Accuracy: 0.00%

What mistake am I making when preparing the data? Why is the accuracy always 0?

Best answer

I put together end-to-end code with some inputs of my own and tested it on this data. You can use the same for your data with no or very few changes, since I have stripped out the specifics and kept it generic. At the end, I have also highlighted what I did on top of the code you provided above.

Code

import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
from nltk.tokenize import word_tokenize

def load_data_from_arrays(strings, labels, train_test_split=0.9):
    data_size = len(strings)
    test_size = int(data_size - round(data_size * train_test_split))
    print("Test size: {}".format(test_size))

    print("\nTraining set:")
    x_train = strings[test_size:]
    print("\t - x_train: {}".format(len(x_train)))
    y_train = labels[test_size:]
    print("\t - y_train: {}".format(len(y_train)))

    print("\nTesting set:")
    x_test = strings[:test_size]
    print("\t - x_test: {}".format(len(x_test)))
    y_test = labels[:test_size]
    print("\t - y_test: {}".format(len(y_test)))

    return x_train, y_train, x_test, y_test

# estimating the vocab length with the help of nltk
def get_vocab_length(strings):
    vocab = []
    for sent in strings:
        words = word_tokenize(sent)
        vocab.extend(words)
    vocab = list(set(vocab))
    vocab_length = len(vocab)
    return vocab_length

def clean_text(sent):

    # <your cleaning code here>
    # clean func 1
    # clean func 2
    # ...
    # clean func n

    return sent

# load input data
df = pd.read_csv('/tmp/input_data.csv')
strings = df['description'].values
labels = df['category'].values

clean_strings = [clean_text(sent) for sent in strings]

vocab_length = get_vocab_length(clean_strings)

# create onehot encodings of strings
encoded_docs = [one_hot(sent, vocab_length) for sent in clean_strings]

# create onehot encodings of labels
ohe = OneHotEncoder()
categories = ohe.fit_transform(labels.reshape(-1,1)).toarray()

# split data
X_train, y_train, X_test, y_test = load_data_from_arrays(encoded_docs, categories, train_test_split=0.8)

# assuming max input to be not more than 512 words
max_input_len = 512

# padding data
X_train = pad_sequences(X_train, maxlen=max_input_len, padding= 'post')
X_test = pad_sequences(X_test, maxlen=max_input_len, padding= 'post')

# setting embedding vector length
embedding_vector_length = 32

model = Sequential()
model.add(Embedding(vocab_length, embedding_vector_length, input_length=max_input_len, name= 'embedding') )
model.add(Flatten())
model.add(Dense(categories.shape[1], activation= 'softmax'))  # output size matches the number of one-hot encoded categories
model.compile('adam', loss= 'categorical_crossentropy', metrics= ['accuracy'])
model.summary()

# training the model
model.fit(X_train, y_train, epochs= 10, batch_size= 128, validation_split= 0.2, verbose= 1)

# evaluating the model
score = model.evaluate(X_test, y_test, verbose=0)
print("Test Loss:", score[0])
print("Test Acc:", score[1])

Other areas I worked on

1. Text cleaning

I created a function to clean the text. This is very important, because it removes unnecessary noise from the data; note also that this step depends entirely on the kind of data you have. To keep things simple, I added a clean_text function in the code above where you can place your cleaning code. It should take raw text in and return clean text. Some libraries you may want to look into are re, string and emoji.
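
As an illustration only (the answer leaves the clean_text body empty on purpose), a minimal version using re and string might look like the sketch below; which steps you keep, such as lowercasing or stripping punctuation, is an assumption to adapt to your data.

import re
import string

def clean_text(sent):
    # lowercase (assumes case carries no signal for these categories)
    sent = sent.lower()
    # drop URLs
    sent = re.sub(r'https?://\S+', ' ', sent)
    # strip punctuation
    sent = sent.translate(str.maketrans('', '', string.punctuation))
    # collapse repeated whitespace
    return re.sub(r'\s+', ' ', sent).strip()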

2. Estimating the vocabulary size

If you have enough data, it is better to estimate the vocabulary size rather than directly passing some arbitrary number to the Keras one_hot function. I created a basic get_vocab_length function using nltk's word_tokenize. You can use it as-is or enhance it further for your data.
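
Note that word_tokenize needs the nltk punkt models downloaded once. A slightly leaner variant of the same idea, building the vocabulary as a set instead of a list, could be (a sketch, not part of the original answer):

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')  # one-time download of the tokenizer models

def get_vocab_length(strings):
    vocab = set()  # a set avoids the intermediate list and the final dedup step
    for sent in strings:
        vocab.update(word_tokenize(sent))
    return len(vocab)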

What else?

You can go further with hyperparameter tuning and try a few different neural network designs.
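
For example, since the question used an LSTM, one alternative design under the same preprocessing as above (the layer sizes here are assumptions to tune, not a recommendation from the original answer) would swap the Flatten layer for a recurrent one:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(vocab_length, embedding_vector_length, input_length=max_input_len))
model.add(LSTM(128, dropout=0.2))  # recurrent layer instead of Flatten
model.add(Dense(categories.shape[1], activation='softmax'))
model.compile('adam', loss='categorical_crossentropy', metrics=['accuracy'])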

Final words

It may still not work, because everything depends on the quality and quantity of the data you have. If your data is of poor quality or there is very little of it, then even after trying everything you may well not get good results.

In that case, I suggest you try transfer learning on pre-trained models such as BERT, RoBERTa, etc. HuggingFace provides good support for state-of-the-art pre-trained models; you can start from the following links -
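
As a rough illustration of that direction (not from the original answer; the checkpoint name and arguments are assumptions), fine-tuning a pre-trained model with the HuggingFace transformers library typically starts from something like:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'bert-base-uncased'  # hypothetical choice; pick any suitable pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1030)

# tokenize the descriptions, then fine-tune with the Trainer API or a plain training loop
encodings = tokenizer(list(df['description']), truncation=True, padding=True)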

Regarding python - Text classification with an LSTM network and Keras gives 0.0% accuracy, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53969966/
