
r - Text classification in R


My goal is to automatically route feedback emails to the appropriate departments.
My fields are: FNUMBER, CATEGORY, SUBCATEGORY, Description.
I have the last six months of data in this format - the full email text is stored in Description, together with its CATEGORY and SUBCATEGORY.

I have to analyse the Description column and find the keywords for each Category/Subcategory, so that when the next feedback email comes in, it is automatically classified into a Category and Subcategory based on the keywords generated from the historical data.

I have imported the XML file into R - for text classification in R - and converted the XML into a data frame with the required fields. I have 23,017 records for a particular month; only the first 20 rows are shown in the data frame below.

I have more than 100 categories and subcategories.
I am new to text mining, but with the help of SO and the tm package I have tried the following code:

step1 <-  structure(list(FNUMBER = structure(1:20, .Label = c(" 20131202-0885 ", 
"20131202-0886 ", "20131202-0985 ", "20131202-1145 ", "20131202-1227 ",
"20131202-1228 ", "20131202-1235 ", "20131202-1236 ", "20131202-1247 ",
"20131202-1248 ", "20131202-1249 ", "20131222-0157 ", "20131230-0668 ",
"20131230-0706 ", "20131230-0776 ", "20131230-0863 ", "20131230-0865 ",
"20131230-0866 ", "20131230-0868 ", "20131230-0874 "), class = "factor"),
CATEGORY = structure(c(9L, 14L, 11L, 6L, 10L, 12L, 7L, 11L,
13L, 13L, 6L, 1L, 2L, 5L, 4L, 8L, 8L, 3L, 11L, 11L), .Label = c(" BVL-Vocational Licence (VL) Investigation ",
" BVL - Bus Licensing ", " Corporate Transformation Office (CTO) ",
" CSV - Customer Service ", " Deregistration - Transfer/Split/Encash Rebates ",
" ENF - Enforcement Matters ", " ENF - Illegal Parking ",
" Marina Coastal Expressway ", " PTQ - Public Transport Quality ",
" Road Asset Management ", " Service Quality (SQ) ", " Traffic Management & Cycling ",
" VR - Issuance/disputes of bookings by vendors ", " VRLSO - Update Owner's Particulars "
), class = "factor"), SUBCATEGORY = structure(c(2L, 15L,
5L, 1L, 3L, 14L, 6L, 12L, 8L, 8L, 18L, 17L, 11L, 10L, 16L,
7L, 9L, 4L, 13L, 12L), .Label = c(" Abandoned Vehicles ",
" Bus driver behaviour ", " Claims for accident ", " Corporate Development ",
" FAQ ", " Illegal Parking ", " Intra Group (Straddling Case) ",
" Issuance/disputes of bookings by vendors ", " MCE ", " PARF (Transfer/Split/Encash) ",
" Private bus related matters ", " Referrals ", " Straddle Cases (Across Groups) ",
" Traffic Flow ", " Update Owner Particulars ", " Vehicle Related Matters ",
" VL Holders (Complaint/Investigation/Appeal) ", " Warrant of Arrrest "
), class = "factor"), Description = structure(c(3L, 1L, 2L,
9L, 4L, 7L, 8L, 6L, 5L, 3L, 1L, 2L, 9L, 4L, 7L, 8L, 6L, 5L,
7L, 8L), .Label = c(" The street is the ONLY road leading to &amp; exit for vehicles and buses to (I think) four temples and, with the latest addition of 8B, four (!!) industrial estate.",
"Could you kindly increase the frequencies for Service 58. All my colleagues who travelled AVOID 58!!!\nThey would rather take 62-87 instead of 3-58",
"I saw bus no. 169A approaching the bus stop. At that time, the passengers had already boarded and alighted from the bus.",
"I want to apologise and excuse about my summon because I dont know can&apos;t park my motorcycle at the double line when I friday prayer ..please forgive me",
"Many thanks for the prompt action. However please note that the rectification could rather short term as it&apos;s just replacing the bulb but without the proper cover to protect against the elements.PS. the same job was done i.e. without installing a cover a few months back; and the same problem happen again.",
"Placed in such a manner than it cannot be seen properly due to the background ahead; colours blend.There is not much room angle to divert from 1st lane to 2nd lane. The outer most cone covers more than 1st lane",
"The vehicle GX3368K was observed to be driving along PIE towards Changi on 28th November 2013, 3:48pm without functioning braking lights during the day.",
"The vehicle was behaving suspiciously with many sudden brakes - which caused vehicles behind to do heavy &quot;jam brakes&quot; due to no warnings at all (no brake lights).",
"We have received a feedback regarding the back lane of the said address being blocked up by items.\nKindly investigate and keep us in the loop on the actions taken while we look into any fire safety issues on this case again."
), class = "factor")), .Names = c("FNUMBER", "CATEGORY",
"SUBCATEGORY", "Description"), class = "data.frame", row.names = c(NA,
-20L))

dim(step1)
names(step1)

library(tm)
library(SnowballC)  # required by stemDocument below

# Map the data frame columns onto the corpus: FNUMBER becomes the document
# ID and Description the document content.
m <- list(ID = "FNUMBER", Content = "Description")
myReader <- readTabular(mapping = m)
txt <- Corpus(DataframeSource(step1), readerControl = list(reader = myReader))

summary(txt)

# Standard cleaning pipeline: lower-case, drop numbers, punctuation, extra
# whitespace and English stopwords, then stem. tolower() is a base function,
# so it is wrapped in content_transformer() to keep the corpus class intact.
txt <- tm_map(txt, content_transformer(tolower))
txt <- tm_map(txt, removeNumbers)
txt <- tm_map(txt, removePunctuation)
txt <- tm_map(txt, stripWhitespace)
txt <- tm_map(txt, removeWords, stopwords("english"))
txt <- tm_map(txt, stemDocument)

tdm <- TermDocumentMatrix(txt,
                          control = list(removePunctuation = TRUE,
                                         stopwords = TRUE))
tdm
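
As a quick sanity check (an illustrative addition, not part of the original post), the resulting matrix can be inspected with tm's built-in helpers:

# stemmed terms that occur at least twice across the corpus
findFreqTerms(tdm, lowfreq = 2)

# raw counts for the first few terms and documents
inspect(tdm[1:5, 1:5])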

Update:
I have now got the most frequently occurring keywords across the whole dataset:
tdm3 <- removeSparseTerms(tdm, 0.98)
TDM.dense <- as.matrix(tdm3)

library(reshape2)  # melt() comes from reshape2
TDM.dense <- melt(TDM.dense, value.name = "count")

# Sum the counts per term across all documents (avoids attach()).
TDM_Final <- aggregate(count ~ Terms, data = TDM.dense, FUN = sum)
colnames(TDM_Final) <- c("Words", "Word_Freq")
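
The top keywords can then be listed, for example (an illustrative check, not in the original):

# ten most frequent stemmed terms across the whole dataset
head(TDM_Final[order(-TDM_Final$Word_Freq), ], 10)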

I am stuck after this. I am not sure how to get:

1. A relevant taxonomy list for each Category/Subcategory, by generating the keywords (unigrams, bigrams and trigrams) for each Category/Subcategory.

2. When the next feedback email comes in, how to classify it into a Category and Subcategory (there are 100+ categories), based on the keyword taxonomy list generated in the step above.

3. Or, if my understanding and approach are wrong, please advise me on other possible options.

I have gone through the material available on the Internet (I could only find classification of text into two classes, never more than two) - but I am unable to proceed further. I am new to text mining in R, so please bear with me if this sounds naive.

Any help or a starting point would be great.

Best Answer

I'll give you a short answer here since your question is a little vague.

The code below will quickly create one bigram TDM per category.

library(RWeka)
library(SnowballC)

# Build an 'nvalue'-gram term-document matrix for the data frame passed in.
# (The original version read the global step1 inside the function, so by()
# produced the same TDM for every category; passing the subset in fixes that.)
makeNgramFeature <- function(dat, nvalue){

  tokenize <- function(x) NGramTokenizer(x, Weka_control(min = nvalue, max = nvalue))

  m <- list(ID = "FNUMBER", Content = "Description")
  myReader <- readTabular(mapping = m)
  txt <- Corpus(DataframeSource(dat), readerControl = list(reader = myReader))

  txt <- tm_map(txt, content_transformer(tolower))
  txt <- tm_map(txt, removeNumbers)
  txt <- tm_map(txt, removePunctuation)
  txt <- tm_map(txt, stripWhitespace)
  txt <- tm_map(txt, removeWords, stopwords("english"))
  txt <- tm_map(txt, stemDocument)

  tdm <- TermDocumentMatrix(txt,
                            control = list(removePunctuation = TRUE,
                                           stopwords = TRUE,
                                           tokenize = tokenize))
  return(tdm)
}

# 'all' is a list holding one bigram TDM per category. You could cascade by()
# calls, or build a unique list of category/sub-category pairs to analyse.
all <- by(step1, INDICES = step1$CATEGORY, FUN = function(x) makeNgramFeature(x, 2))

The resulting list 'all' is a little ugly. You can run names(all) to see the categories. I'm sure there is a cleaner way to do all of this, but hopefully this sets you off on one of the many right paths...
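
Going one step further than the answer (this sketch is my own illustrative assumption, not part of the accepted answer: scoreEmail, the unigram rebuild and the cut-off of 25 terms are all hypothetical choices), the per-category TDMs can be turned into keyword lists and a new email scored by naive keyword overlap:

# Rebuild one unigram TDM per category and keep the 25 most frequent stemmed
# terms of each as that category's keyword list (both choices are arbitrary).
uni <- by(step1, INDICES = step1$CATEGORY, FUN = function(x) makeNgramFeature(x, 1))
topTerms <- lapply(uni, function(tdm) {
  freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
  names(head(freq, 25))
})

# Hypothetical scorer: clean and stem the new email the same way as the
# training data, then count how many of each category's keywords it contains.
scoreEmail <- function(email, topTerms) {
  words <- unlist(strsplit(removePunctuation(removeNumbers(tolower(email))), "\\s+"))
  words <- stemDocument(words[!words %in% stopwords("english")])
  sapply(topTerms, function(terms) sum(terms %in% words))
}

scores <- scoreEmail("The brake lights were not working on the bus", topTerms)
names(which.max(scores))  # category with the highest keyword overlap

For 100+ categories a proper supervised classifier trained on the document-term matrix (e.g. multinomial Naive Bayes or an SVM) will normally beat raw keyword overlap, but the mechanics of the routing step are the same.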

On r - Text classification in R, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/22292150/
