
java - Extract text with Apache Tika, then get frequently occurring words after removing stop words


I have extracted the text of a sample.pdf file using Tika and Lucene. I am trying to remove stop words and then get word counts for the remaining words (excluding the stop words) from the text.

My sample.pdf contains:

This is java related information it contains java prg.

Below is my code:

String[] stopwords ={"a", "about", "above", "above", "across", "after", "afterwards", "again", "against", "all", "almost", 
"alone", "along", "already", "also","although","always","am","among", "amongst", "amoungst", "amount", "an", "and",
"another", "any","anyhow","anyone","anything","anyway", "anywhere", "are", "around", "as", "at", "back","be","became",
"because","become","becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides",
"between", "beyond", "bill", "both", "bottom","but", "by", "call", "can", "cannot", "cant", "co", "con", "could", "couldnt",
"cry", "de", "describe", "detail", "do", "done", "down", "due", "during", "each", "eg", "eight", "either", "eleven","else",
"elsewhere", "empty", "enough", "etc", "even", "ever", "every", "everyone", "everything", "everywhere", "except", "few",
"fifteen", "fify", "fill", "find", "fire", "first", "five", "for", "former", "formerly", "forty", "found", "four", "from",
"front", "full", "further", "get", "give", "go", "had", "has", "hasnt",
"have", "he", "hence", "her", "here", "hereafter", "hereby", "herein", "hereupon", "hers", "herself",
"him", "himself", "his", "how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest", "into",
"is", "it", "its", "itself", "keep", "last", "latter", "latterly", "least", "less", "ltd", "made", "many",
"may", "me", "meanwhile", "might", "mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must",
"my", "myself", "name", "namely", "neither", "never", "nevertheless", "next", "nine", "no", "nobody", "none",
"noone", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto",
"or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps",
"please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems", "serious", "several", "she",
"should", "show", "side", "since", "sincere", "six", "sixty", "so", "some", "somehow", "someone", "something",
"sometime", "sometimes", "somewhere", "still", "such", "system", "take", "ten", "than", "that", "the", "their",
"them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon",
"these", "they", "thickv", "thin", "third", "this", "those", "though", "three", "through", "throughout", "thru",
"thus", "to", "together", "too", "top", "toward", "towards", "twelve", "twenty", "two", "un", "under", "until",
"up", "upon", "us", "very", "via", "was", "we", "well", "were", "what", "whatever", "when", "whence", "whenever",
"where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while",
"whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet",
"you", "your", "yours", "yourself", "yourselves","1","2","3","4","5","6","7","8","9","10","1.","2.","3.","4.","5.","6.","11",
"7.","8.","9.","12","13","14","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",
"terms","CONDITIONS","conditions","values","interested.","care","sure",".","!","@","#","$","%","^","&","*","(",")","{","}","[","]",":",";",",","<",".",">","/","?","_","-","+","=",
"a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z",
"contact","grounds","buyers","tried","said,","plan","value","principle.","forces","sent:","is,","was","like",
"discussion","tmus","diffrent.","layout","area.","thanks","thankyou","hello","bye","rise","fell","fall","psqft.","http://","km","miles"};

Map map = new TreeMap();
Integer ONE = new Integer(1); // referenced below but missing from the original snippet
File file1 = new File("C://sample.pdf");
InputStream input = new FileInputStream(file1);
Metadata metadata = new Metadata();
BodyContentHandler handler = new BodyContentHandler(10*1024*1024);
AutoDetectParser parser = new AutoDetectParser();
parser.parse(input, handler, metadata);
Document doc = new Document();
doc.add(new Field("contents",handler.toString(),Field.Store.NO,Field.Index.ANALYZED));
String result = doc.toString();
String[] res=result.split(" ");
for (int i=0;i<res.length;i++)
{
int flag=1;
String s1=res[i].toLowerCase();

for(int j=0;j<stopwords.length;j++){
if(s1.equals(stopwords[j]))
{
flag=0;
}
if(flag!=0)
{
if (s1.length() > 0) {

Integer frequency = (Integer) map.get(s1);
if (frequency == null) {
frequency = ONE;
} else {

int value = frequency.intValue();
frequency = new Integer(value + 1);
}
map.put(s1, frequency);
}
}
}
}
input.close();
System.out.println("Finalresult:"+map);
}

The output I get is incorrect:

Finalresult:{contains=456, document<indexed,tokenized<contents:this=456, information=456, is=139, it=140, java=912, prg=456, related=456}

I should be getting the following output:

information=1, java=2, prg=1, related=1

Can you suggest how I can get the desired output? Thanks.

Best Answer

This looks like a good example of why consistent code formatting matters. Proper indentation would probably have made the cause of the problem much more obvious to you.

for (int i = 0; i < res.length; i++)
{
    int flag = 1;
    String s1 = res[i].toLowerCase();

    for (int j = 0; j < stopwords.length; j++)
    {
        if (s1.equals(stopwords[j]))
        {
            flag = 0;
        }
        // -------- We are still looping through stopwords! This for loop should be closed here! --------
        if (flag != 0)
        {
            if (s1.length() > 0)
            {
                // Now this is going to add to the map for every entry in stopwords, until we find a match!
                Integer frequency = (Integer) map.get(s1);
                if (frequency == null)
                {
                    frequency = ONE;
                }
                else
                {
                    int value = frequency.intValue();
                    frequency = new Integer(value + 1);
                }
                map.put(s1, frequency);
            }
        }
    }
}

As we can see, your stopwords array has 456 entries. Because the counting code sits inside the stop-word loop, every word that never matches a stop word is incremented once per entry, which is why contains, information, prg, and related each show 456 and java (which occurs twice) shows 912; a stop word such as is or it is incremented once for every entry checked before its match is found. All of the behavior you are seeing comes from that missing }.
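For completeness, here is a minimal corrected sketch of the counting loop, reusing the stopwords array and the handler from the Tika parse in the question. It closes the stop-word check before any counting happens, and loads the stop words into a HashSet so each token needs a single lookup instead of a scan of all 456 entries. Two further changes beyond the missing-brace fix are assumptions on my part: it splits handler.toString() rather than doc.toString() (Document.toString() is what injected document<indexed,tokenized<contents: into your output), and it trims surrounding punctuation so that "prg." is counted as "prg".

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// ... same Tika setup as in the question, up to parser.parse(input, handler, metadata) ...

// Build a set once so each token needs one O(1) lookup, not a scan of the array.
Set<String> stopwordSet = new HashSet<String>(Arrays.asList(stopwords));
Map<String, Integer> counts = new TreeMap<String, Integer>();

// Split the extracted text itself, not doc.toString(), on any whitespace.
String[] tokens = handler.toString().split("\\s+");

for (String token : tokens) {
    // Lower-case and strip surrounding punctuation so "prg." counts as "prg".
    String s1 = token.toLowerCase().replaceAll("^\\W+|\\W+$", "");
    if (s1.isEmpty() || stopwordSet.contains(s1)) {
        continue; // stop word (or empty after trimming): do not count it
    }
    Integer frequency = counts.get(s1);
    counts.put(s1, frequency == null ? 1 : frequency + 1);
}
System.out.println("Finalresult:" + counts);

With the sample sentence this prints Finalresult:{contains=1, information=1, java=2, prg=1, related=1}. Note that contains is counted as well, because it does not appear in your stop-word list.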

For this question on extracting text with Apache Tika and counting frequent words after removing stop words, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/17442341/
