java - LUCENE: how to get all terms of a given document by docNr, without stored data or term vectors (LUKE is able to show this, how?)


In my code example I create three documents in a Lucene index. Two of them do not store the LASTNAME field but do store its term vector; the third stores neither. With LUKE I am able to iterate over all the terms of that field (the last names). In my code example I iterate over the TermFreqVectors, which works fine for the documents that have stored term vectors.

How can I get at all those unstored terms? How does Luke do it?

My original problem is that I want to extend a large index (60 GB, almost 100 fields) with one more field without recreating the index from scratch, because with our database setup that would cost roughly 40 server-days of parallel computation. Reading all the data out of the index and adding the new field to every stored document, by contrast, is very fast.
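Roughly, that fast copy-and-extend pass would look like the sketch below (Lucene 3.x API; the paths, the NEWFIELD name and computeValue are placeholders). The catch, and the reason for this question, is that reader.document(i) only returns stored fields:

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.NIOFSDirectory;
import org.apache.lucene.util.Version;

public class AddFieldByCopy {

    public static void main(String[] args) throws IOException {
        Directory src = NIOFSDirectory.open(new File("/tmp/old_index"));
        Directory dst = NIOFSDirectory.open(new File("/tmp/new_index"));

        IndexReader reader = IndexReader.open(src, true); // read-only
        IndexWriter writer = new IndexWriter(dst, new IndexWriterConfig(
                Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36)));

        for (int i = 0; i < reader.maxDoc(); i++) {
            if (reader.isDeleted(i)) {
                continue; // skip deleted slots
            }
            // document(i) returns *stored* fields only -- unstored fields
            // (like LASTNAME in the test below) are silently lost here.
            Document doc = reader.document(i);
            doc.add(new Field("NEWFIELD", computeValue(doc),
                    Field.Store.YES, Field.Index.ANALYZED));
            writer.addDocument(doc);
        }
        writer.close();
        reader.close();
    }

    // Placeholder for however the new field's value is derived.
    private static String computeValue(Document doc) {
        return "...";
    }
}

My test code: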

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.RandomIndexWriter;
import org.apache.lucene.index.TermFreqVector;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.NIOFSDirectory;
import org.apache.lucene.util.LuceneTestCase;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;


public class TestDocTerms extends LuceneTestCase {

    public void testDocTerms() throws IOException, ParseException {
        Analyzer analyzer = new MockAnalyzer(random);

        String fieldF = "FIRSTNAME";
        String fieldL = "LASTNAME";

        // To store an index on disk, use this instead:
        Directory directory = NIOFSDirectory.open(new File("/tmp/_index_tester/"));
        RandomIndexWriter iwriter = new RandomIndexWriter(random, directory, analyzer);
        iwriter.w.setInfoStream(VERBOSE ? System.out : null);
        Document doc = new Document();
        doc.add(newField(fieldF, "Alex", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(newField(fieldL, "Miller", Field.Store.NO, Field.Index.ANALYZED, Field.TermVector.YES));
        iwriter.addDocument(doc);
        doc = new Document();
        doc.add(newField(fieldF, "Chris", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(newField(fieldL, "Smith", Field.Store.NO, Field.Index.ANALYZED)); // no term vector
        iwriter.addDocument(doc);
        doc = new Document();
        doc.add(newField(fieldF, "Alex", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(newField(fieldL, "Beatle", Field.Store.NO, Field.Index.ANALYZED, Field.TermVector.YES));
        iwriter.addDocument(doc);
        iwriter.close();

        // Now search the index:
        IndexSearcher isearcher = new IndexSearcher(directory, true); // read-only=true
        QueryParser parser = new QueryParser(TEST_VERSION_CURRENT, fieldF, analyzer);
        Query query = parser.parse(fieldF + ":" + "Alex");
        TopDocs hits = isearcher.search(query, null, 2);
        assertEquals(2, hits.totalHits);
        // Iterate through the results:
        for (int i = 0; i < hits.scoreDocs.length; i++) {
            Document hitDoc = isearcher.doc(hits.scoreDocs[i].doc);
            assertEquals("Alex", hitDoc.get(fieldF));
            System.out.println("query for:" + query.toString() + " with this results firstN:" + hitDoc.get(fieldF) + " and lastN:" + hitDoc.get(fieldL));
        }
        parser = new QueryParser(TEST_VERSION_CURRENT, fieldL, analyzer);
        query = parser.parse(fieldL + ":" + "Miller");
        hits = isearcher.search(query, null, 2);
        assertEquals(1, hits.totalHits);
        // Iterate through the results:
        for (int i = 0; i < hits.scoreDocs.length; i++) {
            Document hitDoc = isearcher.doc(hits.scoreDocs[i].doc);
            assertEquals("Alex", hitDoc.get(fieldF));
            System.out.println("query for:" + query.toString() + " with this results firstN:" + hitDoc.get(fieldF) + " and lastN:" + hitDoc.get(fieldL));
        }
        isearcher.close();

        // Examine the terms: getTermFreqVector returns null for the "Smith"
        // document, which was indexed without a term vector.
        IndexReader ireader = IndexReader.open(directory, true); // read-only=true
        int numDocs = ireader.numDocs();

        for (int i = 0; i < numDocs; i++) {
            doc = ireader.document(i); // only stored fields come back here
            System.out.println("docNum:" + i + " with:" + doc.toString());
            TermFreqVector t = ireader.getTermFreqVector(i, fieldL);
            if (t != null) {
                System.out.println("Field:" + fieldL + " contains terms:" + t.toString());
            }
            TermFreqVector[] termFreqVectors = ireader.getTermFreqVectors(i);
            if (termFreqVectors != null) {
                for (TermFreqVector tfv : termFreqVectors) {
                    String[] terms = tfv.getTerms();
                    String field = tfv.getField();
                    System.out.println("Field:" + field + " contains terms:" + Arrays.toString(terms));
                }
            }
        }
        ireader.close();
    }
}

Best Answer

Reconstructing unstored documents is necessarily a best-effort exercise. You generally cannot undo the changes the analyzer made to the values: a lowercasing filter turns "Miller" into the token "miller", and stop words are dropped entirely.
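A small sketch of that lossiness (not from the answer; it assumes Lucene 3.x and StandardAnalyzer rather than the test's MockAnalyzer):

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class AnalysisIsLossy {

    public static void main(String[] args) throws IOException {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
        TokenStream ts = analyzer.tokenStream("LASTNAME",
                new StringReader("The Miller-Smith"));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            // prints "miller" then "smith" -- "The", the casing and the
            // hyphen are gone, so the original value cannot be rebuilt
            System.out.println(term.toString());
        }
        ts.end();
        ts.close();
    }
}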

When term vectors are not available, Luke enumerates the terms associated with the field. This may not respect the order of the terms, or any formatting, but that is probably neither here nor there. I don't know exactly what your newField method does, but I suspect its default is not Field.TermVector.NO.
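As a minimal sketch of that enumeration against the Lucene 3.x API used in the question (the TermDumper name is made up): walk the term dictionary for one field and, for each term, ask which documents contain it.

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;

public class TermDumper {

    /** Prints every term of one field together with the docs containing it. */
    public static void dumpField(IndexReader reader, String field) throws IOException {
        // Position the enumeration at the first term of this field.
        TermEnum te = reader.terms(new Term(field, ""));
        try {
            while (te.term() != null && field.equals(te.term().field())) {
                Term term = te.term();
                TermDocs td = reader.termDocs(term);
                while (td.next()) {
                    System.out.println("doc:" + td.doc() + " term:" + term.text()
                            + " freq:" + td.freq());
                }
                td.close();
                if (!te.next()) {
                    break;
                }
            }
        } finally {
            te.close();
        }
    }
}

Called as dumpField(ireader, fieldL) in the examine-terms loop above, this lists each last name (as analyzed) even for the document without a term vector. To restrict it to a single docNr you would filter on td.doc() inside the inner loop, which is essentially what Luke's reconstructor does across all fields.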

If you want more of the implementation details, I would grab the Luke source code and read org.getopt.luke.DocReconstructor.

About java - LUCENE: how to get all terms of a given document by docNr, without stored data or term vectors (LUKE is able to show this, how?), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17831957/
