
Lucene Highlighter with a stemming analyzer

Reposted by 行者123 · Updated 2023-12-04

I'm using Lucene's Highlighter class to highlight fragments of text that match a search result, and it works well. I'd like to switch from searching with the StandardAnalyzer to the EnglishAnalyzer, which performs stemming.

The search results are fine, but now the highlighter doesn't always find a match. Here's an example of what I'm looking at:

document field text 1: Everyone likes goats.

document field text 2: I have a goat that eats everything.

Searching for "goat" with the EnglishAnalyzer matches both documents, but the highlighter only finds a matching fragment in document 2. Is there a way to get the highlighter to return data for both documents?

I understand that the characters of the tokens differ, but the same stemmed token is still present in both documents, so it seems reasonable to simply highlight whatever original text occupies that token's position.
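That intuition (highlight whatever original text sits at the offsets of a matching stemmed token) can be sketched in plain Java. This is a hypothetical illustration, not a Lucene API: the toy `stem` method below just strips a trailing "s" as a stand-in for the EnglishAnalyzer's Porter stemmer, and the offsets come from a regex tokenizer rather than a real analyzer's token stream.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OffsetHighlightSketch {
    // Toy stand-in for a real stemmer: strip a trailing "s".
    static String stem(String token) {
        return token.endsWith("s") ? token.substring(0, token.length() - 1) : token;
    }

    // Compare tokens in stemmed form, but bracket the ORIGINAL text
    // using each matching token's start/end offsets.
    static String highlight(String text, String queryTerm) {
        String target = stem(queryTerm.toLowerCase());
        StringBuilder out = new StringBuilder();
        Matcher m = Pattern.compile("\\w+").matcher(text);
        int last = 0;
        while (m.find()) {
            if (stem(m.group().toLowerCase()).equals(target)) {
                out.append(text, last, m.start())
                   .append('[').append(text, m.start(), m.end()).append(']');
                last = m.end();
            }
        }
        return out.append(text.substring(last)).toString();
    }

    public static void main(String[] args) {
        // Both the exact term and the inflected form get highlighted,
        // because matching happens on stems while offsets point at the source.
        System.out.println(highlight("Everyone likes goats.", "goat"));
        System.out.println(highlight("I have a goat that eats everything.", "goat"));
    }
}
```

This is exactly the trick that offset-aware highlighters rely on: match in stemmed token space, then map back to the original text via stored offsets.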

In case it helps, this is Lucene 3.5.

Best answer

I found a solution to this: I switched from the Highlighter class to FastVectorHighlighter. It looks like I'll also pick up a speed improvement (at the cost of storing term vector data). For the benefit of anyone who runs into this question later, here's a unit test showing how it all works together:

package com.sample.index;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.vectorhighlight.*;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import static junit.framework.Assert.assertEquals;

public class TestIndexStuff {
    public static final String FIELD_NORMAL = "normal";
    public static final String[] PRE_TAGS = new String[]{"["};
    public static final String[] POST_TAGS = new String[]{"]"};
    private IndexSearcher searcher;
    private Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_35);

    @Before
    public void init() throws IOException {
        RAMDirectory idx = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35, analyzer);

        IndexWriter writer = new IndexWriter(idx, config);
        addDocs(writer);
        writer.close();

        searcher = new IndexSearcher(IndexReader.open(idx));
    }

    private void addDocs(IndexWriter writer) throws IOException {
        for (String text : new String[] {
                "Pretty much everyone likes goats.",
                "I have a goat that eats everything.",
                "goats goats goats goats goats"}) {
            Document doc = new Document();
            // FastVectorHighlighter requires term vectors with positions and offsets.
            doc.add(new Field(FIELD_NORMAL, text, Field.Store.YES,
                    Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            writer.addDocument(doc);
        }
    }

    private FastVectorHighlighter makeHighlighter() {
        FragListBuilder fragListBuilder = new SimpleFragListBuilder(200);
        FragmentsBuilder fragmentBuilder = new SimpleFragmentsBuilder(PRE_TAGS, POST_TAGS);
        return new FastVectorHighlighter(true, true, fragListBuilder, fragmentBuilder);
    }

    @Test
    public void highlight() throws ParseException, IOException {
        Query query = new QueryParser(Version.LUCENE_35, FIELD_NORMAL, analyzer)
                .parse("goat");
        FastVectorHighlighter highlighter = makeHighlighter();
        FieldQuery fieldQuery = highlighter.getFieldQuery(query);

        TopDocs topDocs = searcher.search(query, 10);
        List<String> fragments = new ArrayList<String>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            fragments.add(highlighter.getBestFragment(fieldQuery, searcher.getIndexReader(),
                    scoreDoc.doc, FIELD_NORMAL, 10000));
        }

        // Both the exact term "goat" and the stemmed matches on "goats" are highlighted.
        assertEquals(3, fragments.size());
        assertEquals("[goats] [goats] [goats] [goats] [goats]", fragments.get(0).trim());
        assertEquals("Pretty much everyone likes [goats].", fragments.get(1).trim());
        assertEquals("I have a [goat] that eats everything.", fragments.get(2).trim());
    }
}

Regarding Lucene Highlighter with a stemming analyzer, the original question is on Stack Overflow: https://stackoverflow.com/questions/10339704/
