
python - Partial search returns zero hits


I have been using Elasticsearch (v6.1.3) successfully for exact searches. But when I try a partial or case-insensitive match (for example: {"query": {"match": {"demodata": "Hello"}}} or {"query": {"match": {"demodata": "ell"}}}), I get zero hits and I don't know why. I set up the analyzer based on this hint: Partial search

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Index body: index-time analyzer is a keyword tokenizer plus an ngram token
# filter; search-time analyzer is just the lowercased keyword.
settings = {
    "mappings": {
        "my-type": {
            "properties": {
                "demodata": {
                    "type": "string",
                    "search_analyzer": "search_ngram",
                    "index_analyzer": "index_ngram"
                }
            }
        }
    },
    "settings": {
        "analysis": {
            "filter": {
                "ngram_filter": {
                    "type": "ngram",
                    "min_gram": 3,
                    "max_gram": 8
                }
            },
            "analyzer": {
                "index_ngram": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": ["ngram_filter", "lowercase"]
                },
                "search_ngram": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": "lowercase"
                }
            }
        }
    }
}

# ignore=400 suppresses errors such as "index already exists".
es.indices.create(index="my-index", body=settings, ignore=400)

docs = [
    {"demodata": "hello"},
    {"demodata": "hi"},
    {"demodata": "bye"},
    {"demodata": "HelLo WoRld!"}
]
for doc in docs:
    res = es.index(index="my-index", doc_type="my-type", body=doc)

res = es.search(index="my-index", body={"query": {"match": {"demodata": "Hello"}}})
print("Got %d Hits:" % res["hits"]["total"])
print(res)
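
As a sanity check, the _analyze API can show what tokens the custom analyzer actually emits; below is a minimal sketch, assuming the index creation above succeeded and the index_ngram analyzer was registered (the ignore=400 flag would hide a creation failure):

# Inspect the tokens produced by the "index_ngram" analyzer for a sample value.
tokens = es.indices.analyze(
    index="my-index",
    body={"analyzer": "index_ngram", "text": "HelLo WoRld!"}
)
print([t["token"] for t in tokens["tokens"]])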

I updated the code based on Piotr Pradzynski's input, but it still doesn't work!!!

from elasticsearch import Elasticsearch

es = Elasticsearch()

if not es.indices.exists(index="my-index"):
    customset = {
        "settings": {
            "analysis": {
                "analyzer": {
                    "my_analyzer": {
                        "tokenizer": "my_tokenizer"
                    }
                },
                "tokenizer": {
                    "my_tokenizer": {
                        "type": "ngram",
                        "min_gram": 3,
                        "max_gram": 20,
                        "token_chars": [
                            "letter",
                            "digit"
                        ]
                    }
                }
            }
        }
    }
    es.indices.create(index="my-index", body=customset, ignore=400)

docs = [
    {"demodata": "hELLO"},
    {"demodata": "hi"},
    {"demodata": "bye"},
    {"demodata": "HeLlo WoRld!"},
    {"demodata": "xyz@abc.com"}
]
for doc in docs:
    res = es.index(index="my-index", doc_type="my-type", body=doc)

es.indices.refresh(index="my-index")
res = es.search(
    index="my-index",
    body={"query": {"match": {"demodata": {"query": "ell", "analyzer": "my_analyzer"}}}}
)

#print res
print("Got %d Hits:" % res["hits"]["total"])
print(res)
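
One thing worth checking with this version: the settings define my_analyzer, but no mapping ties it to the demodata field, so the field is most likely mapped dynamically with the standard analyzer and the indexed tokens are never ngrams. A minimal check, assuming the code above has already run:

# Show how "demodata" was actually mapped; with no explicit mapping in
# customset, Elasticsearch falls back to a dynamic (standard-analyzed) field.
mapping = es.indices.get_mapping(index="my-index")
print(mapping["my-index"]["mappings"])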

Best Answer

I think you should use the NGram Tokenizer instead of the NGram Token Filter, and add a multi field that will use this tokenizer.

Something like this:

PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ngram_analyzer": {
          "tokenizer": "ngram_tokenizer",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      },
      "tokenizer": {
        "ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 15,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "my-type": {
      "properties": {
        "demodata": {
          "type": "text",
          "fields": {
            "ngram": {
              "type": "text",
              "analyzer": "ngram_analyzer",
              "search_analyzer": "standard"
            }
          }
        }
      }
    }
  }
}
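
Since the rest of the thread uses the Python client, a rough Python equivalent of the PUT request above might look like the sketch below (deleting and recreating the index is assumed here so the new mapping actually takes effect; adjust names as needed):

from elasticsearch import Elasticsearch

es = Elasticsearch()

index_body = {
    "settings": {
        "analysis": {
            "analyzer": {
                "ngram_analyzer": {
                    "tokenizer": "ngram_tokenizer",
                    "filter": ["lowercase", "asciifolding"]
                }
            },
            "tokenizer": {
                "ngram_tokenizer": {
                    "type": "ngram",
                    "min_gram": 3,
                    "max_gram": 15,
                    "token_chars": ["letter", "digit"]
                }
            }
        }
    },
    "mappings": {
        "my-type": {
            "properties": {
                "demodata": {
                    "type": "text",
                    "fields": {
                        "ngram": {
                            "type": "text",
                            "analyzer": "ngram_analyzer",
                            "search_analyzer": "standard"
                        }
                    }
                }
            }
        }
    }
}

# Drop any previously created index so the new mapping applies, then recreate it.
es.indices.delete(index="my-index", ignore=[400, 404])
es.indices.create(index="my-index", body=index_body)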

Then you have to use the added multi field demodata.ngram in your search:

res = es.search(index="my-index", body={"query": {"match": {"demodata.ngram": "Hello"}}})
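
With that in place, both the partial term and the mixed-case term should be matched against the lowercased ngrams; a small usage sketch, assuming the question's documents have been re-indexed into the new index:

# Re-indexing of the sample documents is assumed; refresh so they are searchable.
es.indices.refresh(index="my-index")
for term in ["ell", "Hello"]:
    res = es.search(
        index="my-index",
        body={"query": {"match": {"demodata.ngram": term}}}
    )
    print("%s -> %d hits" % (term, res["hits"]["total"]))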

Regarding "python - Partial search returns zero hits", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/48675586/
