
java - Elasticsearch error: Custom Analyzer [custom_analyzer] failed to find tokenizer under name [my_tokenizer]

Reposted · Author: 行者123 · Updated: 2023-12-02 02:04:40

I am trying to apply a field mapping that uses my custom_analyzer and its tokenizer, but I am getting an error.

Here is the error Kibana returns when I apply the mapping:

Custom Analyzer [custom_analyzer] failed to find tokenizer under name [my_tokenizer]

Here are my mapping details:

PUT attach_local
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "type": "custom",
          "tokenizer": "my_tokenizer",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "tokenizer": {
    "my_tokenizer": {
      "type": "ngram",
      "min_gram": 3,
      "max_gram": 3,
      "token_chars": [
        "letter",
        "digit"
      ]
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "attachment": {
          "properties": {
            "content": {
              "type": "text",
              "analyzer": "custom_analyzer"
            },
            "content_length": {
              "type": "long"
            },
            "content_type": {
              "type": "text"
            },
            "language": {
              "type": "text"
            }
          }
        },
        "resume": {
          "type": "text"
        }
      }
    }
  }
}

Best Answer

Indenting your JSON properly matters: it makes mistakes like this easy to spot. You will see that your tokenizer is not located inside the analysis section; it sits at the top level of the request body, where Elasticsearch cannot find it. Here is the correct definition:

{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "type": "custom",
          "tokenizer": "my_tokenizer",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "attachment": {
          "properties": {
            "content": {
              "type": "text",
              "analyzer": "custom_analyzer"
            },
            "content_length": {
              "type": "long"
            },
            "content_type": {
              "type": "text"
            },
            "language": {
              "type": "text"
            }
          }
        },
        "resume": {
          "type": "text"
        }
      }
    }
  }
}
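Once the index has been recreated with the corrected settings, you can confirm the tokenizer is wired up by calling the _analyze API from the Kibana console (the sample text below is just an illustration):

POST attach_local/_analyze
{
  "analyzer": "custom_analyzer",
  "text": "<p>Hello 123</p>"
}

With these settings, html_strip should remove the <p> tags first, the ngram tokenizer should then emit 3-character grams from each run of letters or digits (e.g. hel, ell, llo, 123 after lowercasing), and asciifolding would normalize any accented characters. If the tokenizer were still misplaced, this request would fail with the same "failed to find tokenizer" error at index-creation time.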

Regarding "java - Elasticsearch error: Custom Analyzer [custom_analyzer] failed to find tokenizer under name [my_tokenizer]", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/50997561/
