
elasticsearch - ElasticSearch raises "total fields limit exceeded" when trying to bulk index

Reposted. Author: 行者123. Updated: 2023-12-02 23:56:31

I am using the following Python code:

from elasticsearch import helpers, Elasticsearch
import csv

es = Elasticsearch(hosts="localhost:9200/")

with open('data.csv') as f:
    reader = csv.DictReader(f)
    helpers.bulk(es, reader, index='my-index', doc_type='my-type')
data.csv is a CSV with 5004 headers and 2 million rows (len(reader.fieldnames) = 5004).

When I run this code, I get:
[2018-10-30T12:20:59,448][DEBUG][o.e.a.b.TransportShardBulkAction] [my-index][3] failed to execute bulk item (index) BulkShardRequest [[my-index][3]] containing [101] requests    
java.lang.IllegalArgumentException: Limit of total fields [5500] in index [my-index] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:580) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:463) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:355) ~[elasticsearch-6.4.2.jar:6.4.2]
...

My index settings:
{
  "my-index": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
            "limit": "5500"
          }
        },
        "number_of_shards": "5",
        "provided_name": "my-index",
        "creation_date": "1540894469635",
        "number_of_replicas": "1",
        "uuid": "wl1k8NZRR7GUwfMCgwpPMQ",
        "version": {
          "created": "6040299"
        }
      }
    }
  }
}

I really don't understand this; everything seems to be in place, and it should just work.

Best Answer

You are hitting Elasticsearch's default "mapping explosion" protection.

It seems you already know which setting governs this, since your limit is 5500 rather than the default 1000. Have you checked whether the mappings already present in the index actually match the CSV structure? It looks like the union of the fields already mapped in the index and the CSV headers exceeds 5500. Note also that dynamic mapping typically maps each new string column both as text and as a .keyword multi-field, and each sub-field counts toward the limit, so 5004 CSV columns can easily blow past 5500 on their own.
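One way to check is to count the fields in the index's current mapping and compare that against the limit. Below is a minimal sketch; the helper count_mapped_fields and the sample mapping fragment are hypothetical illustrations, and the count is only an approximation of how Elasticsearch tallies fields against index.mapping.total_fields.limit:

```python
def count_mapped_fields(properties):
    """Recursively count fields in the "properties" dict of an
    Elasticsearch mapping, including nested objects and multi-fields.
    Rough approximation of what counts toward
    index.mapping.total_fields.limit."""
    total = 0
    for _name, spec in properties.items():
        total += 1  # the field itself (object fields count too)
        total += count_mapped_fields(spec.get("properties", {}))
        total += len(spec.get("fields", {}))  # multi-fields like .keyword
    return total

# Hypothetical mapping fragment, shaped like what
# es.indices.get_mapping(index='my-index') returns under "properties":
sample = {
    "name": {"type": "text",
             "fields": {"keyword": {"type": "keyword"}}},
    "address": {"properties": {"city": {"type": "text"}}},
}

print(count_mapped_fields(sample))  # 4: name, name.keyword, address, address.city

# Against a live cluster you could then compare this count with the CSV
# headers and, if raising the limit is acceptable, update the setting
# (assumption: elasticsearch-py 6.x client API):
# es.indices.put_settings(index="my-index",
#                         body={"index.mapping.total_fields.limit": 11000})
```

Raising the limit is a workaround, not a fix: with thousands of dynamically mapped fields, it is usually worth disabling dynamic .keyword multi-fields or defining an explicit mapping before bulk indexing.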

Regarding "elasticsearch - ElasticSearch raises total fields limit exceeded when trying to bulk index", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53062083/
