
curl - How to query Logstash via curl and return only specific fields

Reposted · Author: 行者123 · Updated: 2023-12-03 00:57:12

Right now I am using a "match_all" query to look at the data Logstash is processing. The output I get back contains every field that is part of the event. Here is my query:

{
  "query": {
    "match_all" : { }
  },
  "size": 1,
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ]
}
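The same request body can be built programmatically and handed to curl. The sketch below constructs it in Python; the host, port, and index pattern in the commented curl line are assumptions and will differ in your setup:

```python
import json

# Build the match_all query, sorted by @timestamp descending,
# keeping only the single newest document.
query = {
    "query": {"match_all": {}},
    "size": 1,
    "sort": [{"@timestamp": {"order": "desc"}}],
}

body = json.dumps(query)

# Equivalent curl invocation (host and index name are assumptions):
#   curl -s -H 'Content-Type: application/json' \
#        'http://localhost:9200/filebeat-*/_search' -d "$body"
print(body)
```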

As you can see, I am also sorting my results so that I always get the most recently emitted one.

Here is a sample of my output:
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 15768,
    "max_score" : null,
    "hits" : [
      {
        "_index" : "filebeat-2017.02.24",
        "_type" : "bro",
        "_id" : "AVpx-pFtiEtl3Zqhg8tF",
        "_score" : null,
        "_source" : {
          "resp_pkts" : 0,
          "source" : "/usr/local/bro/logs/current/conn.log",
          "type" : "bro",
          "id_orig_p" : 56058,
          "duration" : 848.388112,
          "local_resp" : true,
          "uid" : "CPndOf4NNf9CzTILFi",
          "id_orig_h" : "192.168.137.130",
          "conn_state" : "OTH",
          "@version" : "1",
          "beat" : {
            "hostname" : "localhost.localdomain",
            "name" : "localhost.localdomain",
            "version" : "5.2.0"
          },
          "host" : "localhost.localdomain",
          "id_resp_h" : "192.168.137.141",
          "id_resp_p" : 22,
          "resp_ip_bytes" : 0,
          "offset" : 115612,
          "orig_bytes" : 32052,
          "local_orig" : true,
          "input_type" : "log",
          "orig_ip_bytes" : 102980,
          "orig_pkts" : 1364,
          "missed_bytes" : 0,
          "history" : "DcA",
          "tunnel_parents" : [ ],
          "message" : "{\"ts\":1487969779.653504,\"uid\":\"CPndOf4NNf9CzTILFi\",\"id_orig_h\":\"192.168.137.130\",\"id_orig_p\":56058,\"id_resp_h\":\"192.168.137.141\",\"id_resp_p\":22,\"proto\":\"tcp\",\"duration\":848.388112,\"orig_bytes\":32052,\"resp_bytes\":0,\"conn_state\":\"OTH\",\"local_orig\":true,\"local_resp\":true,\"missed_bytes\":0,\"history\":\"DcA\",\"orig_pkts\":1364,\"orig_ip_bytes\":102980,\"resp_pkts\":0,\"resp_ip_bytes\":0,\"tunnel_parents\":[]}",
          "tags" : [
            "beats_input_codec_plain_applied"
          ],
          "@timestamp" : "2017-02-24T21:15:29.414Z",
          "resp_bytes" : 0,
          "proto" : "tcp",
          "fields" : {
            "sensorType" : "networksensor"
          },
          "ts" : 1.487969779653504E9
        },
        "sort" : [
          1487970929414
        ]
      }
    ]
  }
}

As you can see, that is a lot of output to process in an external application (written in C#, so garbage collection on all those strings is significant), and I simply do not need most of it.

My question is: how can I set up the query so that I only get back the fields I need?

Best Answer

For 5.x, there is a change that lets you do _source filtering. The documentation is here, and it looks like this:

{
  "query": {
    "match_all" : { }
  },
  "size": 1,
  "_source": ["a","b"],
  ...

The result looks like:
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "xxx",
        "_type" : "xxx",
        "_id" : "xxx",
        "_score" : 1.0,
        "_source" : {
          "a" : 1,
          "b" : "2"
        }
      }
    ]
  }
}
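The saving here is that _source filtering happens on the server, so only the whitelisted keys ever cross the wire. The snippet below is purely a client-side illustration of that whitelist logic (not what Elasticsearch actually runs), using a _source shaped like the examples above:

```python
def filter_source(source, keep):
    """Keep only the whitelisted top-level keys, mimicking "_source" filtering."""
    return {k: v for k, v in source.items() if k in keep}

# A trimmed-down version of the event from the question (illustrative values).
full_source = {
    "proto": "tcp",
    "duration": 848.388112,
    "resp_pkts": 0,
    "message": "...a large raw log line...",
}

trimmed = filter_source(full_source, keep=["proto", "duration"])
print(trimmed)  # only the two whitelisted keys survive
```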

For versions before 5, you can use the fields parameter instead:

Your query can include "fields": ["field1","field2",...] at its root level. The format of what comes back is different, but it works.
{
  "query": {
    "match_all" : { }
  },
  "size": 1,
  "fields": ["a","b"],
  ...

That will produce output like this:
{
  "took": 9,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 2077,
    "max_score": 1,
    "hits": [
      {
        "_index": "xxx",
        "_type": "xxx",
        "_id": "xxxx",
        "_score": 1,
        "fields": {
          "a": [
            0
          ],
          "b": [
            "xyz"
          ]
        }
      }
    ]
  }
}

Fields are always arrays (since the 1.0 API), and there is no way to change that, because Elasticsearch is inherently multi-value aware.
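Because every value under fields comes back wrapped in an array, a client typically unwraps single-valued entries before use. A minimal sketch (the input mirrors the "fields" object in the example output above; the multi-valued "tags" entry is a hypothetical addition):

```python
def unwrap_fields(fields):
    """Unwrap single-element lists from the pre-5.x "fields" response,
    leaving genuinely multi-valued fields as lists."""
    return {
        k: v[0] if isinstance(v, list) and len(v) == 1 else v
        for k, v in fields.items()
    }

hit_fields = {"a": [0], "b": ["xyz"], "tags": ["x", "y"]}
print(unwrap_fields(hit_fields))
```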

Regarding "curl - How to query Logstash via curl and return only specific fields", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42448454/
