
elasticsearch - Monitoring search queries executed in Elasticsearch

Reposted. Author: 行者123. Updated: 2023-12-03 02:25:45

I'm trying to create a pipeline to monitor the queries users execute in Elasticsearch.

I followed this (possibly outdated?) tutorial:

https://www.elastic.co/blog/monitoring-the-search-queries

I successfully installed Logstash, Packetbeat and Kibana. All of these applications are now configured locally and I'm able to access them. I verified that a new index named logstash-2020.04.06-000001 exists in Elasticsearch. I use almost the same configuration as the one given in the tutorial; the only difference is that my IP address is local, so I use 127.0.0.1 instead of 10.255.4.165, simply because the latter didn't work. All applications run on their default ports.

The whole setup seems to work fine, but I can't see any queries being saved anywhere.

Also, I can't create an index pattern in Kibana to track user queries. I can see the index in both the REST API and Kibana Index Management, but it's an empty index without any documents. I've already tried restarting all services after configuring them with systemd.

I'm using Elasticsearch 7.6.2.

How can I fix this?

Are there perhaps other solutions for logging the executed queries?

I don't necessarily need the Kibana visualizations, but I do need to see the user queries in order to tune the relevance scoring of the results.
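Since Kibana visualizations aren't required, one server-side alternative worth noting is Elasticsearch's built-in search slow log, which records queries per index without any packet capture. A sketch (the index name is a placeholder; setting a threshold to 0s logs every query at that level):

```
PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.debug": "0s",
  "index.search.slowlog.threshold.fetch.debug": "0s"
}
```

The queries then appear in the node's slow log file (e.g. under /var/log/elasticsearch/ on a package install), which can be tailed or shipped with Filebeat.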

Packetbeat config file /etc/packetbeat/packetbeat.yml:

# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
  device: any

http:
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [9200]
  send_request: true
  include_body_for: ["application/json", "x-www-form-urlencoded"]

#elasticsearch:
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify and additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  # hosts: ["Localhost:9200"]

### Logstash as output
logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5044"]

output.logstash:
  hosts: ["127.0.0.1:5044"]
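Note that the top-level interfaces:/http:/logstash: layout above comes from the pre-5.0 Packetbeat the tutorial was written for; Packetbeat 7.6.2 expects the packetbeat.* namespaced keys instead, and silently ignores unknown top-level sections. A sketch of the equivalent 7.x configuration, using the same ports and hosts as above:

```yaml
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: http
  # Capture Elasticsearch REST traffic on the default port.
  ports: [9200]
  send_request: true
  include_body_for: ["application/json", "x-www-form-urlencoded"]

# Only one output may be enabled at a time.
output.logstash:
  hosts: ["127.0.0.1:5044"]
```

Running `packetbeat test config -c /etc/packetbeat/packetbeat.yml` validates the file before restarting the service.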

Here is my Packetbeat console output:
user@host:/usr/share/packetbeat$ sudo ./bin/packetbeat -e -c /etc/packetbeat/packetbeat.yml -d "publish"
2020-04-07T15:59:50.855+0200 INFO instance/beat.go:622 Home path: [/usr/share/packetbeat/bin] Config path: [/usr/share/packetbeat/bin] Data path: [/usr/share/packetbeat/bin/data] Logs path: [/usr/share/packetbeat/bin/logs]
2020-04-07T15:59:50.873+0200 INFO instance/beat.go:630 Beat ID: 25a0570e-3395-4a8b-8dcf-38c19560eb44
2020-04-07T15:59:50.897+0200 INFO [api] api/server.go:62 Starting stats endpoint
2020-04-07T15:59:50.906+0200 INFO [api] api/server.go:64 Metrics endpoint listening on: 127.0.0.1:5066 (configured: localhost)
2020-04-07T15:59:50.907+0200 INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:958 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/packetbeat/bin", "data": "/usr/share/packetbeat/bin/data", "home": "/usr/share/packetbeat/bin", "logs": "/usr/share/packetbeat/bin/logs"}, "type": "packetbeat", "uuid": "25a0570e-3395-4a8b-8dcf-38c19560eb44"}}}
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:967 Build info {"system_info": {"build": {"commit": "d57bcf8684602e15000d65b75afcd110e2b12b59", "libbeat": "7.6.2", "time": "2020-03-26T05:09:32.000Z", "version": "7.6.2"}}}
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:970 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.13.8"}}}
2020-04-07T15:59:50.908+0200 INFO [beat] instance/beat.go:974 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-07T11:24:50+02:00","containerized":false,"name":"host","ip":["127.0.0.1/8","::1/128","192.168.1.11/24","fe80::dacb:8aff:fe80:d3f5/64","172.17.0.1/16"],"kernel_version":"4.9.0-12-amd64","mac":["XXXXXXXXXXXXXXX"],"os":{"family":"debian","platform":"debian","name":"Debian GNU/Linux","version":"9 (stretch)","major":9,"minor":0,"patch":0,"codename":"stretch"},"timezone":"CEST","timezone_offset_sec":7200,"id":"414bf25d70c54332b8cf4d2a82ee0108"}}}
2020-04-07T15:59:50.908+0200 INFO [beat] instance/beat.go:1003 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/usr/share/packetbeat", "exe": "/usr/share/packetbeat/bin/packetbeat", "name": "packetbeat", "pid": 9679, "ppid": 9678, "seccomp": {"mode":"filter"}, "start_time": "2020-04-07T15:59:50.259+0200"}}}
2020-04-07T15:59:50.908+0200 INFO instance/beat.go:298 Setup Beat: packetbeat; Version: 7.6.2
2020-04-07T15:59:50.908+0200 INFO [publisher] pipeline/module.go:110 Beat name: host
2020-04-07T15:59:50.908+0200 INFO procs/procs.go:105 Process watcher disabled
2020-04-07T15:59:50.924+0200 INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-04-07T15:59:50.924+0200 INFO instance/beat.go:439 packetbeat start running.
2020-04-07T16:00:20.926+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":44}},"total":{"ticks":160,"time":{"ms":172},"value":160},"user":{"ticks":120,"time":{"ms":128}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"0634aeb9-6358-4947-a908-7043973876dc","uptime":{"ms":30132}},"memstats":{"gc_next":39455344,"memory_alloc":21608032,"memory_total":26051720,"rss":66273280},"runtime":{"goroutines":14}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":8},"load":{"1":0.5,"15":0.64,"5":0.75,"norm":{"1":0.0625,"15":0.08,"5":0.0938}}}}}}

Logstash config file /usr/share/logstash/es-first-config.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  if "search" in [request] {
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    if [index] {
    }
    else {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}
output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "127.0.0.1:9200"
    }
  }
}
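The two grok patterns do the heavy lifting in this filter: the first strips the HTTP headers off the captured request to recover the JSON query body (the mutate/update step re-adds the leading brace), and the second pulls the index name out of the URL path, falling back to "All" when there is none. A small Python sketch of the same extraction, on a hypothetical captured request:

```python
import re

# Hypothetical HTTP request, as Packetbeat's send_request option would capture it.
request = ('POST /products/_search HTTP/1.1\n'
           'Host: 127.0.0.1:9200\n'
           'Content-Type: application/json\n'
           '\n'
           '{"query": {"match": {"name": "laptop"}}}')
path = '/products/_search'

# First grok: everything after the first newline-then-"{" becomes query_body;
# the leading "{" is prepended afterwards, mirroring the mutate/update step.
m = re.search(r'\n\{(?P<query_body>.*)', request, re.DOTALL)
query_body = '{' + m.group('query_body') if m else None

# Second grok: the path segment before /_search becomes the index;
# if nothing matched, the filter would fall back to "All".
m = re.search(r'/(?P<index>.*)/_search', path)
index = m.group('index') if m else 'All'

print(index)       # products
print(query_body)  # {"query": {"match": {"name": "laptop"}}}
```

This also shows why nothing may arrive in the output: if Packetbeat never ships events with a [request] field (for example because the HTTP protocol section was not picked up), the filter and the conditional output both fall through silently.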


Here is my Logstash console output:
user@host:/usr/share/logstash$ sudo ./bin/logstash --path.settings /etc/logstash -f es-first-config.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-04-07T15:58:33,546][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-04-07T15:58:33,731][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-04-07T15:58:35,916][INFO ][org.reflections.Reflections] Reflections took 33 ms to scan 1 urls, producing 20 keys and 40 values
[2020-04-07T15:58:37,443][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2020-04-07T15:58:37,630][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2020-04-07T15:58:37,688][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-04-07T15:58:37,694][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-04-07T15:58:37,789][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2020-04-07T15:58:37,850][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-04-07T15:58:37,935][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-04-07T15:58:38,108][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-04-07T15:58:38,114][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/es-first-config.conf"], :thread=>"#<Thread:0x7437b24 run>"}
[2020-04-07T15:58:39,246][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-04-07T15:58:39,314][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-04-07T15:58:39,388][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-04-07T15:58:39,480][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2020-04-07T15:58:39,762][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Best Answer

I'm in the same situation as you, just a bit further along. The problem occurs because you're missing this piece of configuration:

packetbeat.flows:
   timeout: 30s
   period: 10s

processors:
- add_cloud_metadata: ~

Apparently, Packetbeat has to run on the same Elasticsearch node in order to capture traffic on port 9200.

Without this it doesn't start sending data to Logstash. I don't know why, but try it (Stack Overflow-level answer, sorry).

Once the data is being sent, you will see a field with contents like the following:

You will need to enlarge the capacity of a couple of fields (using vectors) so that you don't run into errors (in the index mapping: the request and http ... request fields).
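The answer doesn't show the actual mapping change; the field names below are assumptions based on its hint. One way to keep the long captured request bodies from hitting keyword length limits is a legacy index template that maps them as text before the index is created:

```
PUT _template/packetbeat-request-fields
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "mappings": {
    "properties": {
      "request":    { "type": "text" },
      "query_body": { "type": "text" }
    }
  }
}
```

A higher order makes these properties win over Logstash's default template for the same pattern; the template only takes effect on indices created after it is installed.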

Later on this brought down my node, and that's where I am now, looking for up-to-date information on monitoring search queries in Elasticsearch in 2020.

Regarding "elasticsearch - Monitoring search queries executed in Elasticsearch", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61104860/
