I am trying to create a pipeline to monitor the queries that users execute against Elasticsearch.
I followed this (somewhat outdated?) tutorial:
https://www.elastic.co/blog/monitoring-the-search-queries
I successfully installed Logstash, Packetbeat, and Kibana. All of these applications are now configured locally, and I can access them. I verified that a new index exists in Elasticsearch, named logstash-2020.04.06-000001. I use almost the same configuration as given in the tutorial; the only difference is that my IP address is local, so I use 127.0.0.1 instead of 10.255.4.165, simply because it does not work otherwise. All applications run on their default ports.
The whole setup seems to work fine, but I cannot see any queries being saved anywhere.
Moreover, I cannot create an index pattern in Kibana to track the user queries. I can see that the index exists both through the REST API and in Kibana Index Management, but it is an empty index without any documents. I have already tried restarting all the services, which are configured via systemd.
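To confirm that the index is really empty (and not just invisible to Kibana), the `_count` API is a quick check. A minimal sketch using only the standard library, assuming Elasticsearch on 127.0.0.1:9200 and the default `logstash-*` pattern; it returns None when the server is unreachable:

```python
import json
import urllib.request


def count_docs(host="127.0.0.1", port=9200, index="logstash-*"):
    """Return the document count of an index, or None if ES is unreachable."""
    url = f"http://{host}:{port}/{index}/_count"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)["count"]
    except OSError:
        # Connection refused / timeout: Elasticsearch is not reachable.
        return None


print(count_docs())
```

If this prints 0 while the index exists, the pipeline really is writing nothing, and the problem is upstream (Packetbeat or Logstash), not in Kibana.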
I am using Elasticsearch 7.6.2.
How can I fix this?
Is there perhaps another way to log the executed queries?
I don't necessarily need the Kibana visualization, but I do need to see the user queries in order to tune the relevance scoring of the results.
Packetbeat configuration file /etc/packetbeat/packetbeat.yml:
# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
  device: any

http:
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [9200]
  send_request: true
  include_body_for: ["application/json", "x-www-form-urlencoded"]

#elasticsearch:
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify and additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  # hosts: ["Localhost:9200"]

### Logstash as output
logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5044"]

output.logstash:
  hosts: ["127.0.0.1:5044"]
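Note that the file above follows the layout of the older tutorial. In Packetbeat 7.x the reference configuration namespaces these settings under `packetbeat.*`; a sketch of the equivalent modern layout, with the values carried over from above and everything else left at defaults:

```yaml
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: http
  ports: [9200]
  send_request: true
  include_body_for: ["application/json", "x-www-form-urlencoded"]

output.logstash:
  hosts: ["127.0.0.1:5044"]
```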
user@host:/usr/share/packetbeat$ sudo ./bin/packetbeat -e -c /etc/packetbeat/packetbeat.yml -d "publish"
2020-04-07T15:59:50.855+0200 INFO instance/beat.go:622 Home path: [/usr/share/packetbeat/bin] Config path: [/usr/share/packetbeat/bin] Data path: [/usr/share/packetbeat/bin/data] Logs path: [/usr/share/packetbeat/bin/logs]
2020-04-07T15:59:50.873+0200 INFO instance/beat.go:630 Beat ID: 25a0570e-3395-4a8b-8dcf-38c19560eb44
2020-04-07T15:59:50.897+0200 INFO [api] api/server.go:62 Starting stats endpoint
2020-04-07T15:59:50.906+0200 INFO [api] api/server.go:64 Metrics endpoint listening on: 127.0.0.1:5066 (configured: localhost)
2020-04-07T15:59:50.907+0200 INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:958 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/packetbeat/bin", "data": "/usr/share/packetbeat/bin/data", "home": "/usr/share/packetbeat/bin", "logs": "/usr/share/packetbeat/bin/logs"}, "type": "packetbeat", "uuid": "25a0570e-3395-4a8b-8dcf-38c19560eb44"}}}
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:967 Build info {"system_info": {"build": {"commit": "d57bcf8684602e15000d65b75afcd110e2b12b59", "libbeat": "7.6.2", "time": "2020-03-26T05:09:32.000Z", "version": "7.6.2"}}}
2020-04-07T15:59:50.907+0200 INFO [beat] instance/beat.go:970 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.13.8"}}}
2020-04-07T15:59:50.908+0200 INFO [beat] instance/beat.go:974 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-07T11:24:50+02:00","containerized":false,"name":"host","ip":["127.0.0.1/8","::1/128","192.168.1.11/24","fe80::dacb:8aff:fe80:d3f5/64","172.17.0.1/16"],"kernel_version":"4.9.0-12-amd64","mac":["XXXXXXXXXXXXXXX"],"os":{"family":"debian","platform":"debian","name":"Debian GNU/Linux","version":"9 (stretch)","major":9,"minor":0,"patch":0,"codename":"stretch"},"timezone":"CEST","timezone_offset_sec":7200,"id":"414bf25d70c54332b8cf4d2a82ee0108"}}}
2020-04-07T15:59:50.908+0200 INFO [beat] instance/beat.go:1003 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/usr/share/packetbeat", "exe": "/usr/share/packetbeat/bin/packetbeat", "name": "packetbeat", "pid": 9679, "ppid": 9678, "seccomp": {"mode":"filter"}, "start_time": "2020-04-07T15:59:50.259+0200"}}}
2020-04-07T15:59:50.908+0200 INFO instance/beat.go:298 Setup Beat: packetbeat; Version: 7.6.2
2020-04-07T15:59:50.908+0200 INFO [publisher] pipeline/module.go:110 Beat name: host
2020-04-07T15:59:50.908+0200 INFO procs/procs.go:105 Process watcher disabled
2020-04-07T15:59:50.924+0200 INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-04-07T15:59:50.924+0200 INFO instance/beat.go:439 packetbeat start running.
2020-04-07T16:00:20.926+0200 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":44}},"total":{"ticks":160,"time":{"ms":172},"value":160},"user":{"ticks":120,"time":{"ms":128}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"0634aeb9-6358-4947-a908-7043973876dc","uptime":{"ms":30132}},"memstats":{"gc_next":39455344,"memory_alloc":21608032,"memory_total":26051720,"rss":66273280},"runtime":{"goroutines":14}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":8},"load":{"1":0.5,"15":0.64,"5":0.75,"norm":{"1":0.0625,"15":0.08,"5":0.0938}}}}}}
input {
  beats {
    port => 5044
  }
}

filter {
  if "search" in [request] {
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    if [index] {
    }
    else {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}

output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "127.0.0.1:9200"
    }
  }
}
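To sanity-check what those grok patterns actually capture, the same regexes can be tried in Python's `re` (a rough equivalent only; grok uses Oniguruma, and the sample request and index name below are made up):

```python
import re

# A made-up example of the "request" field Packetbeat ships to Logstash:
# the HTTP request line, headers, a blank line, then the JSON body.
request = (
    "POST /myindex/_search HTTP/1.1\n"
    "Host: 127.0.0.1:9200\n"
    "Content-Type: application/json\n"
    "\n"
    '{"query": {"match_all": {}}}'
)
path = "/myindex/_search"

# grok: match => { "request" => ".*\n\{(?<query_body>.*)" }
# Matches a newline followed by "{", i.e. the start of the JSON body.
m = re.search(r".*\n\{(?P<query_body>.*)", request)
query_body = m.group("query_body")

# mutate: update => { "query_body" => "{%{query_body}" }
# Re-adds the opening brace that the pattern consumed.
query_body = "{" + query_body

# grok: match => { "path" => "\/(?<index>.*)\/_search" }
pm = re.search(r"/(?P<index>.*)/_search", path)
index = pm.group("index") if pm else "All"

print(query_body)  # {"query": {"match_all": {}}}
print(index)       # myindex
```

Note that the first pattern only captures the body up to the first newline, so multi-line (pretty-printed) query bodies would be truncated.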
user@host:/usr/share/logstash$ sudo ./bin/logstash --path.settings /etc/logstash -f es-first-config.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-04-07T15:58:33,546][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-04-07T15:58:33,731][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-04-07T15:58:35,916][INFO ][org.reflections.Reflections] Reflections took 33 ms to scan 1 urls, producing 20 keys and 40 values
[2020-04-07T15:58:37,443][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2020-04-07T15:58:37,630][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2020-04-07T15:58:37,688][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-04-07T15:58:37,694][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-04-07T15:58:37,789][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2020-04-07T15:58:37,850][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-04-07T15:58:37,935][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-04-07T15:58:38,108][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-04-07T15:58:38,114][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/es-first-config.conf"], :thread=>"#<Thread:0x7437b24 run>"}
[2020-04-07T15:58:39,246][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-04-07T15:58:39,314][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-04-07T15:58:39,388][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-04-07T15:58:39,480][INFO ][org.logstash.beats.Server][main] Starting server on port: 5044
[2020-04-07T15:58:39,762][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Best Answer
I am in the same situation as you, just a bit further along. The problem described above occurs because you are missing this piece of configuration:
packetbeat.flows:
  timeout: 30s
  period: 10s

processors:
- add_cloud_metadata: ~
Regarding "elasticsearch - Monitoring search queries executed in Elasticsearch", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61104860/