I am trying to set up the ELK stack on an Ubuntu sandbox and have run into a problem: Logstash is not sending data to Elasticsearch. I followed the Elasticsearch documentation.
The connection between Kibana and Elasticsearch looks fine; Kibana reports that it cannot find any data. I have spent several hours trying to figure this out, with no luck.
Any help resolving this is appreciated. Thank you very much!
Here are my setup details.
Logstash setup:
sirishg@sirishg-vm:/u02/app/logstash-2.1.1/bin$ ./logstash -f /u02/app/logstash-2.1.1/first-pipeline.conf
Settings: Default filter workers: 1
Logstash startup completed
first-pipeline.conf:
# The # character at the beginning of a line indicates a comment. Use comments to describe your configuration.
input {
  file {
    path => "/u02/app/logstash-tutorial-dataset.log"
    start_position => beginning
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
{"cluster_name":"my-application","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}
Elasticsearch startup:
sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-16 18:17:36,591][INFO ][node ] [node-1] version[2.1.1], pid[3596], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-16 18:17:36,594][INFO ][node ] [node-1] initializing ...
[2016-01-16 18:17:36,798][INFO ][plugins ] [node-1] loaded [], sites []
[2016-01-16 18:17:36,907][INFO ][env ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-16 18:17:43,349][INFO ][node ] [node-1] initialized
[2016-01-16 18:17:43,350][INFO ][node ] [node-1] starting ...
[2016-01-16 18:17:43,693][INFO ][transport ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-16 18:17:43,713][INFO ][discovery ] [node-1] my-application/8bfTdwZcSzaNC9_P2VYYvw
[2016-01-16 18:17:46,878][INFO ][cluster.service ] [node-1] new_master {node-1}{8bfTdwZcSzaNC9_P2VYYvw}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-16 18:17:46,980][INFO ][http ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-16 18:17:46,991][INFO ][node ] [node-1] started
[2016-01-16 18:17:47,318][INFO ][gateway ] [node-1] recovered [1] indices into cluster_state
[2016-01-16 18:20:03,866][INFO ][rest.suppressed ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1452986403826}
[logstash-*] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:636)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
Kibana startup:
sirishg@sirishg-vm:/u02/app/kibana-4.3.1-linux-x86/bin$ ./kibana
log [18:18:36.697] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [18:18:36.786] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:18:36.852] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [18:18:36.875] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [18:18:36.883] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [18:18:36.907] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [18:18:36.936] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [18:18:36.950] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [18:18:37.078] [info][listening] Server running at http://0.0.0.0:5601
log [18:18:37.446] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
Error shown in the Kibana UI (browser console):
Error: Please specify a default index pattern
KbnError@http://localhost:5601/bundles/commons.bundle.js:58172:21
NoDefaultIndexPattern@http://localhost:5601/bundles/commons.bundle.js:58325:6
loadDefaultIndexPattern/<@http://localhost:5601/bundles/kibana.bundle.js:97911:1
processQueue@http://localhost:5601/bundles/commons.bundle.js:42358:29
scheduleProcessQueue/<@http://localhost:5601/bundles/commons.bundle.js:42374:28
$RootScopeProvider/this.$get</Scope.prototype.$eval@http://localhost:5601/bundles/commons.bundle.js:43602:17
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://localhost:5601/bundles/commons.bundle.js:43413:16
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://localhost:5601/bundles/commons.bundle.js:43710:14
$LocationProvider/this.$get</<@http://localhost:5601/bundles/commons.bundle.js:39839:14
jQuery.event.dispatch@http://localhost:5601/bundles/commons.bundle.js:22720:16
jQuery.event.add/elemData.handle@http://localhost:5601/bundles/commons.bundle.js:22407:7
{:timestamp=>"2016-01-17T11:07:06.287000-0500", :message=>"Reading config file", :config_file=>"/u02/app/logstash-2.1.1/first-pipeline.conf", :level=>:debug, :file=>"logstash/agent.rb", :line=>"325", :method=>"local_config"}
{:timestamp=>"2016-01-17T11:07:06.420000-0500", :message=>"Compiled pipeline code:\n @inputs = []\n @filters = []\n @outputs = []\n @periodic_flushers = []\n @shutdown_flushers = []\n\n @input_file_1 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"path\" => (\"/u02/app/logstash-tutorial-dataset.log\") }, { \"start_position\" => (\"beginning\") }))\n\n @inputs << @input_file_1\n\n @filter_grok_2 = plugin(\"filter\", \"grok\", LogStash::Util.hash_merge_many({ \"match\" => {(\"message\") => (\"%{COMBINEDAPACHELOG}\")} }))\n\n @filters << @filter_grok_2\n\n @filter_grok_2_flush = lambda do |options, &block|\n @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2)\n\n events = @filter_grok_2.flush(options)\n\n return if events.nil? || events.empty?\n\n @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2, :events => events)\n\n events = @filter_geoip_3.multi_filter(events)\n \n\n\n events.each{|e| block.call(e)}\n end\n\n if @filter_grok_2.respond_to?(:flush)\n @periodic_flushers << @filter_grok_2_flush if @filter_grok_2.periodic_flush\n @shutdown_flushers << @filter_grok_2_flush\n end\n\n @filter_geoip_3 = plugin(\"filter\", \"geoip\", LogStash::Util.hash_merge_many({ \"source\" => (\"clientip\") }))\n\n @filters << @filter_geoip_3\n\n @filter_geoip_3_flush = lambda do |options, &block|\n @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3)\n\n events = @filter_geoip_3.flush(options)\n\n return if events.nil? || events.empty?\n\n @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3, :events => events)\n\n \n\n events.each{|e| block.call(e)}\n end\n\n if @filter_geoip_3.respond_to?(:flush)\n @periodic_flushers << @filter_geoip_3_flush if @filter_geoip_3.periodic_flush\n @shutdown_flushers << @filter_geoip_3_flush\n end\n\n @output_elasticsearch_4 = plugin(\"output\", \"elasticsearch\", LogStash::Util.hash_merge_many({ \"hosts\" => [(\"localhost:9200\")] }))\n\n @outputs << @output_elasticsearch_4\n\n @output_stdout_5 = plugin(\"output\", \"stdout\", LogStash::Util.hash_merge_many({ \"codec\" => (\"rubydebug\") }))\n\n @outputs << @output_stdout_5\n\n def filter_func(event)\n events = [event]\n @logger.debug? && @logger.debug(\"filter received\", :event => event.to_hash)\n events = @filter_grok_2.multi_filter(events)\n events = @filter_geoip_3.multi_filter(events)\n \n events\n end\n def output_func(event)\n @logger.debug? && @logger.debug(\"output received\", :event => event.to_hash)\n @output_elasticsearch_4.handle(event)\n @output_stdout_5.handle(event)\n \n end", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"38", :method=>"initialize"}
{:timestamp=>"2016-01-17T11:07:06.426000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.451000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"codec", :name=>"plain", :path=>"logstash/codecs/plain", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.465000-0500", :message=>"config LogStash::Codecs::Plain/@charset = \"UTF-8\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.468000-0500", :message=>"config LogStash::Inputs::File/@path = [\"/u02/app/logstash-tutorial-dataset.log\"]", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.469000-0500", :message=>"config LogStash::Inputs::File/@start_position = \"beginning\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.472000-0500", :message=>"config LogStash::Inputs::File/@codec = <LogStash::Codecs::Plain charset=>\"UTF-8\">", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.480000-0500", :message=>"config LogStash::Inputs::File/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.481000-0500", :message=>"config LogStash::Inputs::File/@stat_interval = 1", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.492000-0500", :message=>"config LogStash::Inputs::File/@discover_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.493000-0500", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.496000-0500", :message=>"config LogStash::Inputs::File/@delimiter = \"\\n\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.498000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"filter", :name=>"grok", :path=>"logstash/filters/grok", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.515000-0500", :message=>"config LogStash::Filters::Grok/@match = {\"message\"=>\"%{COMBINEDAPACHELOG}\"}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.524000-0500", :message=>"config LogStash::Filters::Grok/@add_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.532000-0500", :message=>"config LogStash::Filters::Grok/@remove_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.535000-0500", :message=>"config LogStash::Filters::Grok/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.536000-0500", :message=>"config LogStash::Filters::Grok/@remove_field = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
Elasticsearch startup (second run):
sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-17 11:00:23,467][INFO ][node ] [node-1] version[2.1.1], pid[3418], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-17 11:00:23,470][INFO ][node ] [node-1] initializing ...
[2016-01-17 11:00:23,698][INFO ][plugins ] [node-1] loaded [], sites []
[2016-01-17 11:00:23,853][INFO ][env ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-17 11:00:27,412][INFO ][node ] [node-1] initialized
[2016-01-17 11:00:27,412][INFO ][node ] [node-1] starting ...
[2016-01-17 11:00:27,605][INFO ][transport ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-17 11:00:27,616][INFO ][discovery ] [node-1] my-application/rd4S1ZOdQXOj3_g-N22NnQ
[2016-01-17 11:00:31,121][INFO ][cluster.service ] [node-1] new_master {node-1}{rd4S1ZOdQXOj3_g-N22NnQ}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-17 11:00:31,259][INFO ][http ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-17 11:00:31,260][INFO ][node ] [node-1] started
[2016-01-17 11:00:31,830][INFO ][gateway ] [node-1] recovered [2] indices into cluster_state
Best answer
Did you ever get this working? A few comments:
1) At times your Kibana is running on "0.0.0.0", which suggests something is wrong; check the configuration and the connection to Elasticsearch.
2) Which index are you writing the data into? logstash-*? (See the check below this list.)
3) If all else fails, upgrade to the current 2.3.* (Elasticsearch) and 4.4.* (Kibana).
4) For Logstash to actually pick up and read the file (and send the data to Elasticsearch), you should write to the file again (changing the file's creation/modification timestamp). This part is not always straightforward, because Logstash's file input effectively keeps a pointer to the last line appended to the file. (See the sketch below this list.)
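For point 2, a quick way to see which indices actually exist is the cat API; a diagnostic sketch, assuming Elasticsearch is reachable on localhost:9200 as in the logs above:

curl 'http://localhost:9200/_cat/indices?v'             # list all indices; a working pipeline creates logstash-YYYY.MM.DD
curl 'http://localhost:9200/logstash-*/_count?pretty'   # document count for the logstash-* pattern Kibana queries

For point 4, a common testing workaround (a sketch of my own, not part of the original answer) is to disable the file input's read-position bookkeeping, so Logstash re-reads the file from the beginning on every run:

input {
  file {
    path => "/u02/app/logstash-tutorial-dataset.log"
    start_position => "beginning"
    # Discard the sincedb bookmark so the whole file is re-read on each
    # run (useful for testing, not for production).
    sincedb_path => "/dev/null"
  }
}

If documents then show up under logstash-*, create the logstash-* index pattern on Kibana's Settings page to clear the "Please specify a default index pattern" error.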
You may well have it working by now, so perhaps I am stating the obvious, but on the other hand this might help someone.
For more on elasticsearch - installing Logstash-Elasticsearch-Kibana on Ubuntu, see the similar question on Stack Overflow: https://stackoverflow.com/questions/34833423/