ElasticSearch(5)Logstash

 

Install Logstash
Find the binary file here https://www.elastic.co/downloads/logstash

Download version 7.0.1 to match Elasticsearch and Kibana
> wget https://artifacts.elastic.co/downloads/logstash/logstash-7.0.1.tar.gz

Starting guide https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html

Unzip the file, place it in the working directory, and create the symlinks
> sudo ln -s /home/carl/tool/logstash-7.0.1 /opt/logstash-7.0.1
> sudo ln -s /opt/logstash-7.0.1 /opt/logstash
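To confirm the binary runs from the linked location (assuming the /opt/logstash symlink above), we can print the version; it should report 7.0.1.
> /opt/logstash/bin/logstash --version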

Data Source -> Inputs -> Filters -> Outputs -> Elasticsearch

Standard input and output
> bin/logstash -e 'input { stdin {} } output { stdout {} }'

Type
hello world
{
       "message" => "hello world",
    "@timestamp" => 2019-05-21T03:25:26.369Z,
      "@version" => "1",
          "host" => "ubuntu-master"
}
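The Filters stage sits between inputs and outputs. As a minimal sketch (the field name app and its value demo are just placeholders), a mutate filter can add a field to every event before it reaches the output:
> bin/logstash -e 'input { stdin {} } filter { mutate { add_field => { "app" => "demo" } } } output { stdout {} }'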

Fix the configuration issue
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Change the configuration
> sudo vi /etc/sysctl.conf
vm.max_map_count=262144
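To apply the new limit without a reboot, reload the sysctl settings and verify the value:
> sudo sysctl -p
> sysctl vm.max_map_count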


Here is the sample configuration
> cat config/logstash-sample.conf
input {
  file {
    path => ["/opt/logstash/log1.log", "/opt/logstash/log2.log"]
    type => "demo_log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://ubuntu-master:9200"]
    index => "demolog-%{+YYYY.MM.dd}"
  }
}

How to run it
> bin/logstash -f config/logstash-sample.conf
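Once the log files get some lines, we can check that the daily index shows up in Elasticsearch (ubuntu-master is the host used in the config above):
> curl 'http://ubuntu-master:9200/_cat/indices/demolog-*?v'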

Use the type field to route different logs to different outputs
> cat config/logstash-multi.conf
input {
  file {
    path => ["/opt/logstash/log1.log", "/opt/logstash/log1.txt"]
    type => "demolog1"
    start_position => "beginning"
  }
  file {
    path => ["/opt/logstash/log2.log"]
    type => "demolog2"
    start_position => "beginning"
  }
}
output {
  if [type] == "demolog1" {
    elasticsearch {
      hosts => ["http://ubuntu-master:9200"]
      index => "demolog1-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "demolog2" {
    elasticsearch {
      hosts => ["http://ubuntu-master:9200"]
      index => "demolog2-%{+YYYY.MM.dd}"
    }
  }
}
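We can check the configuration syntax before starting the pipeline:
> bin/logstash -f config/logstash-multi.conf --config.test_and_exit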

Clean up old logging indices if needed
https://blog.csdn.net/xuezhangjun0121/article/details/80913678
> cat es-index-clear.sh
#!/bin/bash
# es-index-clear: delete indices whose date suffix is from 15 days ago
LAST_DATE=`date -d "-15 days" "+%Y.%m.%d"`
curl -XDELETE 'http://ubuntu-master:9200/*-'${LAST_DATE}'*'
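A quick way to try the script by hand: make it executable, list the indices that match the pattern (the date value here comes from the date example below), then run the script once:
> chmod +x es-index-clear.sh
> curl 'http://ubuntu-master:9200/_cat/indices/*-2019.05.07*?v'
> ./es-index-clear.sh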

We can schedule the deletion task as a cron job, running daily at 01:00
0 1 * * * /opt/logstash/es-index-clear.sh
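To install the entry, edit the crontab of the user that should run it, and list the installed entries to verify:
> crontab -e
> crontab -l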

> date -d "-15 days" "+%Y.%m.%d"
2019.05.07

References:
https://www.cnblogs.com/yincheng/p/logstash.html
https://www.jianshu.com/p/ef6a57309c72
https://www.jianshu.com/p/25ed5ed46682
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
https://blog.csdn.net/dwyane__wade/article/details/80168926
https://stackoverflow.com/questions/42889241/how-to-increase-vm-max-map-count

Spark and Elasticsearch
https://www.elastic.co/guide/en/elasticsearch/hadoop/master/spark.html#CO35-1
https://blog.csdn.net/lsshlsw/article/details/49007787
https://www.cnblogs.com/hapjin/p/9550430.html
https://www.jianshu.com/p/a5c669d0ceba
http://txworking.github.io/blog/2015/01/14/use-spark-with-elasticsearch/

