Components:
1) Filebeat. The log-collection agent. In testing it proved simple to use and consumes fewer resources than Flume, but its resource usage is not self-limiting, so a few parameters need tuning. Filebeat consumes both memory and CPU, so watch it carefully (a resource-tuning sketch follows this list).
2) Kafka. A popular message queue that provides both storage and buffering in a log pipeline. Too many Kafka topics cause serious performance problems, so group the collected data into categories, or go further and split it across separate Kafka clusters. Kafka is light on CPU, while large memory and fast disks noticeably improve its performance.
3) Logstash. Mainly used to filter and reshape the data. This component is greedy and consumes a lot of resources, so never co-locate it with application processes. It is, however, essentially a stateless compute node and can be scaled out whenever needed.
4) Elasticsearch. Stores very large volumes of log data. Keep individual indices from growing too large: depending on volume, index by day or by month, which also makes deletion easy.
5) Kibana. A visualization component that integrates tightly with Elasticsearch.
The more of these components you chain together, the more graceful the overall pipeline becomes.
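A minimal sketch of the Filebeat resource tuning mentioned in item 1, using standard filebeat.yml settings (max_procs and the in-memory queue); the numbers are illustrative assumptions to adjust per host:
max_procs: 1                     # example: cap Filebeat to one CPU core
queue.mem.events: 2048           # example: size of the in-memory event queue
queue.mem.flush.min_events: 512  # publish once this many events are buffered
queue.mem.flush.timeout: 5s      # ...or after this timeout, whichever comes first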
I. Elasticsearch
1. Installation
# rpm -ivh elasticsearch-7.11.2-x86_64.rpm
2. Edit the Elasticsearch configuration file
# vi /etc/elasticsearch/elasticsearch.yml
cluster.name: ycyt-es
node.name: node-2
path.data: /home/elk/es-data
path.logs: /home/elk/es-logs
network.host: 192.101.11.161
http.port: 9200
discovery.seed_hosts: ["192.101.11.159", "192.101.11.161", "192.101.11.233"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
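The RPM also reads JVM options from /etc/elasticsearch. As a hedged example (the 4g figure is an assumption; a common rule of thumb is roughly half of the node's RAM, and not more than ~31g), the heap size can be pinned with a drop-in file:
# vi /etc/elasticsearch/jvm.options.d/heap.options
-Xms4g
-Xmx4g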
3. Start Elasticsearch
# systemctl start elasticsearch
# systemctl enable elasticsearch
# systemctl status elasticsearch
4. Verification
Open a browser and enter the machine's IP address plus the port number:
http://192.101.11.233:9200
http://192.101.11.233:9200/_cat/nodes
http://192.101.11.233:9200/_cat/indices?v
Verify that Elasticsearch is receiving data from the Filebeat → Logstash pipeline with the following request:
http://192.101.11.233:9200/filebeat-*/_search?pretty
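The same checks can be scripted from the shell with curl; _cluster/health, _cat/nodes and the index search endpoint used above are standard Elasticsearch APIs:
# curl 'http://192.101.11.233:9200/_cluster/health?pretty'
# curl 'http://192.101.11.233:9200/_cat/nodes?v'
# curl 'http://192.101.11.233:9200/filebeat-*/_search?pretty&size=1'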
II. Kibana
1. Installation
# rpm -ivh kibana-7.11.2-x86_64.rpm
2. Configuration
# vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.101.11.231"
server.name: "ycyt-kibana"
elasticsearch.hosts: ["http://192.101.11.159:9200","http://192.101.11.161:9200","http://192.101.11.233:9200"]
i18n.locale: "zh-CN"
xpack.encryptedSavedObjects.encryptionKey: encryptedSavedObjects12345678909876543210
xpack.security.encryptionKey: encryptionKeysecurity12345678909876543210
xpack.reporting.encryptionKey: encryptionKeyreporting12345678909876543210
xpack.reporting.capture.browser.chromium.disableSandbox: true
xpack.reporting.capture.browser.chromium.proxy.enabled: false
xpack.reporting.enabled: false
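The xpack.*.encryptionKey values above are only placeholder strings; each key should be a random string of at least 32 characters. One way to generate such keys (a sketch, any random-string generator will do):
# openssl rand -hex 32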
3. Startup
# systemctl start kibana
# systemctl enable kibana
# systemctl status kibana
4. Open Kibana to confirm it started successfully and to search the collected data:
http://192.101.11.231:5601
III. Logstash
1. Installation
# rpm -ivh logstash-7.11.2-x86_64.rpm
2. Create the pipeline configuration
# cd /etc/logstash/conf.d
# touch ycyt_tpl_01.conf
# vi ycyt_tpl_01.conf
input {
beats {
port => 8031
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:log_date} \[%{DATA:app.thread}\] %{LOGLEVEL:app.level}%{SPACE}*\[%{DATA:app.class}\] %{DATA:app.java_file}:%{DATA:app.code_line} - %{GREEDYDATA:app.message}"}
# remove_field => ["message","log_date"]
}
date {
# match => [ "log_date", "yyyy-MM-dd HH:mm:ss,SSS" ]
match => [ "log_date", "ISO8601" ]
target => "@timestamp"
}
# mutate {
# gsub => ["log.file.path", "[\\]", "/"]
# }
ruby {
code => "
path = event.get('log')['file']['path']
puts format('path = %<path>s', path: path)
if (!path.nil?) && (!path.empty?)
fileFullName = path.split('/')[-1]
event.set('file_full_name', fileFullName)
event.set('file_name', fileFullName.split('.')[0])
else
event.set('file_full_name', event.get('beat'))
event.set('file_name', event.get('beat'))
end
"
}
# geoip {
# source => "clientIp"
# }
}
output {
elasticsearch {
hosts => ["192.101.11.230:9200","192.101.11.232:9200"]
index => "%{[file_name]}-%{+YYYY.MM.dd}"
# index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
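Before starting the service, the pipeline can be syntax-checked with Logstash's config-test mode (the binary and settings paths below are the RPM defaults). For reference, the grok pattern above expects application log lines shaped like the hypothetical sample under the command (the class and file names are made up); note that the first 24 characters are the timestamp, which is also what the Filebeat multiline pattern ^.{24}\[ used later relies on:
# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ycyt_tpl_01.conf --config.test_and_exit
2021-03-10 16:25:01,123 [main] INFO  [com.ycyt.demo.OrderService] OrderService.java:42 - order created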
3. Startup
# systemctl start logstash
# systemctl enable logstash
# systemctl status logstash
IV. Filebeat
1. Installation
# rpm -ivh /opt/filebeat-7.11.2-x86_64.rpm
2. Configuration
# cd /etc/filebeat
# vi filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /home/admin/logs/ycyt/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
multiline.pattern: ^.{24}\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
multiline.negate: true
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
multiline.match: after
# filestream is an experimental input. It is going to replace log input in the future.
- type: filestream
# Change to true to enable this input configuration.
enabled: false
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
# Glob pattern for configuration loading
path: ${path.config}/modules.d/*.yml
# Set to true to enable config reloading
reload.enabled: false
# Period on which files under path should be checked for changes
#reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
# ------------------------------ Logstash Output -------------------------------
output.logstash:
# The Logstash hosts
hosts: ["localhost:8031"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
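Filebeat ships with self-test subcommands that are worth running before enabling the service; both are part of the standard filebeat CLI:
# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml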
3. Startup
# systemctl start filebeat
# systemctl enable filebeat
# systemctl status filebeat
##################################### Kafka mode #########################################
For Kafka cluster setup, see:
https://www.iteye.com/blog/user/maosheng/blog/2520386
Logstash configuration:
# cd /etc/logstash/conf.d
# vi ycyt_tpl_01.conf
input {
kafka {
enable_auto_commit => true
auto_commit_interval_ms => "1000"
codec => "json"
bootstrap_servers => "192.101.11.159:9092,192.101.11.161:9092,192.101.11.231:9092"
topics => ["logs"]
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:log_date} \[%{DATA:app.thread}\] %{LOGLEVEL:app.level}%{SPACE}*\[%{DATA:app.class}\] %{DATA:app.java_file}:%{DATA:app.code_line} - %{GREEDYDATA:app.message}"}
# remove_field => ["message","log_date"]
}
date {
# match => [ "log_date", "yyyy-MM-dd HH:mm:ss,SSS" ]
match => [ "log_date", "ISO8601" ]
target => "@timestamp"
}
# mutate {
# gsub => ["log.file.path", "[\\]", "/"]
# }
ruby {
code => "
path = event.get('log')['file']['path']
puts format('path = %<path>s', path: path)
if (!path.nil?) && (!path.empty?)
fileFullName = path.split('/')[-1]
event.set('file_full_name', fileFullName)
event.set('file_name', fileFullName.split('.')[0])
else
event.set('file_full_name', event.get('beat'))
event.set('file_name', event.get('beat'))
end
"
}
# geoip {
# source => "clientIp"
# }
}
output {
elasticsearch {
hosts => ["192.101.11.159:9200","192.101.11.161:9200","192.101.11.233:9200"]
index => "%{[file_name]}-%{+YYYY.MM.dd}"
# index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
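To confirm that events actually reach Kafka before Logstash consumes them, list the topics and attach a console consumer to the logs topic (the /opt/kafka path is an assumption; use wherever your Kafka distribution is installed):
# /opt/kafka/bin/kafka-topics.sh --bootstrap-server 192.101.11.159:9092 --list
# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.101.11.159:9092 --topic logs --from-beginning --max-messages 5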
Filebeat configuration:
# cd /etc/filebeat
# vi filebeat.yml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /home/admin/logs/ycyt/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
multiline.pattern: ^.{24}\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
multiline.negate: true
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
multiline.match: after
# filestream is an experimental input. It is going to replace log input in the future.
- type: filestream
# Change to true to enable this input configuration.
enabled: false
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
# Glob pattern for configuration loading
path: ${path.config}/modules.d/*.yml
# Set to true to enable config reloading
reload.enabled: false
# Period on which files under path should be checked for changes
#reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# ================================== Kafka Output ===================================
# Configure what output to use when sending the data collected by the beat.
output.kafka:
enabled: true
hosts: ["192.101.11.159:9092","192.101.11.161:9092","192.101.11.231:9092"]
topic: 'logs'
#version: '0.10.2.0'
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
# hosts: ["localhost:8031"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
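If several services ship logs through the same Filebeat, the Kafka topic can also be chosen per event instead of being hard-coded. A minimal sketch, assuming a custom fields.log_topic field is added under the corresponding input (the field name is hypothetical):
output.kafka:
  enabled: true
  hosts: ["192.101.11.159:9092","192.101.11.161:9092","192.101.11.231:9092"]
  topic: '%{[fields.log_topic]}'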
##############################################################################