Overview:
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine that provides three core capabilities: collecting, analyzing, and storing data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful API, multiple data sources, and automatic search load balancing.
Logstash is a tool for collecting, parsing, and filtering logs, and supports a large number of input methods. It typically runs in a client/server architecture: a client is installed on each host whose logs need to be collected, and the server filters and transforms the logs received from all nodes before forwarding them to Elasticsearch.
In this setup, Logstash collects the logs produced by the application servers and stores them in an Elasticsearch cluster; Kibana then queries the ES cluster and renders the data as charts for the browser.
How Logstash works:
Logstash processes events in three stages: inputs → filters → outputs. It is a tool that receives, processes, and forwards logs, and it supports system logs, web-server logs, error logs, application logs — in short, any log type that can be emitted.
Kibana provides a web UI that visualizes Logstash and Elasticsearch data through reports and charts, helping you aggregate, analyze, and search important log data.
Option 1: logstash -> elasticsearch -> kibana (used in this guide)
Option 2: filebeat -> kafka -> logstash -> elasticsearch -> kibana
Part 1: Prepare the images
docker pull elasticsearch:7.9.2
docker pull logstash:7.9.2
docker pull kibana:7.9.2
docker pull mobz/elasticsearch-head:5
docker pull openjdk:15.0.1
docker pull nginx:1.17.2
docker save -o /opt/images/elasticsearch-7.9.2.tar elasticsearch:7.9.2
docker save -o /opt/images/logstash-7.9.2.tar logstash:7.9.2
docker save -o /opt/images/kibana-7.9.2.tar kibana:7.9.2
docker save -o /opt/images/elasticsearch-head-5.tar mobz/elasticsearch-head:5
docker save -o /opt/images/openjdk-15.0.1.tar openjdk:15.0.1
docker save -o /opt/images/nginx-1.17.2.tar nginx:1.17.2
Download the images locally, then upload them to the target server and load them:
docker load < elasticsearch-7.9.2.tar
docker load < logstash-7.9.2.tar
docker load < kibana-7.9.2.tar
docker load < elasticsearch-head-5.tar
docker load < openjdk-15.0.1.tar
docker load < nginx-1.17.2.tar
Download jdk-15.0.1_linux-x64_bin.tar.gz and install Java 15.0.1 on the target server:
# java -version
java version "15.0.1" 2020-10-20
Java(TM) SE Runtime Environment (build 15.0.1+9-18)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.1+9-18, mixed mode, sharing)
Part 2: Install Docker
1. Update yum packages
# yum update
2. Check whether Docker is already installed
# rpm -qa|grep docker
3. Remove any old Docker version
# yum remove docker docker-common docker-selinux docker-engine
4. Install prerequisite packages
# yum install -y yum-utils device-mapper-persistent-data lvm2
5. Add the Docker yum repository
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
6. Install Docker
# yum -y install docker-ce
7. Start Docker
# systemctl start docker
8. Check the Docker version
# docker version
Part 3: Install docker-compose
1. Download a specific docker-compose release
Download the desired version from https://github.com/docker/compose/releases,
upload it to /usr/local/bin/ on the server, and rename it to docker-compose.
2. Make the binary executable
# sudo chmod +x /usr/local/bin/docker-compose
# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
3. Verify that docker-compose is installed
# docker-compose --version
docker-compose version 1.27.4, build 1719ceb
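As an alternative to the manual upload, the release binary can be fetched directly with curl. A sketch — the pinned version 1.27.4 is an assumption matching the output above; adjust it to the release you need:

```shell
# Build the GitHub release URL for a pinned docker-compose version.
# COMPOSE_VERSION is an assumption; change it to the release you want.
COMPOSE_VERSION=1.27.4
URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"
# curl -L "$URL" -o /usr/local/bin/docker-compose   # uncomment to download
```

After downloading, the chmod and symlink steps above still apply.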
Part 4: Create docker-compose.yml
docker-compose.yml has three top-level sections: version, services, and networks.
# mkdir -p /usr/local/elk
# cd /usr/local/elk
# vi docker-compose.yml
version: '3'
networks:
  elk:
services:
  es01:
    image: elasticsearch:7.9.2
    container_name: es01
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node1
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es01/data:/usr/share/elasticsearch/data
      - ./es01/logs:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
    networks:
      - elk
  es02:
    image: elasticsearch:7.9.2
    container_name: es02
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node2
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es02/data:/usr/share/elasticsearch/data
      - ./es02/logs:/usr/share/elasticsearch/logs
    depends_on:
      - es01
    networks:
      - elk
  es03:
    image: elasticsearch:7.9.2
    container_name: es03
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node3
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es03/data:/usr/share/elasticsearch/data
      - ./es03/logs:/usr/share/elasticsearch/logs
    depends_on:
      - es02
    networks:
      - elk
  logstash:
    image: logstash:7.9.2
    container_name: logstash
    restart: always
    networks:
      - elk
    ports:
      - "8002:9601"
    volumes:
      - /etc/localtime:/etc/localtime
      - ./logstash/config-dir:/config-dir
      - ./logstash/config:/opt/logstash/config
    command: logstash -f /config-dir
    depends_on:
      - es03
  kibana:
    image: kibana:7.9.2
    container_name: kibana
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - elk
    ports:
      - "5601:5601"
    depends_on:
      - es03
  nginx:
    image: nginx:1.17.2
    container_name: nginx
    environment:
      - TZ=Asia/Shanghai
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      # - ./nginx/conf/htpasswd:/etc/nginx/htpasswd
    networks:
      - elk
    ports:
      - "8003:80"
    depends_on:
      - kibana
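Before starting anything, it may be worth sanity-checking that the file defines every expected service. A minimal sketch — the helper name `check_services` is ours, not part of docker-compose:

```shell
# check_services FILE SERVICE... -> prints each service name missing from a
# compose file (service keys are indented two spaces under "services:").
check_services() {
  local file=$1; shift
  local svc
  for svc in "$@"; do
    grep -q "^  ${svc}:" "$file" || echo "missing: ${svc}"
  done
}
```

Usage: `check_services docker-compose.yml es01 es02 es03 logstash kibana nginx` (no output means all services are present); `docker-compose config -q` additionally validates the YAML itself.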
Part 5: Create the es.yml configuration
# cd /usr/local/elk/
# vi es.yml
# Cluster name
cluster.name: elasticsearch-cluster
# Node name
node.name: es-node1
bootstrap.memory_lock: true
#cluster.initial_master_nodes: ["es01","es02","es03"]
# Bind host; 0.0.0.0 binds to all interfaces on this node
network.host: 0.0.0.0
# Address other nodes use to reach this node; if unset it is auto-detected, but it must be a real IP of this machine
network.publish_host: 192.101.11.158
# HTTP port for external clients (default 9200)
http.port: 9200
# TCP port for inter-node transport (default 9300)
transport.tcp.port: 9300
# Whether to allow cross-origin requests (default false)
http.cors.enabled: true
# Allowed origins when CORS is enabled; defaults to *, meaning every origin. To allow only certain sites, use a regular expression, e.g. /https?:\/\/localhost(:[0-9]+)?/ for local addresses only
http.cors.allow-origin: "*"
# Whether this node may act as the master node
node.master: true
# Whether this node stores data
node.data: true
# ip:port of all master-eligible nodes
discovery.seed_hosts: ["192.101.11.158:9300"]
# How many master-eligible nodes must be reachable during master election; prevents split-brain
discovery.zen.minimum_master_nodes: 1
# Headers allowed in cross-origin requests; defaults to X-Requested-With,Content-Type,Content-Length
http.cors.allow-headers: Authorization
# Enable the X-Pack authentication mechanism
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Part 6: Create the Elasticsearch mount directories
# mkdir -p /usr/local/elk/es01/data
# mkdir -p /usr/local/elk/es02/data
# mkdir -p /usr/local/elk/es03/data
# mkdir -p /usr/local/elk/es01/logs
# mkdir -p /usr/local/elk/es02/logs
# mkdir -p /usr/local/elk/es03/logs
# chmod -R 777 /usr/local/elk/es01
# chmod -R 777 /usr/local/elk/es02
# chmod -R 777 /usr/local/elk/es03
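The six mkdir/chmod commands above can be collapsed into a loop. A sketch; the helper name `prepare_es_dirs` is ours:

```shell
# prepare_es_dirs BASE NODE... -> creates data/ and logs/ for each node and
# opens up permissions so the elasticsearch user inside the container can write.
prepare_es_dirs() {
  local base=$1; shift
  local node
  for node in "$@"; do
    mkdir -p "${base}/${node}/data" "${base}/${node}/logs"
    chmod -R 777 "${base}/${node}"
  done
}
```

Usage: `prepare_es_dirs /usr/local/elk es01 es02 es03`.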
Part 7: Set the Elasticsearch user passwords
# docker run -d --name elasticSearch -p 9200:9200 -p 9300:9300 -v /usr/local/elk/es01/data:/usr/share/elasticsearch/data -v /usr/local/elk/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -e "discovery.type=single-node" elasticsearch:7.9.2
## Note: discovery.type=single-node
# docker ps
# docker exec -it elasticSearch /bin/bash
####./bin/elasticsearch-setup-passwords auto ## generate random passwords automatically
# ./bin/elasticsearch-setup-passwords interactive ## set the passwords manually
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Note: enter the chosen password (123456 in this guide) for each user.
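While the temporary container is still running, the new credentials can be verified before tearing it down. A sketch that just assembles the curl call (the `es_auth_check` helper name is ours; host and port are taken from the run command above):

```shell
# es_auth_check USER PASS HOST -> prints the curl command that asks
# Elasticsearch to authenticate the given credentials.
es_auth_check() {
  echo "curl -s -u $1:$2 http://$3:9200/_security/_authenticate?pretty"
}
# Execute it against the local container with:
#   eval "$(es_auth_check elastic 123456 localhost)"
```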
# docker stop elasticSearch
# docker rm elasticSearch
Part 8: Create the Kibana configuration
# mkdir -p /usr/local/elk/kibana
# chmod -R 777 /usr/local/elk/kibana
# cd /usr/local/elk/kibana
# vi kibana.yml
server.name: kibana
# Kibana host address; 0.0.0.0 listens on all interfaces
server.host: "0.0.0.0"
# URL Kibana uses to reach Elasticsearch
elasticsearch.hosts: [ "http://es01:9200" ]
elasticsearch.username: 'kibana'
elasticsearch.password: '123456'
# Show the login page
xpack.monitoring.enabled: true
xpack.monitoring.ui.container.elasticsearch.enabled: true
# UI language
#i18n.locale: "zh-CN"
Part 9: Create the Logstash configuration
# mkdir -p /usr/local/elk/logstash/config
# cd /usr/local/elk/logstash/config
# ll
total 24
-rw-r--r-- 1 root root 1833 Nov 10 15:25 jvm.options
-rw-r--r-- 1 root root 551 Nov 6 14:33 log4j2.properties
-rw-r--r-- 1 root root 342 Nov 6 14:33 logstash-sample.conf
-rw-r--r-- 1 root root 74 Nov 11 12:52 logstash.yml
-rw-r--r-- 1 root root 286 Nov 6 14:33 pipelines.yml
-rw-r--r-- 1 root root 1696 Nov 6 14:33 startup.options
# vi jvm.options
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
#-Duser.language=en
# Set the locale country
#-Duser.country=US
# Set the locale variant, if any
#-Duser.variant=
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
#-Djna.nosys=true
# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
# vi log4j2.properties
status = error
name = LogstashPropertiesConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
# vi logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
  }
}
# vi logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: es01:9200
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
# vi pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
path.config: "/usr/share/logstash/pipeline"
# vi startup.options
################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################
# Override Java location
#JAVACMD=/usr/bin/java
# Set a home directory
LS_HOME=/usr/share/logstash
# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash
# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
# Arguments to pass to java
LS_JAVA_OPTS=""
# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid
# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash
# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log
# Open file limit
LS_OPEN_FILES=16384
# Nice level
LS_NICE=19
# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM
# mkdir -p /usr/local/elk/logstash/config-dir
# cd /usr/local/elk/logstash/config-dir
# vi logstash.conf
input {
  tcp {
    port => 9601          # listening port
    mode => "server"      # run in server mode
    tags => ["tags"]
    codec => json_lines   # events arrive as newline-delimited JSON
  }
}
output {
  elasticsearch {
    hosts => "es01:9200"
    index => "%{[appname]}-%{+YYYY.MM.dd}"   # index named after the application and date
    user => "elastic"      # Elasticsearch username
    password => "123456"   # Elasticsearch password
    sniffing => false
  }
  stdout {
    codec => rubydebug
  }
}
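Once the stack is up, this TCP input can be exercised by writing one newline-delimited JSON event to the published port (8002 on the host, per docker-compose.yml). A sketch — the `make_log_event` helper name is ours, and the `appname` field matches the index pattern above:

```shell
# make_log_event APPNAME MESSAGE -> prints a single json_lines-compatible event.
make_log_event() {
  printf '{"appname":"%s","message":"%s"}\n' "$1" "$2"
}
# Send one event to the Logstash TCP input (requires nc / netcat):
#   make_log_event demo "hello elk" | nc 192.101.11.158 8002
```

The event should then appear in an index named demo-YYYY.MM.dd.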
Part 10: Create the Nginx configuration
# mkdir -p /usr/local/elk/nginx/conf/
# cd /usr/local/elk/nginx/conf/
# vi nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    #include /etc/nginx/conf.d/*.conf;
    upstream kibana_web {
        server kibana:5601 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        location / {
            root html;
            index index.html index.htm;
            proxy_set_header Host $host;
            proxy_pass http://kibana_web;
            # auth_basic "The Kibana Monitor Center";
            # auth_basic_user_file /etc/nginx/htpasswd;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note:
# Create an htpasswd entry for user admin with password 123456
# printf "admin:$(openssl passwd -crypt 123456)\n" >>/usr/local/elk/nginx/conf/htpasswd
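If basic auth is enabled (by uncommenting the auth_basic lines in nginx.conf), the htpasswd file can be checked for the expected entry. A minimal sketch with our own helper name:

```shell
# htpasswd_has_user USER FILE -> succeeds if FILE contains an entry for USER.
htpasswd_has_user() {
  grep -q "^$1:" "$2"
}
```

Usage: `htpasswd_has_user admin /usr/local/elk/nginx/conf/htpasswd && echo "admin entry present"`.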
Part 11: Start ELK
# docker-compose up -d
# docker ps -a
# docker logs es01
{"type": "server", "timestamp": "2020-11-11T07:06:57,163Z", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "elasticsearch-cluster", "node.name": "es-node1", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=/172.26.0.2:9300, remoteAddress=/172.26.0.4:37480}], closing connection", "cluster.uuid": "QgqJ-_tJQIWvEdJsI8L6EQ", "node.id": "gkoszyl6R36fhfqyTsAQmQ" ,
"stacktrace": ["io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]",
"at java.lang.Thread.run(Thread.java:832) [?:?]",
"Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]",
"at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:356) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:312) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:303) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:955) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:944) ~[?:?]",
"at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:440) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.goServerHello(ClientHello.java:1252) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.consume(ClientHello.java:1188) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:851) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:812) ~[?:?]",
"at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396) ~[?:?]",
"at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1267) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1254) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:691) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1199) ~[?:?]",
"at io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1542) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1556) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1440) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"... 16 more"] }
..........................
Note: the javax.net.ssl.SSLHandshakeException: No available authentication scheme error is resolved by configuring SSL between the nodes, as described below.
Part 12: Test
http://192.101.11.158:9200/_cluster/health?pretty
http://192.101.11.158:9200/_cat/nodes?v&pretty
http://192.101.11.158:9200/_cat/health?v
http://192.101.11.158:9200/_cat/nodes ## list the cluster nodes
http://192.101.11.158:5601
http://192.101.11.158:8003 ## via the Nginx proxy
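Because X-Pack security is enabled, unauthenticated requests to port 9200 return 401, so the checks above need credentials when run from the command line. A sketch building the authenticated calls (the `es_curl` helper name is ours; host and password come from this guide's setup):

```shell
# es_curl PATH -> prints an authenticated curl command for the cluster.
ES_HOST=192.101.11.158
ES_AUTH="elastic:123456"
es_curl() {
  echo "curl -s -u ${ES_AUTH} http://${ES_HOST}:9200$1"
}
# e.g.:
#   eval "$(es_curl /_cluster/health?pretty)"
#   eval "$(es_curl '/_cat/nodes?v')"
```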
Part 13: Configure inter-node SSL
1. Create a CA certificate:
# docker exec -it es01 /bin/bash
# ./bin/elasticsearch-certutil ca -v
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]: # name for the CA file; keep the default and press Enter
Enter password for elastic-stack-ca.p12 : # certificate password; left empty here, press Enter
# ll
total 576
-rw-r--r-- 1 elasticsearch root 13675 Sep 23 08:43 LICENSE.txt
-rw-r--r-- 1 elasticsearch root 544318 Sep 23 08:47 NOTICE.txt
-rw-r--r-- 1 elasticsearch root 7007 Sep 23 08:43 README.asciidoc
drwxr-xr-x 2 elasticsearch root 4096 Sep 23 08:50 bin
drwxrwxr-x 1 elasticsearch root 36 Nov 11 13:50 config
drwxrwxrwx 3 root root 19 Nov 10 16:57 data
-rw------- 1 root root 2527 Nov 12 13:50 elastic-stack-ca.p12
drwxr-xr-x 1 elasticsearch root 17 Sep 23 08:48 jdk
drwxr-xr-x 3 elasticsearch root 4096 Sep 23 08:48 lib
drwxrwxrwx 2 root root 4096 Nov 11 13:50 logs
drwxr-xr-x 51 elasticsearch root 4096 Sep 23 08:49 modules
drwxr-xr-x 2 elasticsearch root 6 Sep 23 08:47 plugins
2. Create the node certificate
# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires an SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* All certificates generated by this tool will be signed by a certificate authority (CA).
* The tool can automatically generate a new CA for you, or you can provide your own with the
-ca or -ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) : # CA certificate password; none was set, press Enter
Please enter the desired output file [elastic-certificates.p12]: # name for the certificate file; keep the default, press Enter
Enter password for elastic-certificates.p12 : # certificate password; left empty here, press Enter
Certificates written to /usr/share/elasticsearch/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
# ll
total 580
-rw-r--r-- 1 elasticsearch root 13675 Sep 23 08:43 LICENSE.txt
-rw-r--r-- 1 elasticsearch root 544318 Sep 23 08:47 NOTICE.txt
-rw-r--r-- 1 elasticsearch root 7007 Sep 23 08:43 README.asciidoc
drwxr-xr-x 2 elasticsearch root 4096 Sep 23 08:50 bin
drwxrwxr-x 1 elasticsearch root 36 Nov 11 13:50 config
drwxrwxrwx 3 root root 19 Nov 10 16:57 data
-rw------- 1 root root 3443 Nov 12 13:53 elastic-certificates.p12
-rw------- 1 root root 2527 Nov 12 13:50 elastic-stack-ca.p12
drwxr-xr-x 1 elasticsearch root 17 Sep 23 08:48 jdk
drwxr-xr-x 3 elasticsearch root 4096 Sep 23 08:48 lib
drwxrwxrwx 2 root root 4096 Nov 11 13:50 logs
drwxr-xr-x 51 elasticsearch root 4096 Sep 23 08:49 modules
drwxr-xr-x 2 elasticsearch root 6 Sep 23 08:47 plugins
This command produces a PKCS#12 keystore named elastic-certificates.p12 containing the node certificate, its private key, and the CA certificate.
By default the generated certificate contains no hostname information, so it can be used on any node, but you must then configure Elasticsearch to disable hostname verification.
# mv elastic-*.p12 ./config/
3. Configure the ES nodes to use this certificate:
Exit the es01 container
# exit
Copy the certificate to the host
# mkdir -p /usr/local/elk/config
# docker cp es01:/usr/share/elasticsearch/config/elastic-certificates.p12 /usr/local/elk/config
Adjust the certificate permissions
# chmod 777 /usr/local/elk/config/elastic-certificates.p12
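Before restarting the stack, it may help to confirm the keystore was actually copied and is readable; a trivial sketch with our own helper name:

```shell
# cert_ready FILE -> succeeds only if FILE exists and is readable, i.e. the
# elasticsearch user inside each container will be able to open the keystore.
cert_ready() {
  [ -f "$1" ] && [ -r "$1" ]
}
# cert_ready /usr/local/elk/config/elastic-certificates.p12 && echo "keystore ready"
```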
Update docker-compose.yml. Note that every node needs this configuration; the settings below use the PKCS#12 certificate:
    environment:
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
    volumes:
      - ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
# vi docker-compose.yml
version: '3'
networks:
  elk:
services:
  es01:
    image: elasticsearch:7.9.2
    container_name: es01
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node1
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es01/data:/usr/share/elasticsearch/data
      - ./es01/logs:/usr/share/elasticsearch/logs
      - ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ports:
      - 9200:9200
    networks:
      - elk
  es02:
    image: elasticsearch:7.9.2
    container_name: es02
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node2
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es02/data:/usr/share/elasticsearch/data
      - ./es02/logs:/usr/share/elasticsearch/logs
      - ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    depends_on:
      - es01
    networks:
      - elk
  es03:
    image: elasticsearch:7.9.2
    container_name: es03
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=es-node3
      - cluster.initial_master_nodes=es01,es02,es03
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=Authorization
      - xpack.security.enabled=true
      - xpack.monitoring.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=es01,es02,es03"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /etc/localtime:/etc/localtime
      - ./es03/data:/usr/share/elasticsearch/data
      - ./es03/logs:/usr/share/elasticsearch/logs
      - ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    depends_on:
      - es02
    networks:
      - elk
  logstash:
    image: logstash:7.9.2
    container_name: logstash
    restart: always
    networks:
      - elk
    ports:
      - "8002:9601"
    volumes:
      - /etc/localtime:/etc/localtime
      - ./logstash/config-dir:/config-dir
      - ./logstash/config:/opt/logstash/config
    command: logstash -f /config-dir
    depends_on:
      - es03
  kibana:
    image: kibana:7.9.2
    container_name: kibana
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - elk
    ports:
      - "5601:5601"
    depends_on:
      - es03
  nginx:
    image: nginx:1.17.2
    container_name: nginx
    environment:
      - TZ=Asia/Shanghai
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/htpasswd:/etc/nginx/htpasswd
    networks:
      - elk
    ports:
      - "8003:80"
    depends_on:
      - kibana
4. Restart the ELK stack:
# docker-compose up -d
5. Verify again:
Check cluster status:
http://192.101.11.158:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100
}
http://192.101.11.158:9200/_cat/nodes?v&pretty
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.26.0.4 68 61 83 4.54 4.76 4.94 dilmrt - es-node3
172.26.0.2 20 61 83 4.54 4.76 4.94 dilmrt * es-node1
172.26.0.3 72 61 83 4.54 4.76 4.94 dilmrt - es-node2
http://192.101.11.158:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1605170896 08:48:16 elasticsearch-cluster green 3 3 20 10 0 0 0 0 - 100.0%
http://192.101.11.158:9200/_cat/nodes
172.26.0.4 20 61 84 4.73 4.78 4.94 dilmrt - es-node3
172.26.0.2 39 61 84 4.73 4.78 4.94 dilmrt * es-node1
172.26.0.3 36 61 84 4.73 4.78 4.94 dilmrt - es-node2
Check index status:
http://192.101.11.158:9200/_cat/indices?v
Check Kibana status:
http://192.101.11.158:5601/status
Access Kibana:
http://192.101.11.158:5601
http://192.101.11.158:8003 ## via the Nginx proxy
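Since xpack.security is enabled, the HTTP endpoints above require credentials — a browser will prompt for them, and on the command line they can be passed to curl. A small sketch using the example address and password from this setup (adjust to your own values):

```shell
# Base URL and credentials for this cluster (example values from this setup)
ES="http://192.101.11.158:9200"
AUTH="elastic:123456"
echo "checking $ES"
# Without credentials these endpoints return HTTP 401. Uncomment to query the live cluster:
# curl -u "$AUTH" "$ES/_cluster/health?pretty"
# curl -u "$AUTH" "$ES/_cat/nodes?v"
```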
Appendix: docker-compose reference
Check the version:
# docker-compose --version
Stop containers:
# docker-compose stop
Build and start containers:
# docker-compose -f docker-compose.yml up -d    # -d runs the service containers in the background; note: if the compose file is not named docker-compose.yml, it must be specified with -f
Start a single service:
# docker-compose -f docker-compose.yml up -d [service name]
# Show help
docker-compose -h
# Start all containers; -d starts and runs them in the background
docker-compose up -d
# Stop and remove all containers and associated networks
docker-compose down
# View service container output
docker-compose logs
# List all containers currently in the project
docker-compose ps
# Build (or rebuild) the project's service containers. Once built, each container gets a tag, e.g. web_db for the db container of a web project. You can rerun docker-compose build in the project directory at any time to rebuild.
docker-compose build
# Pull the images the services depend on
docker-compose pull
# Restart the project's services
docker-compose restart
# Remove all (stopped) service containers. Running docker-compose stop first is recommended.
docker-compose rm
# Run a one-off command on a specified service.
docker-compose run ubuntu ping docker.com
# Set the number of containers per service, via service=num arguments
docker-compose scale web=3 db=2
# Start existing service containers.
docker-compose start
# Stop running containers without removing them; they can be started again with docker-compose start.
docker-compose stop
Check container resource usage:
docker stats $(docker ps --format={{.Names}})
Problems and fixes:
Problem 1:
# docker logs es01
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2020-11-09T06:02:22,846][INFO ][o.e.e.NodeEnvironment ] [es01] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [37.3gb], net total_space [97.6gb], types [xfs]
[2020-11-09T06:02:22,849][INFO ][o.e.e.NodeEnvironment ] [es01] heap size [4.9gb], compressed ordinary object pointers [true]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] node name [es01], node ID [u-CQVKHyQFi-cc7VrRP4EQ]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] version[6.8.0], pid[1], build[default/docker/65b6179/2019-05-15T20:06:13.172855Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-474425971056552043, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms5g, -Xmx5g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [aggs-matrix-stats]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [analysis-common]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-common]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-geoip]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-user-agent]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-expression]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-mustache]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-painless]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [mapper-extras]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [parent-join]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [percolator]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [rank-eval]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [reindex]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [repository-url]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [transport-netty4]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [tribe]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ccr]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-core]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-deprecation]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-graph]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ilm]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-logstash]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ml]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-monitoring]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-rollup]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-security]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-sql]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-upgrade]
[2020-11-09T06:02:25,050][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-watcher]
[2020-11-09T06:02:25,050][INFO ][o.e.p.PluginsService ] [es01] no plugins loaded
[2020-11-09T06:02:28,555][INFO ][o.e.x.s.a.s.FileRolesStore] [es01] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2020-11-09T06:02:29,391][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [es01] [controller/87] [Main.cc@109] controller (64 bit): Version 6.8.0 (Build e6cf25e2acc5ec) Copyright (c) 2019 Elasticsearch BV
[2020-11-09T06:02:30,119][INFO ][o.e.d.DiscoveryModule ] [es01] using discovery type [zen] and host providers [settings]
[2020-11-09T06:02:30,881][INFO ][o.e.n.Node ] [es01] initialized
[2020-11-09T06:02:30,881][INFO ][o.e.n.Node ] [es01] starting ...
[2020-11-09T06:02:31,010][INFO ][o.e.t.TransportService ] [es01] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2020-11-09T06:02:31,026][INFO ][o.e.b.BootstrapChecks ] [es01] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2020-11-09T06:02:31,033][INFO ][o.e.n.Node ] [es01] stopping ...
[2020-11-09T06:02:31,050][INFO ][o.e.n.Node ] [es01] stopped
[2020-11-09T06:02:31,050][INFO ][o.e.n.Node ] [es01] closing ...
[2020-11-09T06:02:31,064][INFO ][o.e.n.Node ] [es01] closed
[2020-11-09T06:02:31,066][INFO ][o.e.x.m.p.NativeController] [es01] Native controller process has stopped - no new native processes can be started
Fix:
Switch to the root user and raise the kernel's memory-map area limit:
sysctl -w vm.max_map_count=262144
Check the result:
sysctl -a|grep vm.max_map_count
Output:
vm.max_map_count = 262144
This change is lost when the machine reboots, so:
Permanent fix: append the following line to /etc/sysctl.conf
vm.max_map_count=262144
or
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
This makes the change permanent.
Problem 2:
# docker logs es01
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 3579183104 bytes for committing reserved memory.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000072aaa0000, 3579183104, 0) failed; error='Not enough space' (errno=12)
# An error report file with more information is saved as:
# /usr/share/elasticsearch/hs_err_pid90.log
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 5368709120 bytes for committing reserved memory.
# An error report file with more information is saved as:
# logs/hs_err_pid172.log
error:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000680000000, 5368709120, 0) failed; error='Not enough space' (errno=12)
at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Fix:
Change
- "ES_JAVA_OPTS=-Xms5g -Xmx5g"
to:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
Problem 3:
# docker logs logstash
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-11-11T06:26:42,704][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-11-11T06:26:42,725][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: The setting `xpack.monitoring.elasticsearch.url` has been deprecated and removed from Logstash; please update your configuration to use `xpack.monitoring.elasticsearch.hosts` instead.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:691:in `set'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:119:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:138:in `block in merge'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:138:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:196:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:312:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:268:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:88:in `<main>'"]}
[2020-11-11T06:26:42,734][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby11414559284155129184jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Fix:
# vi logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.url: http://es01:9200
xpack.monitoring.elasticsearch.hosts: es01:9200
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
How to restrict a Kibana user to viewing only specific dashboards:
1. Create a role under Stack Management --> Security --> Roles:
1.1 Fill in the role name.
1.2 Select the indices the user may see, with the read privilege.
1.3 For Kibana privileges, set Discover to Read and everything else to None.
2. Create a user under Stack Management --> Security --> Users:
2.1 Fill in the username and password.
2.2 Fill in the full name.
2.3 Assign the role created above.
When this user logs in to Kibana, the left navigation shows only Dashboard, with no edit buttons; the user sees only dashboards built on the permitted index, and dashboards on other indices render but display no data.
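The same role can also be created without the UI through Kibana's role-management API (PUT /api/security/role/&lt;name&gt;, sent to Kibana on port 5601 with a kbn-xsrf header). A hedged sketch of one possible request body — the role name, index pattern, and feature privileges below are illustrative, not taken from this setup:

```
PUT /api/security/role/tongji_readonly
{
  "elasticsearch": {
    "indices": [
      { "names": ["tongji-service-*"], "privileges": ["read"] }
    ]
  },
  "kibana": [
    {
      "base": [],
      "feature": { "discover": ["read"], "dashboard": ["read"] },
      "spaces": ["default"]
    }
  ]
}
```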
Elasticsearch: index lifecycle management (ILM)
We want to roll an index over once it reaches 50GB or was created 5 days ago, and delete it 10 days later.
Log in to Kibana, open Dev Tools, and run:
PUT _ilm/policy/tongji-service-policy
{
"policy": {
"phases": {
"hot":{
"actions":{
"rollover":{
"max_size":"50GB",
"max_age":"5d"
}
}
},
"delete":{
"min_age":"10d",
"actions":{
"delete":{}
}
}
}
}
}
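The hot-phase trigger above is an OR of the two conditions. Purely as an illustration (this is not how ES evaluates it internally), the decision can be sketched as:

```shell
# Illustrative only: an index rolls over when EITHER threshold is reached
should_rollover() {
  local size_gb=$1 age_days=$2
  if [ "$size_gb" -ge 50 ] || [ "$age_days" -ge 5 ]; then
    echo yes
  else
    echo no
  fi
}
should_rollover 60 1   # size threshold reached
should_rollover 10 6   # age threshold reached
should_rollover 10 1   # neither
```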
GET _ilm/policy/tongji-service-policy
PUT _template/tongji-service-template
{
"index_patterns":["tongji-service-*"],
"settings":{
"number_of_shards":1,
"number_of_replicas":1,
"index.lifecycle.name":"tongji-service-policy",
"index.lifecycle.rollover_alias":"tongji-service"
}
}
GET _template/tongji-service-template
Other useful queries:
GET tongji-service-*/_ilm/explain
GET tongji-service-*/_count
GET _cat/shards/tongji-service-*
GET _cat/indices/tongji-service-*
GET _cat/indices/tongji-service-*?v
Under Management --> Elasticsearch --> Index Lifecycle Policies you can view the created policies.
Under Management --> Elasticsearch --> Index Management, click an index to see its settings, including the attached lifecycle policy.
I. Prepare the images:
docker pull elasticsearch:7.9.2
docker pull logstash:7.9.2
docker pull kibana:7.9.2
docker pull mobz/elasticsearch-head:5
docker pull openjdk:15.0.1
docker pull nginx:1.17.2
docker save -o /opt/images/elasticsearch-7.9.2.tar elasticsearch:7.9.2
docker save -o /opt/images/logstash-7.9.2.tar logstash:7.9.2
docker save -o /opt/images/kibana-7.9.2.tar kibana:7.9.2
docker save -o /opt/images/elasticsearch-head-5.tar mobz/elasticsearch-head:5
docker save -o /opt/images/openjdk-15.0.1.tar openjdk:15.0.1
docker save -o /opt/images/nginx-1.17.2.tar nginx:1.17.2
Download the images locally, then upload them to the target server and load them:
docker load < elasticsearch-7.9.2.tar
docker load < logstash-7.9.2.tar
docker load < kibana-7.9.2.tar
docker load < elasticsearch-head-5.tar
docker load < openjdk-15.0.1.tar
docker load < nginx-1.17.2.tar
Download jdk-15.0.1_linux-x64_bin.tar.gz and install Java 15.0.1 on the server:
# java -version
java version "15.0.1" 2020-10-20
Java(TM) SE Runtime Environment (build 15.0.1+9-18)
Java HotSpot(TM) 64-Bit Server VM (build 15.0.1+9-18, mixed mode, sharing)
II. Install Docker
1. Update yum packages
# yum update
2. Check whether Docker is already installed
# rpm -qa|grep docker
3. Remove old Docker versions
# yum remove docker docker-common docker-selinux docker-engine
4. Install prerequisite packages
# yum install -y yum-utils device-mapper-persistent-data lvm2
5. Add the Docker yum repository
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
6. Install Docker
# yum -y install docker-ce
7. Start Docker
# systemctl start docker
8. Check the Docker version
# docker version
III. Install docker-compose
1. Download the desired docker-compose release
Download the desired version from https://github.com/docker/compose/releases
Upload it to /usr/local/bin/ on the server and rename it docker-compose
2. Make the binary executable
# sudo chmod +x /usr/local/bin/docker-compose
# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
3. Verify the installation
# docker-compose --version
docker-compose version 1.27.4, build 1719ceb
IV. Create docker-compose.yml
docker-compose.yml has three top-level sections: version, services, and networks
# mkdir -p /usr/local/elk
# cd /usr/local/elk
# vi docker-compose.yml
version: '3'
networks:
elk:
services:
es01:
image: elasticsearch:7.9.2
container_name: es01
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node1
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es01/data:/usr/share/elasticsearch/data
- ./es01/logs:/usr/share/elasticsearch/logs
ports:
- 9200:9200
networks:
- elk
es02:
image: elasticsearch:7.9.2
container_name: es02
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node2
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es02/data:/usr/share/elasticsearch/data
- ./es02/logs:/usr/share/elasticsearch/logs
depends_on:
- es01
networks:
- elk
es03:
image: elasticsearch:7.9.2
container_name: es03
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node3
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es03/data:/usr/share/elasticsearch/data
- ./es03/logs:/usr/share/elasticsearch/logs
depends_on:
- es02
networks:
- elk
logstash:
image: logstash:7.9.2
container_name: logstash
restart: always
networks:
- elk
ports:
- "8002:9601"
volumes:
- /etc/localtime:/etc/localtime
- ./logstash/config-dir:/config-dir
- ./logstash/config:/opt/logstash/config
command: logstash -f /config-dir
depends_on:
- es03
kibana:
image: kibana:7.9.2
container_name: kibana
restart: always
volumes:
- /etc/localtime:/etc/localtime
- ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
networks:
- elk
ports:
- "5601:5601"
depends_on:
- es03
nginx:
image: nginx:1.17.2
container_name: nginx
environment:
- TZ=Asia/Shanghai
restart: always
volumes:
- /etc/localtime:/etc/localtime
- ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
# - ./nginx/conf/htpasswd:/etc/nginx/htpasswd
networks:
- elk
ports:
- "8003:80"
depends_on:
- kibana
V. Create the es.yml configuration
# cd /usr/local/elk/
# vi es.yml
# Cluster name
cluster.name: elasticsearch-cluster
# Node name
node.name: es-node1
bootstrap.memory_lock: true
#cluster.initial_master_nodes: ["es01","es02","es03"]
# Bind address; 0.0.0.0 listens on all interfaces of this node
network.host: 0.0.0.0
# Address other nodes use to reach this node; auto-detected if unset, must be a real IP of this host
network.publish_host: 192.101.11.158
# HTTP port for external access, default 9200
http.port: 9200
# TCP port for inter-node communication, default 9300
transport.tcp.port: 9300
# Whether to allow cross-origin requests, default false
http.cors.enabled: true
# Allowed origins when CORS is enabled; the default * allows all. To allow only certain sites, use a regex, e.g. local addresses only: /https?:\/\/localhost(:[0-9]+)?/
http.cors.allow-origin: "*"
# Whether this node is master-eligible
node.master: true
# Whether this node stores data
node.data: true
# ip:port of all cluster nodes
discovery.seed_hosts: ["192.101.11.158:9300"]
# How many master-eligible nodes must participate in a master election, to prevent split brain
discovery.zen.minimum_master_nodes: 1
# Headers allowed for cross-origin requests; default X-Requested-With,Content-Type,Content-Length
http.cors.allow-headers: Authorization
# Enable X-Pack authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
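Note that discovery.zen.minimum_master_nodes is a 6.x setting; Elasticsearch 7.x ignores it and manages the election quorum itself (cluster.initial_master_nodes only seeds the very first election). The classic rule it encoded — a majority of master-eligible nodes — can be sketched as:

```shell
# Majority quorum: more than half of the master-eligible nodes
quorum() {
  echo $(( $1 / 2 + 1 ))
}
quorum 3   # 3-node cluster -> 2 (the value used in the compose file above)
quorum 1   # single node    -> 1
```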
VI. Create the ES mount directories
# mkdir -p /usr/local/elk/es01/data
# mkdir -p /usr/local/elk/es02/data
# mkdir -p /usr/local/elk/es03/data
# mkdir -p /usr/local/elk/es01/logs
# mkdir -p /usr/local/elk/es02/logs
# mkdir -p /usr/local/elk/es03/logs
# chmod -R 777 /usr/local/elk/es01
# chmod -R 777 /usr/local/elk/es02
# chmod -R 777 /usr/local/elk/es03
VII. Set the ES user passwords
# docker run -d --name elasticSearch -p 9200:9200 -p 9300:9300 -v /usr/local/elk/es01/data:/usr/share/elasticsearch/data -v /usr/local/elk/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -e "discovery.type=single-node" elasticsearch:7.9.2
## Note: discovery.type=single-node
# docker ps
# docker exec -it elasticSearch /bin/bash
#### ./bin/elasticsearch-setup-passwords auto  ## generate random passwords automatically
# ./bin/elasticsearch-setup-passwords interactive  ## set passwords interactively
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Note: enter the chosen password (123456 in this setup)
# docker stop elasticSearch
# docker rm elasticSearch
VIII. Create the Kibana configuration
# mkdir -p /usr/local/elk/kibana
# chmod -R 777 /usr/local/elk/kibana
# cd /usr/local/elk/kibana
# vi kibana.yml
server.name: kibana
# Kibana bind address; 0.0.0.0 listens on all IPs
server.host: "0.0.0.0"
# URL Kibana uses to reach Elasticsearch
elasticsearch.hosts: [ "http://es01:9200" ]
elasticsearch.username: 'kibana'
elasticsearch.password: '123456'
# Show the login page
xpack.monitoring.enabled: true
xpack.monitoring.ui.container.elasticsearch.enabled: true
# UI language
#i18n.locale: "zh-CN"
IX. Create the Logstash configuration
# mkdir -p /usr/local/elk/logstash/config
# cd /usr/local/elk/logstash/config
# ll
total 24
-rw-r--r-- 1 root root 1833 Nov 10 15:25 jvm.options
-rw-r--r-- 1 root root 551 Nov 6 14:33 log4j2.properties
-rw-r--r-- 1 root root 342 Nov 6 14:33 logstash-sample.conf
-rw-r--r-- 1 root root 74 Nov 11 12:52 logstash.yml
-rw-r--r-- 1 root root 286 Nov 6 14:33 pipelines.yml
-rw-r--r-- 1 root root 1696 Nov 6 14:33 startup.options
# vi jvm.options
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
#-Duser.language=en
# Set the locale country
#-Duser.country=US
# Set the locale variant, if any
#-Duser.variant=
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
#-Djna.nosys=true
# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
# vi log4j2.properties
status = error
name = LogstashPropertiesConfig
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
# vi logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
user => "elastic"
password => "123456"
}
}
# vi logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: es01:9200
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
# vi pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
path.config: "/usr/share/logstash/pipeline"
# vi startup.options
################################################################################
# These settings are ONLY used by $LS_HOME/bin/system-install to create a custom
# startup script for Logstash and is not used by Logstash itself. It should
# automagically use the init system (systemd, upstart, sysv, etc.) that your
# Linux distribution uses.
#
# After changing anything here, you need to re-run $LS_HOME/bin/system-install
# as root to push the changes to the init script.
################################################################################
# Override Java location
#JAVACMD=/usr/bin/java
# Set a home directory
LS_HOME=/usr/share/logstash
# logstash settings directory, the path which contains logstash.yml
LS_SETTINGS_DIR=/etc/logstash
# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
# Arguments to pass to java
LS_JAVA_OPTS=""
# pidfiles aren't used the same way for upstart and systemd; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid
# user and group id to be invoked as
LS_USER=logstash
LS_GROUP=logstash
# Enable GC logging by uncommenting the appropriate lines in the GC logging
# section in jvm.options
LS_GC_LOG_FILE=/var/log/logstash/gc.log
# Open file limit
LS_OPEN_FILES=16384
# Nice level
LS_NICE=19
# Change these to have the init script named and described differently
# This is useful when running multiple instances of Logstash on the same
# physical box or vm
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
# If you need to run a command or script before launching Logstash, put it
# between the lines beginning with `read` and `EOM`, and uncomment those lines.
###
## read -r -d '' PRESTART << EOM
## EOM
# mkdir -p /usr/local/elk/logstash/config-dir
# cd /usr/local/elk/logstash/config-dir
# vi logstash.conf
input {
    tcp {
        port => 9601              # listening port
        mode => "server"          # run in server mode
        tags => ["tags"]
        codec => json_lines       # transfer data as newline-delimited JSON
    }
}
output {
    elasticsearch {
        hosts => "es01:9200"
        index => "%{[appname]}-%{+YYYY.MM.dd}"    # index named by application and date
        user => "elastic"         # ES account username
        password => "123456"      # ES account password
        sniffing => false
    }
    stdout {
        codec => rubydebug
    }
}
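Any application can feed this pipeline by writing newline-delimited JSON to the TCP input (host port 8002 maps to 9601 in the container). A minimal sketch — the appname value below is illustrative and simply drives the index pattern in the output above:

```shell
# Build one json_lines event; the appname field determines the index name
EVENT='{"appname":"demo-service","level":"INFO","message":"hello elk"}'
echo "$EVENT"
# To actually ship it (example host/port from this setup):
# echo "$EVENT" | nc 192.101.11.158 8002
```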
X. Create the nginx configuration
# mkdir -p /usr/local/elk/nginx/conf/
# cd /usr/local/elk/nginx/conf/
# vi nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
#include /etc/nginx/conf.d/*.conf;
upstream kibana_web {
server kibana:5601 weight=1 max_fails=2 fail_timeout=30s;
}
server {
location / {
root html;
index index.html index.htm;
proxy_set_header Host $host;
proxy_pass http://kibana_web;
# auth_basic "The Kibana Monitor Center";
# auth_basic_user_file /etc/nginx/htpasswd;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Note:
# Create an htpasswd entry with username admin and password 123456
# printf "admin:$(openssl passwd -crypt 123456)\n" >>/usr/local/elk/nginx/conf/htpasswd
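When auth_basic is enabled, nginx checks the Authorization header against the htpasswd file; that header is just "Basic " plus base64 of user:password. A quick sketch of what the proxy receives (the address in the curl example is this setup's):

```shell
# The Basic auth token nginx receives for admin/123456
TOKEN=$(printf 'admin:123456' | base64)
echo "Authorization: Basic $TOKEN"
# Equivalent authenticated request through the proxy:
# curl -u admin:123456 http://192.101.11.158:8003/
```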
XI. Start the ELK stack
# docker-compose up -d
# docker ps -a
# docker logs es01
{"type": "server", "timestamp": "2020-11-11T07:06:57,163Z", "level": "WARN", "component": "o.e.t.TcpTransport", "cluster.name": "elasticsearch-cluster", "node.name": "es-node1", "message": "exception caught on transport layer [Netty4TcpChannel{localAddress=/172.26.0.2:9300, remoteAddress=/172.26.0.4:37480}], closing connection", "cluster.uuid": "QgqJ-_tJQIWvEdJsI8L6EQ", "node.id": "gkoszyl6R36fhfqyTsAQmQ" ,
"stacktrace": ["io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]",
"at java.lang.Thread.run(Thread.java:832) [?:?]",
"Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme",
"at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]",
"at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:356) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:312) ~[?:?]",
"at sun.security.ssl.TransportContext.fatal(TransportContext.java:303) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:955) ~[?:?]",
"at sun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:944) ~[?:?]",
"at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:440) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.goServerHello(ClientHello.java:1252) ~[?:?]",
"at sun.security.ssl.ClientHello$T13ClientHelloConsumer.consume(ClientHello.java:1188) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:851) ~[?:?]",
"at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:812) ~[?:?]",
"at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396) ~[?:?]",
"at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1267) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1254) ~[?:?]",
"at java.security.AccessController.doPrivileged(AccessController.java:691) ~[?:?]",
"at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1199) ~[?:?]",
"at io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1542) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1556) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1440) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1267) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]",
"... 16 more"] }
(remaining log output truncated)
Note: javax.net.ssl.SSLHandshakeException: No available authentication scheme — this is resolved by configuring inter-node SSL, as described in section 13.
12. Testing
http://192.101.11.158:9200/_cluster/health?pretty
http://192.101.11.158:9200/_cat/nodes?v&pretty
http://192.101.11.158:9200/_cat/health?v
http://192.101.11.158:9200/_cat/nodes ## list cluster nodes
http://192.101.11.158:5601
http://192.101.11.158:8003 ## Nginx proxy
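The `_cluster/health` endpoint above can also be checked programmatically. A minimal sketch using only the standard library (the host, credentials, and canned response below are example values for this setup):

```python
import base64
import json
import urllib.request

def parse_health(body: str) -> tuple:
    """Return (status, ok) from a _cluster/health JSON body; ok means status is green."""
    doc = json.loads(body)
    return doc["status"], doc["status"] == "green"

def fetch_health(base_url: str, user: str, password: str) -> tuple:
    """GET /_cluster/health with basic auth and evaluate the status."""
    req = urllib.request.Request(base_url + "/_cluster/health")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return parse_health(resp.read().decode())

if __name__ == "__main__":
    # Offline demonstration against a canned response:
    sample = '{"cluster_name":"elasticsearch-cluster","status":"green","number_of_nodes":3}'
    print(parse_health(sample))  # ('green', True)
```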
13. Configure inter-node SSL
1. Create the CA certificate:
# docker exec -it es01 /bin/bash
# ./bin/elasticsearch-certutil ca -v
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]: # output file name — keep the default and press Enter
Enter password for elastic-stack-ca.p12 : # certificate password — leave empty and press Enter
# ll
total 576
-rw-r--r-- 1 elasticsearch root 13675 Sep 23 08:43 LICENSE.txt
-rw-r--r-- 1 elasticsearch root 544318 Sep 23 08:47 NOTICE.txt
-rw-r--r-- 1 elasticsearch root 7007 Sep 23 08:43 README.asciidoc
drwxr-xr-x 2 elasticsearch root 4096 Sep 23 08:50 bin
drwxrwxr-x 1 elasticsearch root 36 Nov 11 13:50 config
drwxrwxrwx 3 root root 19 Nov 10 16:57 data
-rw------- 1 root root 2527 Nov 12 13:50 elastic-stack-ca.p12
drwxr-xr-x 1 elasticsearch root 17 Sep 23 08:48 jdk
drwxr-xr-x 3 elasticsearch root 4096 Sep 23 08:48 lib
drwxrwxrwx 2 root root 4096 Nov 11 13:50 logs
drwxr-xr-x 51 elasticsearch root 4096 Sep 23 08:49 modules
drwxr-xr-x 2 elasticsearch root 6 Sep 23 08:47 plugins
2. Create the node certificate:
# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires an SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* All certificates generated by this tool will be signed by a certificate authority (CA).
* The tool can automatically generate a new CA for you, or you can provide your own with the
-ca or -ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) : # CA certificate password — none was set, so press Enter
Please enter the desired output file [elastic-certificates.p12]: # output file name — keep the default and press Enter
Enter password for elastic-certificates.p12 : # certificate password — leave empty and press Enter
Certificates written to /usr/share/elasticsearch/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
# ll
total 580
-rw-r--r-- 1 elasticsearch root 13675 Sep 23 08:43 LICENSE.txt
-rw-r--r-- 1 elasticsearch root 544318 Sep 23 08:47 NOTICE.txt
-rw-r--r-- 1 elasticsearch root 7007 Sep 23 08:43 README.asciidoc
drwxr-xr-x 2 elasticsearch root 4096 Sep 23 08:50 bin
drwxrwxr-x 1 elasticsearch root 36 Nov 11 13:50 config
drwxrwxrwx 3 root root 19 Nov 10 16:57 data
-rw------- 1 root root 3443 Nov 12 13:53 elastic-certificates.p12
-rw------- 1 root root 2527 Nov 12 13:50 elastic-stack-ca.p12
drwxr-xr-x 1 elasticsearch root 17 Sep 23 08:48 jdk
drwxr-xr-x 3 elasticsearch root 4096 Sep 23 08:48 lib
drwxrwxrwx 2 root root 4096 Nov 11 13:50 logs
drwxr-xr-x 51 elasticsearch root 4096 Sep 23 08:49 modules
drwxr-xr-x 2 elasticsearch root 6 Sep 23 08:47 plugins
This command produces a PKCS#12 keystore named elastic-certificates.p12 containing the node certificate, its private key, and the CA certificate.
Certificates generated this way carry no hostname information, so they can be used on any node — but Elasticsearch must then be configured to skip hostname verification (verification_mode: certificate).
# mv elastic-*.p12 ./config/
3. Configure the ES nodes to use this certificate:
Exit the es01 container:
# exit
Copy the certificate to the host:
# mkdir -p /usr/local/elk/config
# docker cp es01:/usr/share/elasticsearch/config/elastic-certificates.p12 /usr/local/elk/config
Make the certificate readable by the container user (644 would be sufficient; 777 is used here for simplicity):
# chmod 777 /usr/local/elk/config/elastic-certificates.p12
Update docker-compose.yml. Note that every node needs this configuration; the settings below use the PKCS#12 certificate:
environment:
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
volumes:
- ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
(The discovery.zen.* settings kept in the compose file below are legacy 6.x options; on 7.x they only emit deprecation warnings — discovery.seed_hosts and cluster.initial_master_nodes are the 7.x equivalents.)
# vi docker-compose.yml
version: '3'
networks:
elk:
services:
es01:
image: elasticsearch:7.9.2
container_name: es01
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node1
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es01/data:/usr/share/elasticsearch/data
- ./es01/logs:/usr/share/elasticsearch/logs
- ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
ports:
- 9200:9200
networks:
- elk
es02:
image: elasticsearch:7.9.2
container_name: es02
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node2
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es02/data:/usr/share/elasticsearch/data
- ./es02/logs:/usr/share/elasticsearch/logs
- ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
depends_on:
- es01
networks:
- elk
es03:
image: elasticsearch:7.9.2
container_name: es03
environment:
- cluster.name=elasticsearch-cluster
- node.name=es-node3
- cluster.initial_master_nodes=es01,es02,es03
- http.cors.enabled=true
- http.cors.allow-origin=*
- http.cors.allow-headers=Authorization
- xpack.security.enabled=true
- xpack.monitoring.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "discovery.zen.ping.unicast.hosts=es01,es02,es03"
- "discovery.zen.minimum_master_nodes=2"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- /etc/localtime:/etc/localtime
- ./es03/data:/usr/share/elasticsearch/data
- ./es03/logs:/usr/share/elasticsearch/logs
- ./config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
depends_on:
- es02
networks:
- elk
logstash:
image: logstash:7.9.2
container_name: logstash
restart: always
networks:
- elk
ports:
- "8002:9601"
volumes:
- /etc/localtime:/etc/localtime
- ./logstash/config-dir:/config-dir
- ./logstash/config:/opt/logstash/config
command: logstash -f /config-dir
depends_on:
- es03
kibana:
image: kibana:7.9.2
container_name: kibana
restart: always
volumes:
- /etc/localtime:/etc/localtime
- ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
networks:
- elk
ports:
- "5601:5601"
depends_on:
- es03
nginx:
image: nginx:1.17.2
container_name: nginx
environment:
- TZ=Asia/Shanghai
restart: always
volumes:
- /etc/localtime:/etc/localtime
- ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/conf/htpasswd:/etc/nginx/htpasswd
networks:
- elk
ports:
- "8003:80"
depends_on:
- kibana
4. Start ELK again:
# docker-compose up -d
5. Test again:
Check cluster status:
http://192.101.11.158:9200/_cluster/health?pretty
{
  "cluster_name": "elasticsearch-cluster",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 10,
  "active_shards": 20,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
http://192.101.11.158:9200/_cat/nodes?v&pretty
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.26.0.4 68 61 83 4.54 4.76 4.94 dilmrt - es-node3
172.26.0.2 20 61 83 4.54 4.76 4.94 dilmrt * es-node1
172.26.0.3 72 61 83 4.54 4.76 4.94 dilmrt - es-node2
http://192.101.11.158:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1605170896 08:48:16 elasticsearch-cluster green 3 3 20 10 0 0 0 0 - 100.0%
http://192.101.11.158:9200/_cat/nodes
172.26.0.4 20 61 84 4.73 4.78 4.94 dilmrt - es-node3
172.26.0.2 39 61 84 4.73 4.78 4.94 dilmrt * es-node1
172.26.0.3 36 61 84 4.73 4.78 4.94 dilmrt - es-node2
Check index status:
http://192.101.11.158:9200/_cat/indices?v
Check Kibana status:
http://192.101.11.158:5601/status
Access Kibana:
http://192.101.11.158:5601
http://192.101.11.158:8003 ## Nginx proxy
Appendix: docker-compose quick reference
Show the version:
# docker-compose --version
Stop containers:
# docker-compose stop
Build and start containers:
# docker-compose -f docker-compose.yml up -d # -d runs the service containers in the background; if the file is not named docker-compose.yml, -f must point at it explicitly
Start a single service:
# docker-compose -f docker-compose.yml up -d [service name]
# Show help
docker-compose -h
# Start all containers; -d starts and runs them in the background
docker-compose up -d
# Stop and remove all containers and the associated networks
docker-compose down
# Show service container output
docker-compose logs
# List all containers in the current project
docker-compose ps
# Build (or rebuild) the project's service containers. Once built, a container is tagged with a name, e.g. web_db for a db container in a web project; docker-compose build can be rerun in the project directory at any time
docker-compose build
# Pull the images the services depend on
docker-compose pull
# Restart the project's services
docker-compose restart
# Remove all stopped service containers; run docker-compose stop first to stop them
docker-compose rm
# Run a one-off command in a service container
docker-compose run ubuntu ping docker.com
# Set the number of containers per service, via service=num arguments
docker-compose scale web=3 db=2
# Start existing service containers
docker-compose start
# Stop running containers without removing them; docker-compose start brings them back
docker-compose stop
Show container memory usage:
docker stats $(docker ps --format={{.Names}})
Problems and fixes:
Problem 1:
# docker logs es01
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2020-11-09T06:02:22,846][INFO ][o.e.e.NodeEnvironment ] [es01] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [37.3gb], net total_space [97.6gb], types [xfs]
[2020-11-09T06:02:22,849][INFO ][o.e.e.NodeEnvironment ] [es01] heap size [4.9gb], compressed ordinary object pointers [true]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] node name [es01], node ID [u-CQVKHyQFi-cc7VrRP4EQ]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] version[6.8.0], pid[1], build[default/docker/65b6179/2019-05-15T20:06:13.172855Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]
[2020-11-09T06:02:22,851][INFO ][o.e.n.Node ] [es01] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-474425971056552043, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms5g, -Xmx5g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [aggs-matrix-stats]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [analysis-common]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-common]
[2020-11-09T06:02:25,047][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-geoip]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [ingest-user-agent]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-expression]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-mustache]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [lang-painless]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [mapper-extras]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [parent-join]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [percolator]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [rank-eval]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [reindex]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [repository-url]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [transport-netty4]
[2020-11-09T06:02:25,048][INFO ][o.e.p.PluginsService ] [es01] loaded module [tribe]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ccr]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-core]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-deprecation]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-graph]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ilm]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-logstash]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-ml]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-monitoring]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-rollup]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-security]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-sql]
[2020-11-09T06:02:25,049][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-upgrade]
[2020-11-09T06:02:25,050][INFO ][o.e.p.PluginsService ] [es01] loaded module [x-pack-watcher]
[2020-11-09T06:02:25,050][INFO ][o.e.p.PluginsService ] [es01] no plugins loaded
[2020-11-09T06:02:28,555][INFO ][o.e.x.s.a.s.FileRolesStore] [es01] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2020-11-09T06:02:29,391][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [es01] [controller/87] [Main.cc@109] controller (64 bit): Version 6.8.0 (Build e6cf25e2acc5ec) Copyright (c) 2019 Elasticsearch BV
[2020-11-09T06:02:30,119][INFO ][o.e.d.DiscoveryModule ] [es01] using discovery type [zen] and host providers [settings]
[2020-11-09T06:02:30,881][INFO ][o.e.n.Node ] [es01] initialized
[2020-11-09T06:02:30,881][INFO ][o.e.n.Node ] [es01] starting ...
[2020-11-09T06:02:31,010][INFO ][o.e.t.TransportService ] [es01] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2020-11-09T06:02:31,026][INFO ][o.e.b.BootstrapChecks ] [es01] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2020-11-09T06:02:31,033][INFO ][o.e.n.Node ] [es01] stopping ...
[2020-11-09T06:02:31,050][INFO ][o.e.n.Node ] [es01] stopped
[2020-11-09T06:02:31,050][INFO ][o.e.n.Node ] [es01] closing ...
[2020-11-09T06:02:31,064][INFO ][o.e.n.Node ] [es01] closed
[2020-11-09T06:02:31,066][INFO ][o.e.x.m.p.NativeController] [es01] Native controller process has stopped - no new native processes can be started
Fix:
As root, raise the kernel's limit on memory map areas per process:
sysctl -w vm.max_map_count=262144
Verify:
sysctl -a | grep vm.max_map_count
Output:
vm.max_map_count = 262144
This change is lost on reboot. To make it permanent, append a line to /etc/sysctl.conf:
vm.max_map_count=262144
or:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
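The setting can also be verified from code. The sketch below reads /proc/sys/vm/max_map_count (Linux only) and compares it with the 262144 minimum that Elasticsearch's bootstrap checks enforce:

```python
from pathlib import Path

ES_MIN_MAP_COUNT = 262144  # minimum required by Elasticsearch's bootstrap checks

def map_count_ok(value: str, minimum: int = ES_MIN_MAP_COUNT) -> bool:
    """True if the kernel's vm.max_map_count is high enough for Elasticsearch."""
    return int(value.strip()) >= minimum

if __name__ == "__main__":
    proc = Path("/proc/sys/vm/max_map_count")
    if proc.exists():  # only present on Linux
        current = proc.read_text()
        print(f"vm.max_map_count = {current.strip()}, ok = {map_count_ok(current)}")
```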
Problem 2:
# docker logs es01
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 3579183104 bytes for committing reserved memory.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000072aaa0000, 3579183104, 0) failed; error='Not enough space' (errno=12)
# An error report file with more information is saved as:
# /usr/share/elasticsearch/hs_err_pid90.log
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 5368709120 bytes for committing reserved memory.
# An error report file with more information is saved as:
# logs/hs_err_pid172.log
error:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000680000000, 5368709120, 0) failed; error='Not enough space' (errno=12)
at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Fix:
Change
- "ES_JAVA_OPTS=-Xms5g -Xmx5g"
to:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
Problem 3:
# docker logs logstash
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-11-11T06:26:42,704][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-11-11T06:26:42,725][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: The setting `xpack.monitoring.elasticsearch.url` has been deprecated and removed from Logstash; please update your configuration to use `xpack.monitoring.elasticsearch.hosts` instead.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:691:in `set'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:119:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:138:in `block in merge'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:138:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:196:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:312:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:268:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:88:in `<main>'"]}
[2020-11-11T06:26:42,734][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby11414559284155129184jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Fix:
# vi logstash.yml
http.host: "0.0.0.0"
#xpack.monitoring.elasticsearch.url: http://es01:9200
xpack.monitoring.elasticsearch.hosts: es01:9200
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
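The fatal error above comes from a removed setting still being present in logstash.yml. A minimal sketch that scans such a file for the removed `xpack.monitoring.elasticsearch.url` key and confirms the replacement `hosts` key is set — it treats the file as simple `key: value` lines, which is enough for this flat config:

```python
def check_monitoring_keys(yml_text: str) -> tuple:
    """Return (has_removed_url_key, has_hosts_key) for a flat logstash.yml."""
    removed = hosts = False
    for line in yml_text.splitlines():
        line = line.strip()
        if line.startswith("#") or ":" not in line:
            continue  # skip comments and non key-value lines
        key = line.split(":", 1)[0].strip()
        if key == "xpack.monitoring.elasticsearch.url":
            removed = True
        elif key == "xpack.monitoring.elasticsearch.hosts":
            hosts = True
    return removed, hosts
```

A config that still contains the url key makes Logstash 7.x exit at startup, so (False, True) is the healthy result.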
How to restrict a Kibana user to specific dashboards:
1. Create a role under Stack Management --> Security --> Roles:
1.1 Fill in the role name.
1.2 Select the only index the user may see, with the read privilege.
1.3 For Kibana privileges, set Discover to Read and everything else to None.
2. Create a user under Stack Management --> Security --> Users:
2.1 Fill in the username and password.
2.2 Fill in the full name.
2.3 Assign the role created above.
When this user logs in to Kibana, the left navigation shows only Dashboard, with no edit controls; only dashboards built on the permitted index are visible with data — dashboards on other indices render, but show no data.
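The same role can also be created over Kibana's role API (PUT /api/security/role/&lt;name&gt;, which additionally requires a kbn-xsrf header). The sketch below only builds the request body; the index pattern and space name are illustrative, and the exact feature-privilege keys should be checked against your Kibana version:

```python
import json

def dashboard_viewer_role(index_pattern: str) -> dict:
    """Request body for Kibana's role API: read-only access to one index pattern."""
    return {
        "elasticsearch": {
            "indices": [{"names": [index_pattern], "privileges": ["read"]}]
        },
        "kibana": [
            {
                "base": [],
                # feature privileges: only Discover, read-only
                "feature": {"discover": ["read"]},
                "spaces": ["default"],
            }
        ],
    }

if __name__ == "__main__":
    # "tongji-service-*" is the index pattern used elsewhere in this document
    print(json.dumps(dashboard_viewer_role("tongji-service-*"), indent=2))
```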
Elasticsearch: index lifecycle management (ILM)
Goal: roll an index over once it reaches 50 GB or was created 5 days ago, and delete it 10 days later.
Log in to Kibana and run the following in Dev Tools:
PUT _ilm/policy/tongji-service-policy
{
"policy": {
"phases": {
"hot":{
"actions":{
"rollover":{
"max_size":"50GB",
"max_age":"5d"
}
}
},
"delete":{
"min_age":"10d",
"actions":{
"delete":{}
}
}
}
}
}
GET _ilm/policy/tongji-service-policy
PUT _template/tongji-service-template
{
"index_patterns":["tongji-service-*"],
"settings":{
"number_of_shards":1,
"number_of_replicas":1,
"index.lifecycle.name":"tongji-service-policy",
"index.lifecycle.rollover_alias":"tongji-service"
}
}
GET _template/tongji-service-template
Note: with a rollover alias, the first index (e.g. tongji-service-000001) must be created manually with tongji-service set as its write alias before ILM can perform the rollover.
Other useful queries:
GET tongji-service-*/_ilm/explain
GET tongji-service-*/_count
GET _cat/shards/tongji-service-*
GET _cat/indices/tongji-service-*
GET _cat/indices/tongji-service-*?v
Under Management --> Elasticsearch --> Index Lifecycle Policies you can inspect the policies created above.
Under Management --> Elasticsearch --> Index Management, clicking an index shows its full settings, including the index lifecycle policy attached to it.
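The hot-phase rollover conditions above can be mimicked in code for monitoring or testing. A sketch of the decision ILM's rollover action makes, using the same thresholds (50 GB or 5 days — conditions are OR-ed):

```python
from datetime import timedelta

GIB = 1024**3

def should_rollover(size_bytes: int, age: timedelta,
                    max_size_bytes: int = 50 * GIB,
                    max_age: timedelta = timedelta(days=5)) -> bool:
    """Rollover fires when ANY configured condition is met, mirroring ILM."""
    return size_bytes >= max_size_bytes or age >= max_age

if __name__ == "__main__":
    print(should_rollover(10 * GIB, timedelta(days=6)))  # True: age threshold hit
    print(should_rollover(10 * GIB, timedelta(days=1)))  # False: neither threshold hit
```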