knight_black_bob (Beijing)

Shipping Linux logs to MongoDB via Kafka
1. Install Java

 

jdk-8u151-linux-x64.tar.gz

scp -r *.tar.gz zkkafka@10.156.50.36:/home/zkkafka/
scp -r *.tar.gz zkkafka@10.156.50.37:/home/zkkafka/

tar xf jdk-8u151-linux-x64.tar.gz

vi ~/.bash_profile

export PATH
export LANG="zh_CN.utf8"
export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

source ~/.bash_profile
echo $JAVA_HOME
java -version
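The `vi ~/.bash_profile` edit above can also be scripted so the same setup runs on all three hosts. A minimal sketch, assuming bash; the idempotency guard and the `PROFILE` variable are illustrative additions, not part of the original steps:

```shell
# Append the JDK exports to the profile only if they are not already there,
# so re-running the setup on a host does not duplicate the lines.
PROFILE="${PROFILE:-$HOME/.bash_profile}"
if ! grep -q 'JAVA_HOME=/home/zkkafka/jdk1.8.0_151' "$PROFILE" 2>/dev/null; then
  cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
EOF
fi
```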

 

 

2. Install the ZooKeeper cluster

 

tar xf zookeeper-3.4.6.tar.gz
cd zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg

dataDir=/home/zkkafka/zookeeper-3.4.6/data/
dataLogDir=/home/zkkafka/zookeeper-3.4.6/logs/
autopurge.snapRetainCount=3 
autopurge.purgeInterval=1
server.1=10.156.50.35:2888:3888
server.2=10.156.50.36:2888:3888
server.3=10.156.50.37:2888:3888
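The lines above are only the settings changed or added relative to the shipped sample. For reference, a complete `zoo.cfg` also carries the sample's defaults, which match what the startup logs below report (tickTime 2000, initLimit 10, client port 2181); `syncLimit=5` is the sample default and is assumed here:

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/zkkafka/zookeeper-3.4.6/data/
dataLogDir=/home/zkkafka/zookeeper-3.4.6/logs/
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=10.156.50.35:2888:3888
server.2=10.156.50.36:2888:3888
server.3=10.156.50.37:2888:3888
```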



vi ~/.bash_profile

export PATH
export LANG="zh_CN.utf8"
export JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export ZOOKEEPER_HOME=/home/zkkafka/zookeeper-3.4.6
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

source ~/.bash_profile

scp -r zoo.cfg zkkafka@10.156.50.36:/home/zkkafka/zookeeper-3.4.6/conf/
scp -r zoo.cfg zkkafka@10.156.50.37:/home/zkkafka/zookeeper-3.4.6/conf/



# On 10.156.50.35 (server.1):
cd /home/zkkafka/zookeeper-3.4.6/data/
vi myid    # file content: 1

# On 10.156.50.36 (server.2):
cd /home/zkkafka/zookeeper-3.4.6/data/
vi myid    # file content: 2

# On 10.156.50.37 (server.3):
cd /home/zkkafka/zookeeper-3.4.6/data/
vi myid    # file content: 3
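The three interactive `vi myid` edits can be written as non-interactive one-liners. A sketch; `ZK_DATA` and `MYID` are illustrative parameters (defaulting to a local directory here; on the real hosts `ZK_DATA` is `/home/zkkafka/zookeeper-3.4.6/data`):

```shell
# Write this node's id into the data dir; run with MYID=1 on 10.156.50.35,
# MYID=2 on 10.156.50.36, MYID=3 on 10.156.50.37 (matching the server.N lines).
ZK_DATA="${ZK_DATA:-./zk-data}"
MYID="${MYID:-1}"
mkdir -p "$ZK_DATA"
printf '%s\n' "$MYID" > "$ZK_DATA/myid"
```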

zkServer.sh start
zkServer.sh stop
zkServer.sh restart
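Once all three nodes are started, `zkServer.sh status` on each host shows its role. A small helper for scripting that check; `zk_mode` is a hypothetical name, and the canned input below stands in for the real command's output, which needs a running ensemble:

```shell
# zk_mode reads `zkServer.sh status`-style output on stdin and prints the role.
# Usage on a live node:  zkServer.sh status 2>/dev/null | zk_mode
zk_mode() {
  sed -n 's/^Mode: //p'
}

# Canned example; a healthy 3-node ensemble has one leader and two followers.
printf 'ZooKeeper JMX enabled by default\nMode: follower\n' | zk_mode
```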

 

 

2.1 ZooKeeper log on 10.156.50.35

 

===============looking log  from 10.156.50.35======================================================
2019-04-18 11:23:08,522 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
2019-04-18 11:23:08,531 [myid:] - INFO  [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2019-04-18 11:23:08,538 [myid:1] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-04-18 11:23:08,538 [myid:1] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2019-04-18 11:23:08,548 [myid:1] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-04-18 11:23:08,571 [myid:1] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-04-18 11:23:08,580 [myid:1] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2019-04-18 11:23:08,603 [myid:1] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2019-04-18 11:23:08,625 [myid:1] - INFO  [main:QuorumPeer@959] - tickTime set to 2000
2019-04-18 11:23:08,625 [myid:1] - INFO  [main:QuorumPeer@979] - minSessionTimeout set to -1
2019-04-18 11:23:08,626 [myid:1] - INFO  [main:QuorumPeer@990] - maxSessionTimeout set to -1
2019-04-18 11:23:08,626 [myid:1] - INFO  [main:QuorumPeer@1005] - initLimit set to 10
2019-04-18 11:23:08,646 [myid:1] - INFO  [main:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.0
2019-04-18 11:23:08,663 [myid:1] - INFO  [Thread-2:QuorumCnxManager$Listener@504] - My election bind port: /10.156.50.35:3888
2019-04-18 11:23:08,684 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:23:08,686 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x0
2019-04-18 11:23:08,692 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:23:08,697 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumCnxManager@193] - Have smaller server identifier, so dropping the connection: (2, 1)
2019-04-18 11:23:08,699 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumCnxManager@193] - Have smaller server identifier, so dropping the connection: (3, 1)
2019-04-18 11:23:08,699 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.36:41620
2019-04-18 11:23:08,705 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38886
2019-04-18 11:23:08,706 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:23:08,707 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LEADING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:23:08,708 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:23:08,708 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), FOLLOWING (n.state), 3 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:23:08,709 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:23:08,720 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@86] - TCP NoDelay set to: true
2019-04-18 11:23:08,735 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2019-04-18 11:23:08,735 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:host.name=yanfabu2-35.base.app.dev.yf
2019-04-18 11:23:08,736 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.version=1.8.0_151
2019-04-18 11:23:08,736 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-04-18 11:23:08,736 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
2019-04-18 11:23:08,736 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.class.path=/home/zkkafka/zookeeper-3.4.6/bin/../build/classes:/home/zkkafka/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/zkkafka/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/zkkafka/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../conf:/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar:
2019-04-18 11:23:08,737 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-18 11:23:08,737 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-04-18 11:23:08,737 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.compiler=<NA>
2019-04-18 11:23:08,737 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.name=Linux
2019-04-18 11:23:08,737 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.arch=amd64
2019-04-18 11:23:08,738 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.version=3.10.0-862.el7.x86_64
2019-04-18 11:23:08,738 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.name=zkkafka
2019-04-18 11:23:08,738 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.home=/home/zkkafka
2019-04-18 11:23:08,738 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.dir=/home/zkkafka/zookeeper-3.4.6
2019-04-18 11:23:08,740 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:23:08,742 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 56
2019-04-18 11:23:08,759 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:23:08,783 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x100000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.100000000

2.2 ZooKeeper log on 10.156.50.36

 

 

 

===============looking log  from 10.156.50.36======================================================
2019-04-18 11:51:12,688 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
2019-04-18 11:51:12,699 [myid:] - INFO  [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2019-04-18 11:51:12,707 [myid:2] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-04-18 11:51:12,707 [myid:2] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2019-04-18 11:51:12,717 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-04-18 11:51:12,743 [myid:2] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2019-04-18 11:51:12,748 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-04-18 11:51:12,768 [myid:2] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2019-04-18 11:51:12,790 [myid:2] - INFO  [main:QuorumPeer@959] - tickTime set to 2000
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@979] - minSessionTimeout set to -1
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@990] - maxSessionTimeout set to -1
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@1005] - initLimit set to 10
2019-04-18 11:51:12,812 [myid:2] - INFO  [main:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.200000000
2019-04-18 11:51:12,828 [myid:2] - INFO  [Thread-2:QuorumCnxManager$Listener@504] - My election bind port: /10.156.50.36:3888
2019-04-18 11:51:12,846 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:51:12,848 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  2, proposed zxid=0x200000000
2019-04-18 11:51:12,868 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,869 [myid:2] - INFO  [WorkerSender[myid=2]:QuorumCnxManager@193] - Have smaller server identifier, so dropping the connection: (3, 2)
2019-04-18 11:51:12,869 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,870 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,871 [myid:2] - INFO  [/10.156.50.36:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38532
2019-04-18 11:51:12,871 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,873 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 3 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,874 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,875 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LEADING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,876 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:51:12,885 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@86] - TCP NoDelay set to: true
2019-04-18 11:51:12,898 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:host.name=yanfabu2-36.base.app.dev.yf
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.version=1.8.0_151
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.class.path=/home/zkkafka/zookeeper-3.4.6/bin/../build/classes:/home/zkkafka/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/zkkafka/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/zkkafka/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../conf:/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar:
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.compiler=<NA>
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.name=Linux
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.arch=amd64
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.version=3.10.0-862.el7.x86_64
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.name=zkkafka
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.home=/home/zkkafka
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.dir=/home/zkkafka/zookeeper-3.4.6
2019-04-18 11:51:12,903 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:51:12,904 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 56
2019-04-18 11:51:12,916 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:51:12,923 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x400000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.400000000

 

 

2.3 ZooKeeper log on 10.156.50.37

 

===============looking log  from 10.156.50.37======================================================
2019-04-18 11:56:03,795 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
2019-04-18 11:56:03,803 [myid:] - INFO  [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2019-04-18 11:56:03,810 [myid:3] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-04-18 11:56:03,810 [myid:3] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2019-04-18 11:56:03,823 [myid:3] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-04-18 11:56:03,844 [myid:3] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2019-04-18 11:56:03,853 [myid:3] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-04-18 11:56:03,883 [myid:3] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2019-04-18 11:56:03,905 [myid:3] - INFO  [main:QuorumPeer@959] - tickTime set to 2000
2019-04-18 11:56:03,905 [myid:3] - INFO  [main:QuorumPeer@979] - minSessionTimeout set to -1
2019-04-18 11:56:03,905 [myid:3] - INFO  [main:QuorumPeer@990] - maxSessionTimeout set to -1
2019-04-18 11:56:03,905 [myid:3] - INFO  [main:QuorumPeer@1005] - initLimit set to 10
2019-04-18 11:56:03,926 [myid:3] - INFO  [main:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.300000000
2019-04-18 11:56:03,942 [myid:3] - INFO  [Thread-2:QuorumCnxManager$Listener@504] - My election bind port: /10.156.50.37:3888
2019-04-18 11:56:03,959 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:56:03,961 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  3, proposed zxid=0x300000000
2019-04-18 11:56:03,978 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,978 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,979 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x5 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,980 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,981 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 2 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,981 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LEADING (n.state), 2 (n.sid), 0x5 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,982 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x5 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:56:03,983 [myid:3] - INFO  [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LEADING (n.state), 2 (n.sid), 0x5 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:56:03,983 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:56:03,992 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Learner@86] - TCP NoDelay set to: true
2019-04-18 11:56:04,008 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2019-04-18 11:56:04,008 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:host.name=yanfabu2-37.base.app.dev.yf
2019-04-18 11:56:04,008 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.version=1.8.0_151
2019-04-18 11:56:04,009 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-04-18 11:56:04,009 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
2019-04-18 11:56:04,009 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.class.path=/home/zkkafka/zookeeper-3.4.6/bin/../build/classes:/home/zkkafka/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/zkkafka/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/zkkafka/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../conf:/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar:
2019-04-18 11:56:04,009 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-18 11:56:04,010 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-04-18 11:56:04,010 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.compiler=<NA>
2019-04-18 11:56:04,010 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.name=Linux
2019-04-18 11:56:04,010 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.arch=amd64
2019-04-18 11:56:04,010 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.version=3.10.0-862.el7.x86_64
2019-04-18 11:56:04,011 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.name=zkkafka
2019-04-18 11:56:04,011 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.home=/home/zkkafka
2019-04-18 11:56:04,011 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.dir=/home/zkkafka/zookeeper-3.4.6
2019-04-18 11:56:04,013 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:56:04,014 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 53
2019-04-18 11:56:04,098 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:56:04,105 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x500000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.500000000

 

 

2.4 ZooKeeper log on 10.156.50.35 after a restart in the ensemble

The WARN/EOFException entries at the end of this log show the follower losing its connection to the leader (sid 2) when that node goes down; this is expected during a restart, and the ensemble re-elects and recovers on its own.

 

===============looking log  from 10.156.50.35======================================================
(The startup portion of this log is identical to the one shown in section 2.1 and is omitted here; only the entries produced by the restart follow.)
2019-04-18 11:24:09,940 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:24:09,941 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@780] - Connection broken for id 2, my id = 1, error = 
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765)
2019-04-18 11:24:09,946 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@783] - Interrupting SendWorker
2019-04-18 11:24:09,947 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@697] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:849)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:64)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:685)
2019-04-18 11:24:09,948 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@706] - Send worker leaving thread
2019-04-18 11:24:09,946 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)
2019-04-18 11:24:09,949 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@139] - Shutting down
2019-04-18 11:24:09,949 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@441] - shutting down
2019-04-18 11:24:09,950 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@105] - Shutting down
2019-04-18 11:24:09,950 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:CommitProcessor@181] - Shutting down
2019-04-18 11:24:09,950 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@415] - shutdown of request processor complete
2019-04-18 11:24:09,951 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2019-04-18 11:24:09,952 [myid:1] - INFO  [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2019-04-18 11:24:09,952 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:24:09,953 [myid:1] - INFO  [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2019-04-18 11:24:09,953 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.100000000
2019-04-18 11:24:09,953 [myid:1] - INFO  [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2019-04-18 11:24:09,955 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x100000000
2019-04-18 11:24:09,956 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x100000000 (n.zxid), 0x2 (n.round), LOOKING (n.state), 3 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:09,956 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x100000000 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:09,957 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /10.156.50.36:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:24:09,960 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x100000000 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:09,960 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /10.156.50.36:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:24:10,162 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:24:10,162 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:24:10,163 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 210
2019-04-18 11:24:10,165 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@233] - Unexpected exception, tries=0, connecting to /10.156.50.37:2888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:24:11,237 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@323] - Getting a diff from the leader 0x100000000
2019-04-18 11:24:11,251 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x100000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.100000000
2019-04-18 11:24:29,983 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.36:41622
2019-04-18 11:24:29,992 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:24:29,993 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x100000000 (n.zxid), 0x2 (n.round), LOOKING (n.state), 2 (n.sid), 0x1 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:24:55,845 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:24:55,847 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)
2019-04-18 11:24:55,847 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@139] - Shutting down
2019-04-18 11:24:55,847 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@441] - shutting down
2019-04-18 11:24:55,848 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@105] - Shutting down
2019-04-18 11:24:55,848 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:CommitProcessor@181] - Shutting down
2019-04-18 11:24:55,848 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@415] - shutdown of request processor complete
2019-04-18 11:24:55,849 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2019-04-18 11:24:55,845 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@780] - Connection broken for id 3, my id = 1, error = 
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765)
2019-04-18 11:24:55,851 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@783] - Interrupting SendWorker
2019-04-18 11:24:55,851 [myid:1] - INFO  [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2019-04-18 11:24:55,849 [myid:1] - INFO  [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2019-04-18 11:24:55,849 [myid:1] - INFO  [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2019-04-18 11:24:55,864 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x3 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:55,852 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:24:55,866 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.100000000
2019-04-18 11:24:55,852 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@697] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:849)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:64)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:685)
2019-04-18 11:24:55,869 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@706] - Send worker leaving thread
2019-04-18 11:24:55,870 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x100000000
2019-04-18 11:24:55,871 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x100000000 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:55,872 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /10.156.50.37:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:24:55,873 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x3 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:24:55,873 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /10.156.50.37:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:24:56,075 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:24:56,075 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:24:56,075 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 210
2019-04-18 11:24:56,078 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@233] - Unexpected exception, tries=0, connecting to /10.156.50.36:2888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:24:57,136 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:24:57,140 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x200000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.200000000
2019-04-18 11:25:06,910 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38892
2019-04-18 11:25:06,913 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x100000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x2 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:25:06,916 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x3 (n.round), LOOKING (n.state), 3 (n.sid), 0x2 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:51:01,944 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@780] - Connection broken for id 2, my id = 1, error = 
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765)
2019-04-18 11:51:01,946 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@783] - Interrupting SendWorker
2019-04-18 11:51:01,944 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:51:01,950 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)
2019-04-18 11:51:01,951 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@139] - Shutting down
2019-04-18 11:51:01,951 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@441] - shutting down
2019-04-18 11:51:01,951 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@105] - Shutting down
2019-04-18 11:51:01,951 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:CommitProcessor@181] - Shutting down
2019-04-18 11:51:01,952 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@415] - shutdown of request processor complete
2019-04-18 11:51:01,952 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2019-04-18 11:51:01,953 [myid:1] - INFO  [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2019-04-18 11:51:01,953 [myid:1] - INFO  [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2019-04-18 11:51:01,947 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@697] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:849)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:64)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:685)
2019-04-18 11:51:01,954 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@706] - Send worker leaving thread
2019-04-18 11:51:01,954 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:51:01,957 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.200000000
2019-04-18 11:51:01,959 [myid:1] - INFO  [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2019-04-18 11:51:01,960 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x200000000
2019-04-18 11:51:01,962 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 3 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:01,963 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x200000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:01,963 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /10.156.50.36:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:51:01,965 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:01,965 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /10.156.50.36:3888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:51:02,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:51:02,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:51:02,167 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 212
2019-04-18 11:51:02,172 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@233] - Unexpected exception, tries=0, connecting to /10.156.50.37:2888
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:51:03,291 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:51:03,294 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x300000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.300000000
2019-04-18 11:51:12,862 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.36:41630
2019-04-18 11:51:12,868 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:51:12,873 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:55:51,912 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@780] - Connection broken for id 3, my id = 1, error = 
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765)
2019-04-18 11:55:51,913 [myid:1] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@783] - Interrupting SendWorker
2019-04-18 11:55:51,912 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:55:51,916 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@697] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:849)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:64)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:685)
2019-04-18 11:55:51,926 [myid:1] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@706] - Send worker leaving thread
2019-04-18 11:55:51,926 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)
2019-04-18 11:55:51,928 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@139] - Shutting down
2019-04-18 11:55:51,928 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@441] - shutting down
2019-04-18 11:55:51,928 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@105] - Shutting down
2019-04-18 11:55:51,928 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:CommitProcessor@181] - Shutting down
2019-04-18 11:55:51,929 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@415] - shutdown of request processor complete
2019-04-18 11:55:51,929 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2019-04-18 11:55:51,929 [myid:1] - INFO  [FollowerRequestProcessor:1:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2019-04-18 11:55:51,930 [myid:1] - INFO  [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2019-04-18 11:55:51,930 [myid:1] - INFO  [CommitProcessor:1:CommitProcessor@150] - CommitProcessor exited loop!
2019-04-18 11:55:51,931 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:55:51,933 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.300000000
2019-04-18 11:55:51,934 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  1, proposed zxid=0x300000000
2019-04-18 11:55:51,935 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 2 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:51,935 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x300000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:51,937 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /10.156.50.37:3888
java.net.ConnectException: 拒绝连接 (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:55:51,938 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:51,939 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /10.156.50.37:3888
java.net.ConnectException: 拒绝连接 (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:55:52,139 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:55:52,140 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:55:52,141 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 210
2019-04-18 11:55:52,143 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@233] - Unexpected exception, tries=0, connecting to /10.156.50.36:2888
java.net.ConnectException: 拒绝连接 (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:55:53,221 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:55:53,224 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x400000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.400000000
2019-04-18 11:56:03,971 [myid:1] - INFO  [/10.156.50.35:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38902
2019-04-18 11:56:03,976 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 11:56:03,981 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) FOLLOWING (my state)
2019-04-18 12:23:08,543 [myid:1] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
Removing file: 2019-4-18 11:18:16	/home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.0
Removing file: 2019-4-18 11:24:11	/home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.100000000
2019-04-18 12:23:08,571 [myid:1] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.

 

 

2.4 Restart log on 10.156.50.36

 

===============looking log  from 10.156.50.36======================================================
2019-04-18 11:51:12,688 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
2019-04-18 11:51:12,699 [myid:] - INFO  [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2019-04-18 11:51:12,707 [myid:2] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-04-18 11:51:12,707 [myid:2] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2019-04-18 11:51:12,717 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-04-18 11:51:12,743 [myid:2] - INFO  [main:QuorumPeerMain@127] - Starting quorum peer
2019-04-18 11:51:12,748 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-04-18 11:51:12,768 [myid:2] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
2019-04-18 11:51:12,790 [myid:2] - INFO  [main:QuorumPeer@959] - tickTime set to 2000
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@979] - minSessionTimeout set to -1
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@990] - maxSessionTimeout set to -1
2019-04-18 11:51:12,791 [myid:2] - INFO  [main:QuorumPeer@1005] - initLimit set to 10
2019-04-18 11:51:12,812 [myid:2] - INFO  [main:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.200000000
2019-04-18 11:51:12,828 [myid:2] - INFO  [Thread-2:QuorumCnxManager$Listener@504] - My election bind port: /10.156.50.36:3888
2019-04-18 11:51:12,846 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:51:12,848 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  2, proposed zxid=0x200000000
2019-04-18 11:51:12,868 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x200000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,869 [myid:2] - INFO  [WorkerSender[myid=2]:QuorumCnxManager@193] - Have smaller server identifier, so dropping the connection: (3, 2)
2019-04-18 11:51:12,869 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,870 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,871 [myid:2] - INFO  [/10.156.50.36:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38532
2019-04-18 11:51:12,871 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,873 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LOOKING (n.state), 3 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,874 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), FOLLOWING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,875 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x4 (n.round), LEADING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:51:12,876 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@784] - FOLLOWING
2019-04-18 11:51:12,885 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@86] - TCP NoDelay set to: true
2019-04-18 11:51:12,898 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:host.name=yanfabu2-36.base.app.dev.yf
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.version=1.8.0_151
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
2019-04-18 11:51:12,899 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.class.path=/home/zkkafka/zookeeper-3.4.6/bin/../build/classes:/home/zkkafka/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/zkkafka/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/zkkafka/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../conf:/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar:
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:java.compiler=<NA>
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.name=Linux
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.arch=amd64
2019-04-18 11:51:12,900 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:os.version=3.10.0-862.el7.x86_64
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.name=zkkafka
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.home=/home/zkkafka
2019-04-18 11:51:12,901 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Environment@100] - Server environment:user.dir=/home/zkkafka/zookeeper-3.4.6
2019-04-18 11:51:12,903 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:51:12,904 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 56
2019-04-18 11:51:12,916 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@326] - Getting a snapshot from leader
2019-04-18 11:51:12,923 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x400000000 to /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.400000000
2019-04-18 11:55:51,913 [myid:2] - WARN  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
	at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
	at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:103)
	at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
	at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2019-04-18 11:55:51,921 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
	at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)
2019-04-18 11:55:51,922 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@139] - Shutting down
2019-04-18 11:55:51,922 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@441] - shutting down
2019-04-18 11:55:51,922 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@105] - Shutting down
2019-04-18 11:55:51,914 [myid:2] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@780] - Connection broken for id 3, my id = 2, error = 
java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:765)
2019-04-18 11:55:51,923 [myid:2] - INFO  [FollowerRequestProcessor:2:FollowerRequestProcessor@95] - FollowerRequestProcessor exited loop!
2019-04-18 11:55:51,923 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:CommitProcessor@181] - Shutting down
2019-04-18 11:55:51,923 [myid:2] - WARN  [RecvWorker:3:QuorumCnxManager$RecvWorker@783] - Interrupting SendWorker
2019-04-18 11:55:51,924 [myid:2] - INFO  [CommitProcessor:2:CommitProcessor@150] - CommitProcessor exited loop!
2019-04-18 11:55:51,924 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@415] - shutdown of request processor complete
2019-04-18 11:55:51,925 [myid:2] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@697] - Interrupted while waiting for message on queue
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
	at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:849)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:64)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:685)
2019-04-18 11:55:51,926 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2019-04-18 11:55:51,927 [myid:2] - INFO  [SyncThread:2:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2019-04-18 11:55:51,927 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@714] - LOOKING
2019-04-18 11:55:51,929 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/zkkafka/zookeeper-3.4.6/data/version-2/snapshot.400000000
2019-04-18 11:55:51,931 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@815] - New election. My id =  2, proposed zxid=0x400000000
2019-04-18 11:55:51,933 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 2 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:51,927 [myid:2] - WARN  [SendWorker:3:QuorumCnxManager$SendWorker@706] - Send worker leaving thread
2019-04-18 11:55:51,934 [myid:2] - WARN  [WorkerSender[myid=2]:QuorumCnxManager@382] - Cannot open channel to 3 at election address /10.156.50.37:3888
java.net.ConnectException: 拒绝连接 (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
	at java.lang.Thread.run(Thread.java:748)
2019-04-18 11:55:51,936 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 1 (n.leader), 0x300000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:51,939 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x4 (n.peerEpoch) LOOKING (my state)
2019-04-18 11:55:52,140 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@796] - LEADING
2019-04-18 11:55:52,147 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Leader@60] - TCP NoDelay set to: true
2019-04-18 11:55:52,148 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@162] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/zkkafka/zookeeper-3.4.6/logs/version-2 snapdir /home/zkkafka/zookeeper-3.4.6/data/version-2
2019-04-18 11:55:52,150 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Leader@358] - LEADING - LEADER ELECTION TOOK - 222
2019-04-18 11:55:53,157 [myid:2] - INFO  [LearnerHandler-/10.156.50.35:40706:LearnerHandler@330] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@5a7c2219
2019-04-18 11:55:53,218 [myid:2] - INFO  [LearnerHandler-/10.156.50.35:40706:LearnerHandler@385] - Synchronizing with Follower sid: 1 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x300000000
2019-04-18 11:55:53,218 [myid:2] - INFO  [LearnerHandler-/10.156.50.35:40706:LearnerHandler@462] - Sending SNAP
2019-04-18 11:55:53,220 [myid:2] - INFO  [LearnerHandler-/10.156.50.35:40706:LearnerHandler@486] - Sending snapshot last zxid of peer is 0x300000000  zxid of leader is 0x500000000sent zxid of db as 0x400000000
2019-04-18 11:55:53,264 [myid:2] - INFO  [LearnerHandler-/10.156.50.35:40706:LearnerHandler@522] - Received NEWLEADER-ACK message from 1
2019-04-18 11:55:53,265 [myid:2] - INFO  [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Leader@943] - Have quorum of supporters, sids: [ 1,2 ]; starting up and setting last processed zxid: 0x500000000
2019-04-18 11:56:03,976 [myid:2] - INFO  [/10.156.50.36:3888:QuorumCnxManager$Listener@511] - Received connection request /10.156.50.37:38536
2019-04-18 11:56:03,979 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x300000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LEADING (my state)
2019-04-18 11:56:03,981 [myid:2] - INFO  [WorkerReceiver[myid=2]:FastLeaderElection@597] - Notification: 1 (message format version), 2 (n.leader), 0x400000000 (n.zxid), 0x5 (n.round), LOOKING (n.state), 3 (n.sid), 0x4 (n.peerEpoch) LEADING (my state)
2019-04-18 11:56:04,022 [myid:2] - INFO  [LearnerHandler-/10.156.50.37:45440:LearnerHandler@330] - Follower sid: 3 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@2a997f6a
2019-04-18 11:56:04,096 [myid:2] - INFO  [LearnerHandler-/10.156.50.37:45440:LearnerHandler@385] - Synchronizing with Follower sid: 3 maxCommittedLog=0x0 minCommittedLog=0x0 peerLastZxid=0x300000000
2019-04-18 11:56:04,097 [myid:2] - INFO  [LearnerHandler-/10.156.50.37:45440:LearnerHandler@462] - Sending SNAP
2019-04-18 11:56:04,098 [myid:2] - INFO  [LearnerHandler-/10.156.50.37:45440:LearnerHandler@486] - Sending snapshot last zxid of peer is 0x300000000  zxid of leader is 0x500000000sent zxid of db as 0x500000000
2019-04-18 11:56:04,117 [myid:2] - INFO  [LearnerHandler-/10.156.50.37:45440:LearnerHandler@522] - Received NEWLEADER-ACK message from 3
2019-04-18 12:51:12,712 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-04-18 12:51:12,713 [myid:2] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-04-18 13:23:06,948 [myid:2] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:36124
2019-04-18 13:23:06,957 [myid:2] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing srvr command from /127.0.0.1:36124
2019-04-18 13:23:06,961 [myid:2] - INFO  [Thread-6:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:36124 (no session established for client)

 

 

2.6 ZooKeeper leader/follower status

 

===============looking status  from 10.156.50.35======================================================
[zkkafka@yanfabu2-35 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
===============looking status  from 10.156.50.36======================================================
[zkkafka@yanfabu2-36 zookeeper-3.4.6]$ zkServer.sh status
JMX enabled by default
Using config: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: leader
===============looking status  from 10.156.50.37======================================================
[zkkafka@yanfabu2-37 zookeeper-3.4.6]$ zkServer.sh status
JMX enabled by default
Using config: /home/zkkafka/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
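The Mode line printed by `zkServer.sh status` above can also be read over the wire with ZooKeeper's four-letter-word `srvr` command. A minimal sketch, assuming netcat is available on the hosts; the `mode_of` helper is ours, not part of ZooKeeper, and the live query is shown commented because it needs the running cluster (here a captured sample reply is fed in instead):

```shell
# Extract the "Mode:" field from a ZooKeeper `srvr` reply.
mode_of() { grep '^Mode:' | awk '{print $2}'; }

# Against the live cluster, per node:
#   echo srvr | nc 10.156.50.35 2181 | mode_of
# Sample reply piped in locally:
printf 'Zookeeper version: 3.4.6-1569965\nMode: follower\nNode count: 4\n' | mode_of
```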

 

 

3. Install Kafka

 

tar xf kafka_2.11-1.1.1.tgz

vi ~/.bash_profile

export PATH
export LANG="zh_CN.utf8"
export   JAVA_HOME=/home/zkkafka/jdk1.8.0_151
export   ZOOKEEPER_HOME=/home/zkkafka/zookeeper-3.4.6
export   CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:
export   PATH=$JAVA_HOME/bin:$PATH
export   PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
export KAFKA_HOME=/home/zkkafka/kafka_2.11-1.1.1
export PATH=$KAFKA_HOME/bin:$PATH


source ~/.bash_profile 



cd /home/zkkafka/kafka_2.11-1.1.1
mkdir logs

vi /home/zkkafka/kafka_2.11-1.1.1/config/server.properties
broker.id=1
log.dirs=/home/zkkafka/kafka_2.11-1.1.1/logs
zookeeper.connect=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181

scp -r server.properties  zkkafka@10.156.50.36:/home/zkkafka/kafka_2.11-1.1.1/config/
# on 10.156.50.36, edit the copy to broker.id=2
scp -r server.properties  zkkafka@10.156.50.37:/home/zkkafka/kafka_2.11-1.1.1/config/
# on 10.156.50.37, edit the copy to broker.id=3
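Hand-editing broker.id on each host after every scp is error-prone. A small sketch of stamping the id into per-host copies first, so each file can be shipped unchanged; the template file and the `@ID@` placeholder are illustrative assumptions, the paths and values are the ones from this post:

```shell
# Build one server.properties per broker from a shared template.
cat > server.properties.tmpl <<'EOF'
broker.id=@ID@
log.dirs=/home/zkkafka/kafka_2.11-1.1.1/logs
zookeeper.connect=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
EOF
for id in 1 2 3; do
  sed "s/@ID@/$id/" server.properties.tmpl > server-$id.properties
done
grep '^broker.id=' server-3.properties   # prints broker.id=3
```

Then `scp server-2.properties zkkafka@10.156.50.36:...` and so on, with no per-host editing left to forget.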

 

 

Start Kafka

 

# sh /home/zkkafka/kafka_2.11-1.1.1/bin/zookeeper-server-start.sh  /home/zkkafka/kafka_2.11-1.1.1/config/server.properties  &
bin/kafka-server-start.sh config/server.properties &
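A plain trailing `&` ties the broker to the login shell, so it can die when the SSH session closes. A detached-start sketch; the `start_detached` helper is our own, and a short `sleep` stands in for the real `bin/kafka-server-start.sh config/server.properties` so the mechanics can be exercised anywhere:

```shell
# Launch a command immune to hangup; keep its stdout and pid on disk.
start_detached() {
  nohup "$@" > kafka-stdout.log 2>&1 &
  echo $! > kafka.pid
}

# Real use: start_detached bin/kafka-server-start.sh config/server.properties
start_detached sleep 2
kill -0 "$(cat kafka.pid)" && echo started
```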

 

 

Startup error

The error below comes from the first, commented-out command above: zookeeper-server-start.sh parses server.properties as a ZooKeeper config, and ZooKeeper's parser treats any `group.*` key as a quorum-group definition, so it fails on the Kafka key. The broker must be started with kafka-server-start.sh.

 

================start error log==========================
[2019-04-18 14:14:27,469] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /home/zkkafka/kafka_2.11-1.1.1/config/server.properties
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
Caused by: java.lang.NumberFormatException: For input string: "initial.rebalance.delay.ms"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Long.parseLong(Long.java:589)
	at java.lang.Long.parseLong(Long.java:631)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:244)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:152)
	... 2 more
Invalid config, exiting abnormally
================initial.rebalance.delay.ms==========================

 

 

Corrected configuration

 

broker.id=1
listeners = PLAINTEXT://10.156.50.35:9092
port=9092
advertised.listeners = PLAINTEXT://10.156.50.35:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/home/zkkafka/kafka_2.11-1.1.1/logs
num.partitions=4
num.recovery.threads.per.data.dir=1
metadata.broker.list=127.0.0.1:9092
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
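ZooKeeper-related keys are easy to misspell (e.g. `zookeeer` for `zookeeper`), and a bad key only surfaces at broker start. A throwaway sanity check may be worth running before each start; this sketch writes the fragment from this post into a temp file purely so it is self-contained, and the list of required keys is our assumption:

```shell
# Verify the broker file carries the keys this cluster depends on.
f=/tmp/server.properties.check
cat > "$f" <<'EOF'
broker.id=1
zookeeper.connect=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
EOF
for key in broker.id zookeeper.connect zookeeper.connection.timeout.ms; do
  grep -q "^$key=" "$f" || { echo "missing $key"; exit 1; }
done
echo config ok
```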

 

 

Startup log

 

[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$  bin/kafka-server-start.sh config/server.properties &
[1] 52205
[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$ [2019-04-19 10:00:39,154] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-04-19 10:00:40,272] INFO starting (kafka.server.KafkaServer)
[2019-04-19 10:00:40,274] INFO Connecting to zookeeper on 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 (kafka.server.KafkaServer)
[2019-04-19 10:00:40,310] INFO [ZooKeeperClient] Initializing a new session to 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-04-19 10:00:40,323] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,323] INFO Client environment:host.name=yanfabu2-35.base.app.dev.yf (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,323] INFO Client environment:java.version=1.8.0_151 (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,324] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,324] INFO Client environment:java.home=/home/zkkafka/jdk1.8.0_151/jre (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,324] INFO Client environment:java.class.path=/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar::/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/activation-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/argparse4j-0.7.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/commons-lang3-3.5.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/connect-api-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/connect-file-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/connect-json-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/connect-runtime-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/connect-transforms-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/guava-20.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/hk2-api-2.5.0-b32.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/hk2-locator-2.5.0-b32.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/hk2-utils-2.5.0-b32.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-annotations-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-core-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-databind-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-jaxrs-base-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-jaxrs-json-provider-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jackson-module-jaxb-annotations-2.9.6.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javassist-3.20.0-GA.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javassist-3.21.0-GA.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javax.annotation-api-1.2.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javax.inject-1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javax.inject-2.5.0-b32.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javax.servlet-api-3.1.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jaxb-api-2.3.0.j
ar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-client-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-common-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-container-servlet-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-guava-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-media-jaxb-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jersey-server-2.25.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-client-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-continuation-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-http-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-io-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-security-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-server-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-servlet-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-servlets-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jetty-util-9.2.24.v20180105.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/jopt-simple-5.0.4.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka_2.11-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka_2.11-1.1.1-sources.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka_2.11-1.1.1-test-sources.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-clients-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-log4j-appender-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-streams-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-streams-examples-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-streams-test-utils-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/kafka-tools-1.1.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/log4j-1.2.17.jar:/home/zkkafka/kafka_2.11-1.1.1
/bin/../libs/lz4-java-1.4.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/maven-artifact-3.5.3.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/metrics-core-2.2.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/osgi-resource-locator-1.0.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/plexus-utils-3.1.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/reflections-0.9.11.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/rocksdbjni-5.7.3.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/scala-library-2.11.12.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/scala-logging_2.11-3.8.0.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/scala-reflect-2.11.12.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/slf4j-api-1.7.25.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/slf4j-log4j12-1.7.25.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/snappy-java-1.1.7.1.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/validation-api-1.1.0.Final.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/zkclient-0.10.jar:/home/zkkafka/kafka_2.11-1.1.1/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:os.version=3.10.0-862.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:user.name=zkkafka (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,325] INFO Client environment:user.home=/home/zkkafka (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,326] INFO Client environment:user.dir=/home/zkkafka/kafka_2.11-1.1.1 (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,328] INFO Initiating client connection, connectString=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@59662a0b (org.apache.zookeeper.ZooKeeper)
[2019-04-19 10:00:40,355] INFO Opening socket connection to server 10.156.50.36/10.156.50.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-04-19 10:00:40,356] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-04-19 10:00:40,366] INFO Socket connection established to 10.156.50.36/10.156.50.36:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-04-19 10:00:40,425] INFO Session establishment complete on server 10.156.50.36/10.156.50.36:2181, sessionid = 0x26a2fcce4240000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-04-19 10:00:40,433] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-04-19 10:00:41,209] INFO Cluster ID = 6PkAOxZQSne_bGuEWPR4CA (kafka.server.KafkaServer)
[2019-04-19 10:00:41,227] WARN No meta.properties file under dir /home/zkkafka/kafka_2.11-1.1.1/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-04-19 10:00:41,346] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = PLAINTEXT://10.156.50.35:9092
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = null
	inter.broker.protocol.version = 1.1-IV0
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = PLAINTEXT://10.156.50.35:9092
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /home/zkkafka/kafka_2.11-1.1.1/logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 1.1-IV0
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 4
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-04-19 10:00:41,427] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2019-04-19 10:00:41,433] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2019-04-19 10:00:41,438] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2019-04-19 10:00:41,512] INFO Loading logs. (kafka.log.LogManager)
[2019-04-19 10:00:41,530] INFO Logs loading complete in 18 ms. (kafka.log.LogManager)
[2019-04-19 10:00:41,554] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-04-19 10:00:41,560] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-04-19 10:00:42,639] INFO Awaiting socket connections on 10.156.50.35:9092. (kafka.network.Acceptor)
[2019-04-19 10:00:42,798] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2019-04-19 10:00:42,852] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:42,853] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:42,877] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:42,903] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-04-19 10:00:42,968] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-04-19 10:00:42,981] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
[2019-04-19 10:00:42,983] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(10.156.50.35,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2019-04-19 10:00:42,987] WARN No meta.properties file under dir /home/zkkafka/kafka_2.11-1.1.1/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-04-19 10:00:43,187] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:43,191] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:43,198] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-19 10:00:43,197] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-04-19 10:00:43,220] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-04-19 10:00:43,222] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-04-19 10:00:43,231] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
[2019-04-19 10:00:43,233] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 6 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 10:00:43,269] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2019-04-19 10:00:43,332] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-04-19 10:00:43,335] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-04-19 10:00:43,353] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-04-19 10:00:43,442] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-04-19 10:00:43,502] INFO [SocketServer brokerId=0] Started processors for 1 acceptors (kafka.network.SocketServer)
[2019-04-19 10:00:43,509] INFO Kafka version : 1.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-19 10:00:43,509] INFO Kafka commitId : 8e07427ffb493498 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-19 10:00:43,512] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$ jps
46966 QuorumPeerMain
52205 Kafka
52541 Jps
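
The same check can be run against all three nodes at once (a sketch, assuming passwordless SSH between the hosts, as already used for scp above):

```shell
# Sketch: confirm ZooKeeper (QuorumPeerMain) and Kafka are running on every node.
for ip in 10.156.50.35 10.156.50.36 10.156.50.37; do
  echo "== $ip =="
  ssh zkkafka@"$ip" "jps | egrep 'QuorumPeerMain|Kafka'"
done
```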

scp -r server.properties  zkkafka@10.156.50.36:/home/zkkafka/kafka_2.11-1.1.1/config/
scp -r zookeeper.properties  zkkafka@10.156.50.36:/home/zkkafka/kafka_2.11-1.1.1/config/
scp -r server.properties  zkkafka@10.156.50.37:/home/zkkafka/kafka_2.11-1.1.1/config/
scp -r zookeeper.properties  zkkafka@10.156.50.37:/home/zkkafka/kafka_2.11-1.1.1/config/
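
Note that server.properties must not be byte-identical on every broker: broker.id and the listener IP have to be unique per node (broker 0 on .35, 1 on .36, 2 on .37, matching /brokers/ids below). A sketch of adjusting them after the copy (hypothetical sed edits, assuming the file came from .35 with broker.id=0):

```shell
# Sketch: after copying the config, give each broker its own id and listener IP.
ssh zkkafka@10.156.50.36 \
  "sed -i 's/^broker.id=0/broker.id=1/; s/10.156.50.35/10.156.50.36/g' \
   /home/zkkafka/kafka_2.11-1.1.1/config/server.properties"
ssh zkkafka@10.156.50.37 \
  "sed -i 's/^broker.id=0/broker.id=2/; s/10.156.50.35/10.156.50.37/g' \
   /home/zkkafka/kafka_2.11-1.1.1/config/server.properties"
```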

Check that the Kafka brokers have registered themselves in ZooKeeper

sh bin/zkCli.sh  -server 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
Connecting to 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181
2019-04-19 10:07:18,442 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2019-04-19 10:07:18,450 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=yanfabu2-35.base.app.dev.yf
2019-04-19 10:07:18,450 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_151
2019-04-19 10:07:18,456 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2019-04-19 10:07:18,456 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/home/zkkafka/jdk1.8.0_151/jre
2019-04-19 10:07:18,456 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/home/zkkafka/zookeeper-3.4.6/bin/../build/classes:/home/zkkafka/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/zkkafka/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/zkkafka/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/zkkafka/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/zkkafka/zookeeper-3.4.6/bin/../conf:/home/zkkafka/jdk1.8.0_151/lib/dt.jar:/home/zkkafka/jdk1.8.0_151/lib/tools.jar:
2019-04-19 10:07:18,456 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-19 10:07:18,457 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2019-04-19 10:07:18,457 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2019-04-19 10:07:18,457 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2019-04-19 10:07:18,457 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2019-04-19 10:07:18,457 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-862.el7.x86_64
2019-04-19 10:07:18,458 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=zkkafka
2019-04-19 10:07:18,458 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/zkkafka
2019-04-19 10:07:18,458 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/zkkafka/zookeeper-3.4.6
2019-04-19 10:07:18,461 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
Welcome to ZooKeeper!
2019-04-19 10:07:18,513 [myid:] - INFO  [main-SendThread(10.156.50.35:2181):ClientCnxn$SendThread@975] - Opening socket connection to server 10.156.50.35/10.156.50.35:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2019-04-19 10:07:18,710 [myid:] - INFO  [main-SendThread(10.156.50.35:2181):ClientCnxn$SendThread@852] - Socket connection established to 10.156.50.35/10.156.50.35:2181, initiating session
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTING) 0] 2019-04-19 10:07:18,779 [myid:] - INFO  [main-SendThread(10.156.50.35:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server 10.156.50.35/10.156.50.35:2181, sessionid = 0x16a2fcce3fe0000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 0] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 0] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 0] ls /
[cluster, controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 1] ls /zookeeper
[quota]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 2] ls /zookeeper/quota/
Command failed: java.lang.IllegalArgumentException: Path must not end with / character
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 3] ls /zookeeper/quota 
[]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 4] ls /brokers
[ids, topics, seqid]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 5] ls /
[cluster, controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 6] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 6] ls /brokers
[ids, topics, seqid]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 7] ls /brokers/ids
[0, 1, 2]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 8] ls /brokers/ids/0
[]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 9] ls /brokers/topics
[]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 10] ls
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 11] ls /brokers/seqid 
[]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 12] 
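
The interactive session above can be collapsed into a one-liner for scripts; zkCli.sh should also execute commands piped on stdin (a sketch):

```shell
# Sketch: non-interactive check that all three broker ids are registered.
# Expect the last line to contain [0, 1, 2], as in the session above.
echo "ls /brokers/ids" | sh bin/zkCli.sh -server 10.156.50.35:2181 2>/dev/null | tail -n 1
```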

Create a Kafka topic

bin/kafka-topics.sh --zookeeper  10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181  --list

bin/kafka-topics.sh --create --zookeeper 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 --replication-factor 3 --partitions 3 --topic m2topic
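
After creating the topic, a quick end-to-end smoke test with the stock console tools is worthwhile (a sketch; assumes the three brokers above are up, and uses --max-messages 1 so the consumer exits on its own):

```shell
# Sketch: inspect the new topic, then round-trip one message through it.
bin/kafka-topics.sh --describe \
  --zookeeper 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 --topic m2topic

echo "hello m2topic" | bin/kafka-console-producer.sh \
  --broker-list 10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092 --topic m2topic

bin/kafka-console-consumer.sh \
  --bootstrap-server 10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092 \
  --topic m2topic --from-beginning --max-messages 1
```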

=======================create topic log==10.156.50.35:2181===========================================
[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$ bin/kafka-topics.sh --create --zookeeper 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 --replication-factor 3 --partitions 3 --topic m2topic
Created topic "m2topic".
[2019-04-19 10:30:24,459] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions m2topic-1 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,570] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,595] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 89 ms (kafka.log.Log)
[2019-04-19 10:30:24,606] INFO Created log for partition m2topic-1 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,607] INFO [Partition m2topic-1 broker=0] No checkpointed highwatermark is found for partition m2topic-1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,637] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,639] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,639] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,642] INFO [Partition m2topic-1 broker=0] m2topic-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,711] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,711] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,743] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,744] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 10:30:24,747] INFO Created log for partition m2topic-0 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,747] INFO [Partition m2topic-0 broker=0] No checkpointed highwatermark is found for partition m2topic-0 (kafka.cluster.Partition)
[2019-04-19 10:30:24,747] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,754] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,767] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,768] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2019-04-19 10:30:24,771] INFO Created log for partition m2topic-2 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,771] INFO [Partition m2topic-2 broker=0] No checkpointed highwatermark is found for partition m2topic-2 (kafka.cluster.Partition)
[2019-04-19 10:30:24,772] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,772] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,786] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions m2topic-0,m2topic-2 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,881] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,899] INFO [ReplicaFetcherManager on broker 0] Added fetcher for partitions List([m2topic-0, initOffset 0 to broker BrokerEndPoint(2,10.156.50.37,9092)] , [m2topic-2, initOffset 0 to broker BrokerEndPoint(1,10.156.50.36,9092)] ) (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,906] INFO [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,927] INFO [ReplicaAlterLogDirsManager on broker 0] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
[2019-04-19 10:30:24,927] WARN [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-2. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,934] WARN [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-0. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,944] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:24,944] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:43,221] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

=======================create topic log==10.156.50.36:2181===========================================
[2019-04-19 10:30:24,458] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions m2topic-2 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,581] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,596] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 62 ms (kafka.log.Log)
[2019-04-19 10:30:24,603] INFO Created log for partition m2topic-2 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,605] INFO [Partition m2topic-2 broker=1] No checkpointed highwatermark is found for partition m2topic-2 (kafka.cluster.Partition)
[2019-04-19 10:30:24,610] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,611] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,612] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,615] INFO [Partition m2topic-2 broker=1] m2topic-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,667] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,679] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,680] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2019-04-19 10:30:24,687] INFO Created log for partition m2topic-0 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,688] INFO [Partition m2topic-0 broker=1] No checkpointed highwatermark is found for partition m2topic-0 (kafka.cluster.Partition)
[2019-04-19 10:30:24,688] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,688] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,690] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,690] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,711] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,712] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2019-04-19 10:30:24,715] INFO Created log for partition m2topic-1 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,715] INFO [Partition m2topic-1 broker=1] No checkpointed highwatermark is found for partition m2topic-1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,716] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,717] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions m2topic-1,m2topic-0 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,831] INFO [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,841] INFO [ReplicaFetcherManager on broker 1] Added fetcher for partitions List([m2topic-0, initOffset 0 to broker BrokerEndPoint(2,10.156.50.37,9092)] , [m2topic-1, initOffset 0 to broker BrokerEndPoint(0,10.156.50.35,9092)] ) (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,858] INFO [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,859] INFO [ReplicaAlterLogDirsManager on broker 1] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
[2019-04-19 10:30:24,876] WARN [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-0. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,881] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:24,893] WARN [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-1. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,893] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:24,931] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition m2topic-1 at offset 0 (kafka.server.ReplicaFetcherThread)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.

=======================create topic log==10.156.50.37:2181===========================================

[2019-04-19 10:30:24,435] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions m2topic-0 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,511] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,518] INFO [Log partition=m2topic-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 49 ms (kafka.log.Log)
[2019-04-19 10:30:24,522] INFO Created log for partition m2topic-0 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,522] INFO [Partition m2topic-0 broker=2] No checkpointed highwatermark is found for partition m2topic-0 (kafka.cluster.Partition)
[2019-04-19 10:30:24,525] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,526] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,526] INFO Replica loaded for partition m2topic-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,527] INFO [Partition m2topic-0 broker=2] m2topic-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,545] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,548] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,549] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 10:30:24,550] INFO Created log for partition m2topic-1 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,550] INFO [Partition m2topic-1 broker=2] No checkpointed highwatermark is found for partition m2topic-1 (kafka.cluster.Partition)
[2019-04-19 10:30:24,550] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,550] INFO Replica loaded for partition m2topic-1 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,551] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,552] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,556] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 10:30:24,557] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 10:30:24,558] INFO Created log for partition m2topic-2 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 10:30:24,558] INFO [Partition m2topic-2 broker=2] No checkpointed highwatermark is found for partition m2topic-2 (kafka.cluster.Partition)
[2019-04-19 10:30:24,558] INFO Replica loaded for partition m2topic-2 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 10:30:24,558] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions m2topic-1,m2topic-2 (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,603] INFO [ReplicaFetcherManager on broker 2] Added fetcher for partitions List([m2topic-1, initOffset 0 to broker BrokerEndPoint(0,10.156.50.35,9092)] , [m2topic-2, initOffset 0 to broker BrokerEndPoint(1,10.156.50.36,9092)] ) (kafka.server.ReplicaFetcherManager)
[2019-04-19 10:30:24,605] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,608] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Starting (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,621] INFO [ReplicaAlterLogDirsManager on broker 2] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
[2019-04-19 10:30:24,671] WARN [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-2. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,673] INFO [Log partition=m2topic-2, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:24,702] WARN [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Based on follower's leader epoch, leader replied with an unknown offset in m2topic-1. The initial fetch offset 0 will be used for truncation. (kafka.server.ReplicaFetcherThread)
[2019-04-19 10:30:24,702] INFO [Log partition=m2topic-1, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.Log)
[2019-04-19 10:30:24,750] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition m2topic-2 at offset 0 (kafka.server.ReplicaFetcherThread)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
[2019-04-19 10:30:24,822] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition m2topic-1 at offset 0 (kafka.server.ReplicaFetcherThread)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.

 

Topic status

======================kafka topic ==================================================================
[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$ bin/kafka-topics.sh --zookeeper  10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181  --list
m2topic
[zkkafka@yanfabu2-35 kafka_2.11-1.1.1]$ 
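Beyond --list, the same tool can show each partition's leader, replicas, and ISR with --describe (output will vary with your cluster state; a sketch against the addresses used above):

bin/kafka-topics.sh --zookeeper 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181 --describe --topic m2topic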


ZooKeeper topic status

======================zk topic log==================================================================
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 12] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 12] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 12] ls /brokers
[ids, topics, seqid]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 13] ls
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 14] ls /brokers/topics
[m2topic]
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 15] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 15] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 15] 
[zk: 10.156.50.35:2181,10.156.50.36:2181,10.156.50.37:2181(CONNECTED) 15] 


 

4. Install logstash

 

4.1 Standard input and output

vi logstash_default.conf

input {
  stdin{
  }
}
 
output {
  stdout{
  }
}
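The stdout plugin here uses its default codec, which is why events are pretty-printed as hashes in the log below. The same formatting can be requested explicitly with the rubydebug codec (a minimal sketch; the output shape is unchanged):

output {
  stdout {
    codec => rubydebug
  }
}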

Log output

[zkkafka@yanfabu2-36 logstash-7.0.0]$ sh bin/logstash -f config/logstash_default.conf 
Sending Logstash logs to /home/zkkafka/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-19T12:05:32,869][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-19T12:05:32,889][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-19T12:05:41,601][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x24407be1 run>"}
[2019-04-19T12:05:41,777][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-04-19T12:05:42,012][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-19T12:05:42,609][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

/home/zkkafka/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated

{
      "@version" => "1",
       "message" => "",
    "@timestamp" => 2019-04-19T04:06:02.764Z,
          "host" => "yanfabu2-36.base.app.dev.yf"
}
name
{
      "@version" => "1",
       "message" => "name",
    "@timestamp" => 2019-04-19T04:06:09.289Z,
          "host" => "yanfabu2-36.base.app.dev.yf"
}
hahah 
{
      "@version" => "1",
       "message" => "hahah ",
    "@timestamp" => 2019-04-19T04:06:11.954Z,
          "host" => "yanfabu2-36.base.app.dev.yf"
}

 

4.2 Kafka input, standard output

input{
    kafka {
        codec => "plain"
        topics => ["m2topic"]
        bootstrap_servers => ["10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092"]
   }
 
}
output{
    stdout{

    }
}
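With this pipeline running, the end-to-end path can be checked by publishing a test message with the console producer that ships with Kafka; each line typed should then appear on Logstash's stdout (a sketch, assuming the broker addresses from the cluster above):

bin/kafka-console-producer.sh --broker-list 10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092 --topic m2topic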

Log output

[zkkafka@yanfabu2-36 logstash-7.0.0]$ sh bin/logstash -f config/logstash_kafka_input.conf
Sending Logstash logs to /home/zkkafka/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-19T14:56:09,594][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-19T14:56:09,618][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-19T14:56:19,401][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x7d2abac6 run>"}
[2019-04-19T14:56:19,480][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-04-19T14:56:19,674][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-19T14:56:19,897][INFO ][org.apache.kafka.clients.consumer.ConsumerConfig] ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = [10.156.50.35:9092, 10.156.50.36:9092, 10.156.50.37:9092]
	check.crcs = true
	client.dns.lookup = default
	client.id = logstash-0
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = true
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = logstash
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
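The ConsumerConfig dump above shows the defaults the Logstash kafka input applied: group.id = logstash and auto.offset.reset = latest, meaning only messages produced after the consumer starts are read. Both can be overridden in the input block via the plugin's group_id and auto_offset_reset options (a sketch of the same input with explicit settings):

input {
  kafka {
    codec => "plain"
    topics => ["m2topic"]
    bootstrap_servers => ["10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092"]
    group_id => "logstash"
    auto_offset_reset => "earliest"
  }
}

Setting auto_offset_reset => "earliest" makes a new consumer group start from the beginning of the topic, which is handy when replaying existing log data.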

[2019-04-19T14:56:20,102][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka version : 2.1.0
[2019-04-19T14:56:20,102][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka commitId : eec43959745f444f
[2019-04-19T14:56:20,861][INFO ][org.apache.kafka.clients.Metadata] Cluster ID: 6PkAOxZQSne_bGuEWPR4CA
[2019-04-19 14:56:20,914] INFO Topic creation Map(__consumer_offsets-22 -> ArrayBuffer(2), __consumer_offsets-30 -> ArrayBuffer(1), __consumer_offsets-8 -> ArrayBuffer(0), __consumer_offsets-21 -> ArrayBuffer(1), __consumer_offsets-4 -> ArrayBuffer(2), __consumer_offsets-27 -> ArrayBuffer(1), __consumer_offsets-7 -> ArrayBuffer(2), __consumer_offsets-9 -> ArrayBuffer(1), __consumer_offsets-46 -> ArrayBuffer(2), __consumer_offsets-25 -> ArrayBuffer(2), __consumer_offsets-35 -> ArrayBuffer(0), __consumer_offsets-41 -> ArrayBuffer(0), __consumer_offsets-33 -> ArrayBuffer(1), __consumer_offsets-23 -> ArrayBuffer(0), __consumer_offsets-49 -> ArrayBuffer(2), __consumer_offsets-47 -> ArrayBuffer(0), __consumer_offsets-16 -> ArrayBuffer(2), __consumer_offsets-28 -> ArrayBuffer(2), __consumer_offsets-31 -> ArrayBuffer(2), __consumer_offsets-36 -> ArrayBuffer(1), __consumer_offsets-42 -> ArrayBuffer(1), __consumer_offsets-3 -> ArrayBuffer(1), __consumer_offsets-18 -> ArrayBuffer(1), __consumer_offsets-37 -> ArrayBuffer(2), __consumer_offsets-15 -> ArrayBuffer(1), __consumer_offsets-24 -> ArrayBuffer(1), __consumer_offsets-38 -> ArrayBuffer(0), __consumer_offsets-17 -> ArrayBuffer(0), __consumer_offsets-48 -> ArrayBuffer(1), __consumer_offsets-19 -> ArrayBuffer(2), __consumer_offsets-11 -> ArrayBuffer(0), __consumer_offsets-13 -> ArrayBuffer(2), __consumer_offsets-2 -> ArrayBuffer(0), __consumer_offsets-43 -> ArrayBuffer(2), __consumer_offsets-6 -> ArrayBuffer(1), __consumer_offsets-14 -> ArrayBuffer(0), __consumer_offsets-20 -> ArrayBuffer(0), __consumer_offsets-0 -> ArrayBuffer(1), __consumer_offsets-44 -> ArrayBuffer(0), __consumer_offsets-39 -> ArrayBuffer(1), __consumer_offsets-12 -> ArrayBuffer(1), __consumer_offsets-45 -> ArrayBuffer(1), __consumer_offsets-1 -> ArrayBuffer(2), __consumer_offsets-5 -> ArrayBuffer(0), __consumer_offsets-26 -> ArrayBuffer(0), __consumer_offsets-29 -> ArrayBuffer(0), __consumer_offsets-34 -> ArrayBuffer(2), __consumer_offsets-10 -> ArrayBuffer(2), __consumer_offsets-32 -> ArrayBuffer(0), __consumer_offsets-40 -> ArrayBuffer(2)) (kafka.zk.AdminZkClient)
[2019-04-19 14:56:20,948] INFO [KafkaApi-1] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2019-04-19T14:56:21,055][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-19 14:56:21,415] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions __consumer_offsets-30,__consumer_offsets-21,__consumer_offsets-27,__consumer_offsets-9,__consumer_offsets-33,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-48,__consumer_offsets-6,__consumer_offsets-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45 (kafka.server.ReplicaFetcherManager)
[2019-04-19 14:56:21,494] INFO [Log partition=__consumer_offsets-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,494] INFO [Log partition=__consumer_offsets-0, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log)
[2019-04-19 14:56:21,495] INFO Created log for partition __consumer_offsets-0 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,496] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
[2019-04-19 14:56:21,507] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,516] INFO [Partition __consumer_offsets-0 broker=1] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,530] INFO [Log partition=__consumer_offsets-48, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,530] INFO [Log partition=__consumer_offsets-48, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,531] INFO Created log for partition __consumer_offsets-48 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,531] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
[2019-04-19 14:56:21,531] INFO Replica loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,531] INFO [Partition __consumer_offsets-48 broker=1] __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,556] INFO [Log partition=__consumer_offsets-45, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,557] INFO [Log partition=__consumer_offsets-45, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,558] INFO Created log for partition __consumer_offsets-45 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,558] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
[2019-04-19 14:56:21,558] INFO Replica loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,558] INFO [Partition __consumer_offsets-45 broker=1] __consumer_offsets-45 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,571] INFO [Log partition=__consumer_offsets-42, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,572] INFO [Log partition=__consumer_offsets-42, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,572] INFO Created log for partition __consumer_offsets-42 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,573] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
[2019-04-19 14:56:21,573] INFO Replica loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,573] INFO [Partition __consumer_offsets-42 broker=1] __consumer_offsets-42 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,588] INFO [Log partition=__consumer_offsets-39, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,588] INFO [Log partition=__consumer_offsets-39, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,589] INFO Created log for partition __consumer_offsets-39 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,590] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
[2019-04-19 14:56:21,590] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,590] INFO [Partition __consumer_offsets-39 broker=1] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,604] INFO [Log partition=__consumer_offsets-36, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,605] INFO [Log partition=__consumer_offsets-36, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,605] INFO Created log for partition __consumer_offsets-36 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,606] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
[2019-04-19 14:56:21,606] INFO Replica loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,606] INFO [Partition __consumer_offsets-36 broker=1] __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,619] INFO [Log partition=__consumer_offsets-33, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,620] INFO [Log partition=__consumer_offsets-33, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,621] INFO Created log for partition __consumer_offsets-33 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,621] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
[2019-04-19 14:56:21,621] INFO Replica loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,621] INFO [Partition __consumer_offsets-33 broker=1] __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,634] INFO [Log partition=__consumer_offsets-30, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,635] INFO [Log partition=__consumer_offsets-30, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,635] INFO Created log for partition __consumer_offsets-30 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,636] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
[2019-04-19 14:56:21,636] INFO Replica loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,636] INFO [Partition __consumer_offsets-30 broker=1] __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,650] INFO [Log partition=__consumer_offsets-27, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,650] INFO [Log partition=__consumer_offsets-27, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,651] INFO Created log for partition __consumer_offsets-27 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,651] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
[2019-04-19 14:56:21,651] INFO Replica loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,652] INFO [Partition __consumer_offsets-27 broker=1] __consumer_offsets-27 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,665] INFO [Log partition=__consumer_offsets-24, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,666] INFO [Log partition=__consumer_offsets-24, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,666] INFO Created log for partition __consumer_offsets-24 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,667] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
[2019-04-19 14:56:21,667] INFO Replica loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,667] INFO [Partition __consumer_offsets-24 broker=1] __consumer_offsets-24 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,680] INFO [Log partition=__consumer_offsets-21, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,681] INFO [Log partition=__consumer_offsets-21, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,682] INFO Created log for partition __consumer_offsets-21 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,682] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
[2019-04-19 14:56:21,682] INFO Replica loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,682] INFO [Partition __consumer_offsets-21 broker=1] __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,697] INFO [Log partition=__consumer_offsets-18, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,698] INFO [Log partition=__consumer_offsets-18, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,699] INFO Created log for partition __consumer_offsets-18 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,699] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
[2019-04-19 14:56:21,699] INFO Replica loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,699] INFO [Partition __consumer_offsets-18 broker=1] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,713] INFO [Log partition=__consumer_offsets-15, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,714] INFO [Log partition=__consumer_offsets-15, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,715] INFO Created log for partition __consumer_offsets-15 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,715] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
[2019-04-19 14:56:21,715] INFO Replica loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,715] INFO [Partition __consumer_offsets-15 broker=1] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,728] INFO [Log partition=__consumer_offsets-12, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,729] INFO [Log partition=__consumer_offsets-12, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,730] INFO Created log for partition __consumer_offsets-12 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,730] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
[2019-04-19 14:56:21,730] INFO Replica loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,730] INFO [Partition __consumer_offsets-12 broker=1] __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,748] INFO [Log partition=__consumer_offsets-9, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,749] INFO [Log partition=__consumer_offsets-9, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2019-04-19 14:56:21,750] INFO Created log for partition __consumer_offsets-9 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,750] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
[2019-04-19 14:56:21,750] INFO Replica loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,750] INFO [Partition __consumer_offsets-9 broker=1] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,764] INFO [Log partition=__consumer_offsets-6, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,764] INFO [Log partition=__consumer_offsets-6, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,765] INFO Created log for partition __consumer_offsets-6 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,765] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
[2019-04-19 14:56:21,765] INFO Replica loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,765] INFO [Partition __consumer_offsets-6 broker=1] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,768] INFO [Log partition=__consumer_offsets-3, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Loading producer state from offset 0 with message format version 2 (kafka.log.Log)
[2019-04-19 14:56:21,768] INFO [Log partition=__consumer_offsets-3, dir=/home/zkkafka/kafka_2.11-1.1.1/logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2019-04-19 14:56:21,769] INFO Created log for partition __consumer_offsets-3 in /home/zkkafka/kafka_2.11-1.1.1/logs with properties {compression.type -> producer, message.format.version -> 1.1-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2019-04-19 14:56:21,769] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
[2019-04-19 14:56:21,769] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
[2019-04-19 14:56:21,770] INFO [Partition __consumer_offsets-3 broker=1] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2019-04-19 14:56:21,771] INFO [ReplicaAlterLogDirsManager on broker 1] Added fetcher for partitions List() (kafka.server.ReplicaAlterLogDirsManager)
[2019-04-19 14:56:21,801] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,802] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,810] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19 14:56:21,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-19T14:56:21,862][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Discovered group coordinator 10.156.50.37:9092 (id: 2147483645 rack: null)
[2019-04-19T14:56:21,885][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Revoking previously assigned partitions []
[2019-04-19T14:56:21,885][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] (Re-)joining group
[2019-04-19T14:56:24,996][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 1
[2019-04-19T14:56:25,003][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [m2topic-2, m2topic-1, m2topic-0]
[2019-04-19T14:56:25,041][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-2 to offset 1.
[2019-04-19T14:56:25,041][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-0 to offset 1.
[2019-04-19T14:56:25,041][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-1 to offset 2.

 

Kafka as the input, Logstash stdout as the output:
bin/kafka-console-producer.sh  --broker-list 10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092 --topic m2topic  
>aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

{
      "@version" => "1",
       "message" => "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    "@timestamp" => 2019-04-19T06:58:31.533Z
}

 

4.3 Kafka as input, MongoDB as output

input {
    kafka {
        codec => "plain"
        topics => ["m2topic"]
        bootstrap_servers => ["10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092"]
    }
}
output {
    mongodb {
        codec => line { format => "%{message}" }
        uri => "mongodb://10.156.50.206:27017"
        database => "m2"
        collection => "request_log"
    }
}
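The mongodb output stores each Logstash event as one document in the `request_log` collection. Below is a rough Ruby sketch of that per-event behavior, not the plugin's actual implementation; the `mongo`-gem call at the end assumes the 10.156.50.206 server from the config above and is shown only as a comment:

```ruby
require 'time'

# Build the hash that would be stored as one MongoDB document for a raw
# log line, mirroring the metadata fields Logstash adds to every event.
def to_document(message)
  {
    'message'    => message,
    '@version'   => '1',
    '@timestamp' => Time.now.utc.iso8601(3)
  }
end

doc = to_document('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
puts doc['message']

# With a reachable MongoDB the insert itself would be roughly:
#   require 'mongo'
#   client = Mongo::Client.new('mongodb://10.156.50.206:27017/m2')
#   client[:request_log].insert_one(doc)
```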

 

 

Installing the MongoDB output plugin

[zkkafka@yanfabu2-36 logstash-7.0.0]$ sh bin/logstash -f config/logstash_kafka_mongodb.conf 
Sending Logstash logs to /home/zkkafka/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-19T15:16:04,371][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-19T15:16:04,392][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-19T15:16:12,683][ERROR][logstash.plugins.registry] Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- logstash/outputs/mongodb>, :path=>"logstash/outputs/mongodb", :type=>"output", :name=>"mongodb"}
[2019-04-19T15:16:12,702][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any output plugin named 'mongodb'. Are you sure this is correct? Trying to load the mongodb output plugin resulted in this error: no such file to load -- logstash/outputs/mongodb", :backtrace=>["/home/zkkafka/logstash-7.0.0/logstash-core/lib/logstash/plugins/registry.rb:211:in `lookup_pipeline_plugin'", "/home/zkkafka/logstash-7.0.0/logstash-core/lib/logstash/plugin.rb:137:in `lookup'", "org/logstash/plugins/PluginFactoryExt.java:200:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:137:in `buildOutput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/home/zkkafka/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/home/zkkafka/logstash-7.0.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/home/zkkafka/logstash-7.0.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-04-19T15:16:13,100][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-19T15:16:18,031][INFO ][logstash.runner          ] Logstash shut down.

Download attempt (fails: rubygems.org is unreachable):
[m2@yanfabu2-29 logstash-7.0.0]$ bin/logstash-plugin install logstash-output-mongodb
Validating logstash-output-mongodb
Unable to download data from https://rubygems.org - Errno::ECONNREFUSED: Connection refused - Failed to open TCP connection to 127.0.0.1:3128 (Connection refused - connect(2) for "127.0.0.1" port 3128) (https://api.rubygems.org/latest_specs.4.8.gz)
ERROR: Installation aborted, verification failed for logstash-output-mongodb 
[m2@yanfabu2-29 logstash-7.0.0]$ su 


yum install -y ruby

https://rubygems.org/pages/download

ruby setup.rb
gem update --system
gem install mygem

https://github.com/logstash-plugins/logstash-output-mongodb.git




[m2@yanfabu2-29 logstash-7.0.0]$ sh bin/logstash -f config/logstash_kafka_mongodb.conf 
[ERROR] 2019-04-19 16:13:35.399 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (VersionConflict) Bundler could not find compatible versions for gem "mongo":
  In Gemfile:
    logstash-output-mongodb java was resolved to 3.1.5, which depends on
      mongo (~> 2.0.6) java

Could not find gem 'mongo (~> 2.0.6)', which is required by gem 'logstash-output-mongodb', in any of the sources.
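Here `mongo (~> 2.0.6)` is RubyGems' pessimistic version constraint: it accepts any mongo >= 2.0.6 and < 2.1.0. That is why simply installing a newer mongo gem (e.g. 2.8.0) can never satisfy logstash-output-mongodb 3.1.5. The semantics can be checked with RubyGems itself:

```ruby
require 'rubygems'  # provides Gem::Requirement and Gem::Version

req = Gem::Requirement.new('~> 2.0.6')  # pessimistic: >= 2.0.6, < 2.1.0
puts req.satisfied_by?(Gem::Version.new('2.0.6'))  # true
puts req.satisfied_by?(Gem::Version.new('2.0.9'))  # true
puts req.satisfied_by?(Gem::Version.new('2.8.0'))  # false -- why the conflict persists
```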

gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
gem sources -l

bundle install
bundle config mirror.https://rubygems.org https://gems.ruby-china.com/



[root@yanfabu2-29 ~]# cat  /usr/local/lib64/gems/ruby/bson-4.4.2/gem_make.out
current directory: /usr/local/share/gems/gems/bson-4.4.2/ext/bson
/usr/bin/ruby -I /usr/local/share/ruby/site_ruby -r ./siteconf20190422-3296-11762x6.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h

 yum install ruby-devel

 gem install bson
Building native extensions. This could take a while...
ERROR:  Error installing bson:
	ERROR: Failed to build gem native extension.

    current directory: /usr/local/share/gems/gems/bson-4.4.2/ext/bson
/usr/bin/ruby -I /usr/local/share/ruby/site_ruby -r ./siteconf20190422-3296-11762x6.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h

extconf failed, exit code 1

Gem files will remain installed in /usr/local/share/gems/gems/bson-4.4.2 for inspection.
Results logged to /usr/local/lib64/gems/ruby/bson-4.4.2/gem_make.out

[root@yanfabu2-29 ~]# gem install bson
Building native extensions. This could take a while...
Successfully installed bson-4.4.2
Parsing documentation for bson-4.4.2
unable to convert "\xE9" from ASCII-8BIT to UTF-8 for /usr/local/lib64/gems/ruby/bson-4.4.2/bson_native.so, skipping
unable to convert "\xE9" from ASCII-8BIT to UTF-8 for lib/bson_native.so, skipping
Installing ri documentation for bson-4.4.2
1 gem installed



 ./logstash-plugin  list
Bundler::VersionConflict: Bundler could not find compatible versions for gem "mongo":
  In Gemfile:
    logstash-output-mongodb (= 3.1.5) java was resolved to 3.1.5, which depends on
      mongo (~> 2.0.6) java

Could not find gem 'mongo (~> 2.0.6)', which is required by gem 'logstash-output-mongodb (= 3.1.5)', in any of the sources.
            start at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/resolver.rb:56
          resolve at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/resolver.rb:22
          resolve at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/definition.rb:258
            specs at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/definition.rb:170
        specs_for at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/definition.rb:237
  requested_specs at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/definition.rb:226
  requested_specs at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/runtime.rb:108
            setup at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/runtime.rb:20
            setup at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler.rb:107
           setup! at /home/m2/logstash-7.0.0/lib/bootstrap/bundler.rb:62
          execute at /home/m2/logstash-7.0.0/lib/pluginmanager/list.rb:17
              run at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67
          execute at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/subcommand/execution.rb:11
              run at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67
              run at /home/m2/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132
           <main> at /home/m2/logstash-7.0.0/lib/pluginmanager/main.rb:48
[m2@yanfabu2-29 bin]$ ./logstash-plugin  -help





[root@yanfabu2-29 ~]# gem install mongo
Successfully installed mongo-2.8.0
Parsing documentation for mongo-2.8.0
Installing ri documentation for mongo-2.8.0
1 gem installed





[m2@yanfabu2-29 bin]$ ./logstash-plugin  install logstash-output-mongodb
Validating logstash-output-mongodb
Installing logstash-output-mongodb
Installation successful

 

Startup logs

nohup bin/kafka-server-start.sh config/server.properties  1>/dev/null 2>&1 &






[m2@yanfabu2-29 logstash-7.0.0]$ sh bin/logstash -f config/logstash_kafka_mongodb.conf 
Sending Logstash logs to /home/m2/logstash-7.0.0/logs which is now configured via log4j2.properties
[2019-04-22T10:30:26,397][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/home/m2/logstash-7.0.0/data/queue"}
[2019-04-22T10:30:26,426][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/home/m2/logstash-7.0.0/data/dead_letter_queue"}
[2019-04-22T10:30:27,006][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-22T10:30:27,022][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-22T10:30:27,075][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"e350b899-20cb-4990-8130-fbfb7b7256e3", :path=>"/home/m2/logstash-7.0.0/data/uuid"}
[2019-04-22T10:30:38,675][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x371280a5 run>"}
[2019-04-22T10:30:38,733][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-04-22T10:30:38,858][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-22T10:30:39,006][INFO ][org.apache.kafka.clients.consumer.ConsumerConfig] ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = [10.156.50.35:9092, 10.156.50.36:9092, 10.156.50.37:9092]
	check.crcs = true
	client.dns.lookup = default
	client.id = logstash-0
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = true
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = logstash
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

[2019-04-22T10:30:39,176][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka version : 2.1.0
[2019-04-22T10:30:39,176][INFO ][org.apache.kafka.common.utils.AppInfoParser] Kafka commitId : eec43959745f444f
[2019-04-22T10:30:39,635][INFO ][org.apache.kafka.clients.Metadata] Cluster ID: 6PkAOxZQSne_bGuEWPR4CA
[2019-04-22T10:30:39,639][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Discovered group coordinator 10.156.50.37:9092 (id: 2147483645 rack: null)
[2019-04-22T10:30:39,648][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Revoking previously assigned partitions []
[2019-04-22T10:30:39,649][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] (Re-)joining group
[2019-04-22T10:30:39,937][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-22T10:30:42,709][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 1
[2019-04-22T10:30:42,715][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [m2topic-2, m2topic-1, m2topic-0]
[2019-04-22T10:30:42,758][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-1 to offset 6.
[2019-04-22T10:30:42,758][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-0 to offset 6.
[2019-04-22T10:30:42,772][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition m2topic-2 to offset 6.


[zkkafka@yanfabu2-36 kafka_2.11-1.1.1]$ bin/kafka-console-producer.sh  --broker-list 10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092 --topic m2topic
>{"request":{"req_data":"{\"req_no\":\"1550686650036\"}"},"start_time" : "2019-04-22 10:12:17.012"}
>{"request":{"req_data":"{\"req_no\":\"1550686650036\"}"},"start_time" : "2019-04-22 10:12:17.012"}
>
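Note that `req_data` in the test message is itself a JSON string (double-encoded), so a consumer has to parse twice. In Ruby:

```ruby
require 'json'

# The exact line sent via kafka-console-producer above
line = '{"request":{"req_data":"{\"req_no\":\"1550686650036\"}"},"start_time" : "2019-04-22 10:12:17.012"}'

event = JSON.parse(line)
puts event['start_time']

# req_data holds an embedded JSON document as a string, so parse it again
req = JSON.parse(event['request']['req_data'])
puts req['req_no']
```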

 

 

input {
    kafka {
        codec => json {
            charset => "UTF-8"
        }
        topics => ["m2topic"]
        bootstrap_servers => ["10.156.50.35:9092,10.156.50.36:9092,10.156.50.37:9092"]
    }
}
output {
    mongodb {
        codec => line { format => "%{message}" }
        uri => "mongodb://10.156.50.206:27017"
        database => "m2"
        collection => "request_log"
    }
    stdout {
    }
}
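With `codec => json`, the fields of the incoming Kafka message become top-level event fields, and the whole event is what ends up in MongoDB. A rough Ruby sketch of that event assembly (assuming the message parses as JSON; this simplifies what the codec actually does):

```ruby
require 'json'
require 'time'

# Roughly what the json codec does: parse the Kafka message, then merge
# Logstash's metadata fields into the same event hash.
def build_event(message)
  JSON.parse(message).merge(
    '@version'   => '1',
    '@timestamp' => Time.now.utc.iso8601(3)
  )
end

event = build_event('{"request":{"req_data":"{\"req_no\":\"1550686650036\"}"},"start_time":"2019-04-22 10:12:17.012"}')
puts event.keys.sort.inspect  # ["@timestamp", "@version", "request", "start_time"]
```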

 

 

Documents written to MongoDB

/* 1 */
{
    "_id" : ObjectId("5cbe7a64c8fd0f8e3a000979"),
    "@timestamp" : "\"2019-04-23T02:37:21.315Z\"",
    "aaaaaaaaaaaa" : "aaaaaaaaaaaa",
    "@version" : "1"
}

/* 2 */
{
    "_id" : ObjectId("5cbe7a64c8fd0f8e3a000978"),
    "@timestamp" : "\"2019-04-23T02:37:21.294Z\"",
    "aaaaaaaaaaaa" : "aaaaaaaaaaaa",
    "@version" : "1"
}
