Kafka Web Console is an open-source web monitoring tool for Kafka.
Its features include:
A list of brokers
A list of the ZooKeeper clusters Kafka connects to
A list of all topics; opening a topic lets you browse its messages and view charts of produce and consume traffic
[root@node1 opt]# ls
collectd es5.0 hadoop_data mq path storm096 zookeepe346
elasticsearch-2.0.0-rc1 flume1.6 httpd-2.2.23 nagios php-5.4.10 stormTest.jar
elasticsearch-2.1.1 gnu influxdb nagios-plugins-1.4.13 Python-2.6.6 wget-log
elasticsearch-jdbc-2.2.0.0 grafana-2.5.0 kafka_2.10-0.9.0.1 openssl-1.0.0e Python-2.6.6.tgz
elasticsearch-jdbc-2.2.0.0.zip hadoop kafka-web-console-2.1.0-SNAPSHOT.zip ORCLfmap soft yum-3.2.26.tar.gz
[root@node1 opt]# cat zookeepe346/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeepe346/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
[root@node1 opt]# sh zookeepe346/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeepe346/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 opt]# jps
4104 QuorumPeerMain
4121 Jps
[root@node1 opt]#
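Before moving on, it is worth confirming that ZooKeeper is actually serving requests. A quick check (not part of the original session; assumes the same install path, the default client port 2181, and that nc is installed):
sh zookeepe346/bin/zkServer.sh status
echo ruok | nc localhost 2181        # a healthy server answers "imok"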
[root@node1 opt]# unzip kafka-web-console-2.1.0-SNAPSHOT.zip
Archive: kafka-web-console-2.1.0-SNAPSHOT.zip
inflating: kafka-web-console-2.1.0-SNAPSHOT/lib/default.kafka-web-console-2.1.0-SNAPSHOT.jar
inflating: kafka-web-console-2.1.0-SNAPSHOT/lib/finagle-kafka_2.10-0.1.2-SNAPSHOT.jar
[root@node1 opt]# cd kafka-web-console-2.1.0-SNAPSHOT
[root@node1 kafka-web-console-2.1.0-SNAPSHOT]# ls
bin conf lib README.md share
[root@node1 kafka-web-console-2.1.0-SNAPSHOT]# cd bin/
[root@node1 bin]# ls
kafka-web-console kafka-web-console.bat
[root@node1 bin]# cat ../conf/application.conf
# This is the main configuration file for the application.
# ~~~~~
http.port=9001
[root@node1 bin]# sh kafka-web-console
Play server process ID is 4154
[info] play - database [default] connected at jdbc:h2:file:play
[warn] play - Your production database [default] needs evolutions!
INSERT INTO settings (key_, value) VALUES ('PURGE_SCHEDULE', '0 0 0 ? * SUN *');
INSERT INTO settings (key_, value) VALUES ('OFFSET_FETCH_INTERVAL', '30');
[warn] play - Run with -DapplyEvolutions.default=true if you want to run them automatically (be careful)
Oops, cannot start the server.
@74e0p173o: Database 'default' needs evolution!
at play.api.db.evolutions.EvolutionsPlugin$$anonfun$onStart$1$$anonfun$apply$1.apply$mcV$sp(Evolutions.scala:484)
at play.api.db.evolutions.EvolutionsPlugin.withLock(Evolutions.scala:507)
at play.api.db.evolutions.EvolutionsPlugin$$anonfun$onStart$1.apply(Evolutions.scala:461)
at play.api.db.evolutions.EvolutionsPlugin$$anonfun$onStart$1.apply(Evolutions.scala:459)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.db.evolutions.EvolutionsPlugin.onStart(Evolutions.scala:459)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.Play$.start(Play.scala:87)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:274)
at play.core.server.NettyServer.main(NettyServer.scala)
The first time you start it, you need to add a parameter:
./kafka-web-console -DapplyEvolutions.default=true
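If running the script as ./kafka-web-console fails with "Permission denied", the execute bit may be missing; to keep the console running after the shell exits, nohup is a common pattern (not from the original session, just an assumed convenience):
chmod +x kafka-web-console
nohup ./kafka-web-console -DapplyEvolutions.default=true > console.log 2>&1 &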
1) Zookeeper menu
2) Brokers menu
3) Topics menu
4) Topics menu (with messages)
[root@node1 bin]# sh kafka-web-console -DapplyEvolutions.default=true
Play server process ID is 4233
[info] play - database [default] connected at jdbc:h2:file:play
[info] play - Starting application default Akka system.
[info] play - Application started (Prod)
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
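A quick way to confirm the console is reachable is to request the HTTP port from the same host (not part of the original transcript):
curl -I http://localhost:9000/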
5) Consumer group status for a topic
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --topic test1 --partitions 1 --replication-factor 1
Created topic "test1".
[root@node1 bin]# [2017-06-23 14:34:00,497] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [test1,0] (kafka.server.ReplicaFetcherManager)
[2017-06-23 14:34:00,653] INFO Completed load of log test1-0 with log end offset 0 (kafka.log.Log)
[2017-06-23 14:34:00,688] INFO Created log for partition [test1,0] in /tmp/kafka-logs with properties {flush.messages -> 9223372036854775807, segment.bytes -> 1073741824, preallocate -> false, cleanup.policy -> delete, delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 604800000, max.message.bytes -> 1000012, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2017-06-23 14:34:00,693] INFO Partition [test1,0] on broker 0: No checkpointed highwatermark is found for partition [test1,0] (kafka.cluster.Partition)
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --topic test2 --partitions 1 --replication-factor 2
Error while executing topic command : replication factor: 2 larger than available brokers: 1
[2017-06-23 14:34:51,064] ERROR kafka.admin.AdminOperationException: replication factor: 2 larger than available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:77)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:236)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:105)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:60)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
(kafka.admin.TopicCommand$)
[root@node1 bin]# ./kafka-topics.sh --create --zookeeper localhost:2181 --topic test2 --partitions 4 --replication-factor 1
Created topic "test2".
[root@node1 bin]# [2017-06-23 14:35:04,588] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [test2,1],[test2,2],[test2,3],[test2,0] (kafka.server.ReplicaFetcherManager)
[2017-06-23 14:35:04,643] INFO Completed load of log test2-1 with log end offset 0 (kafka.log.Log)
[2017-06-23 14:35:04,681] INFO Created log for partition [test2,1] in /tmp/kafka-logs with properties {flush.messages -> 9223372036854775807, segment.bytes -> 1073741824, preallocate -> false, cleanup.policy -> delete, delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 604800000, max.message.bytes -> 1000012, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2017-06-23 14:35:04,691] INFO Partition [test2,1] on broker 0: No checkpointed highwatermark is found for partition [test2,1] (kafka.cluster.Partition)
[2017-06-23 14:35:04,824] INFO Completed load of log test2-2 with log end offset 0 (kafka.log.Log)
[2017-06-23 14:35:04,832] INFO Created log for partition [test2,2] in /tmp/kafka-logs with properties {flush.messages -> 9223372036854775807, segment.bytes -> 1073741824, preallocate -> false, cleanup.policy -> delete, delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 604800000, max.message.bytes -> 1000012, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2017-06-23 14:35:04,834] INFO Partition [test2,2] on broker 0: No checkpointed highwatermark is found for partition [test2,2] (kafka.cluster.Partition)
[2017-06-23 14:35:04,873] INFO Completed load of log test2-3 with log end offset 0 (kafka.log.Log)
[2017-06-23 14:35:04,879] INFO Created log for partition [test2,3] in /tmp/kafka-logs with properties {flush.messages -> 9223372036854775807, segment.bytes -> 1073741824, preallocate -> false, cleanup.policy -> delete, delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 604800000, max.message.bytes -> 1000012, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2017-06-23 14:35:04,893] INFO Partition [test2,3] on broker 0: No checkpointed highwatermark is found for partition [test2,3] (kafka.cluster.Partition)
[2017-06-23 14:35:04,966] INFO Completed load of log test2-0 with log end offset 0 (kafka.log.Log)
[2017-06-23 14:35:04,974] INFO Created log for partition [test2,0] in /tmp/kafka-logs with properties {flush.messages -> 9223372036854775807, segment.bytes -> 1073741824, preallocate -> false, cleanup.policy -> delete, delete.retention.ms -> 86400000, segment.ms -> 604800000, min.insync.replicas -> 1, file.delete.delay.ms -> 60000, retention.ms -> 604800000, max.message.bytes -> 1000012, index.interval.bytes -> 4096, segment.index.bytes -> 10485760, retention.bytes -> -1, segment.jitter.ms -> 0, min.cleanable.dirty.ratio -> 0.5, compression.type -> producer, unclean.leader.election.enable -> true, flush.ms -> 9223372036854775807}. (kafka.log.LogManager)
[2017-06-23 14:35:04,975] INFO Partition [test2,0] on broker 0: No checkpointed highwatermark is found for partition [test2,0] (kafka.cluster.Partition)
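The newly created topics can be verified from the Kafka bin directory with the standard list/describe commands (a sanity check added here, not from the original session):
./kafka-topics.sh --list --zookeeper localhost:2181
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2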
6) Consumer group list for a topic, where you can see consume and produce rates
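To make consumer groups and produce/consume rates actually show up in the console, a few test messages can be pushed through with the stock Kafka 0.9 console clients (the broker address localhost:9092 is an assumption):
./kafka-console-producer.sh --broker-list localhost:9092 --topic test1
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test1 --from-beginning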
Changing the HTTP port:
The default port is 9000.
Changing http.port in conf/application.conf does not seem to take effect.
You can instead pass the port as a command-line parameter:
./kafka-web-console -Dhttp.port=9001
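The port override can also be combined with the first-run evolutions flag on one command line, and the new port checked with curl (an assumed usage, since Play passes -D options through as system properties):
./kafka-web-console -DapplyEvolutions.default=true -Dhttp.port=9001
curl -I http://localhost:9001/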