
Kafka, the predecessor of Meta(morphosis)


Originally written on 2012-05-21

 

Source: the Kafka design document, http://incubator.apache.org/kafka/design.html

 

1. Why we built this

Activity stream data (ASD) is a part of any website and reflects how the site is used, e.g. which content is searched for or displayed. Traditionally this data is logged to files and then periodically aggregated and analyzed. Operational data (OD) is data about machine performance, combined with operational metrics gathered through various other channels.

In recent years, ASD and OD have become a critical part of running a website, and more sophisticated infrastructure is needed to handle them.

Characteristics of the data:

a. High-throughput, immutable operational data is a challenge for real-time computation; its volume can easily be 10x or 100x that of conventional data.

b. Traditional log-file recording is a respectable and scalable way to support offline processing, but its latency is too high for real-time use.

Kafka is intended to be a single queuing platform that can support both offline and online use cases.

 

2. Major Design Elements

There is a small number of major design decisions that make Kafka different from most other messaging systems:

-Kafka is designed for persistent messages as the common case; messages are persistent by default.

-Throughput rather than features is the primary design constraint.

-State about what has been consumed is maintained as part of the consumer, not the server.

-Kafka is explicitly distributed. It is assumed that producers, brokers, and consumers are all spread over multiple machines.

 

3. Basics

Messages are the fundamental unit of communication.

Messages are published to a topic by a producer, which means they are physically sent to a server acting as a broker.

When multiple consumers subscribe to a topic, every message published to that topic is delivered to each subscribing consumer.

Kafka is distributed: producers, brokers, and consumers can each consist of multiple machines in a cluster, cooperating as a logical group.

Within a single consumer group, each message is consumed by exactly one of the group's consumer processes. A more common case in our own usage is that we have multiple logical consumer groups, each consisting of a cluster of consuming machines that act as a logical whole.

No matter how many consumers a topic has, Kafka stores each message only once.
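To make the group semantics concrete, here is a minimal sketch using the modern kafka-clients Java API (the 0.7-era client this article describes discovered brokers through ZooKeeper instead; the bootstrap address, topic, and group name below are assumptions). Every consumer started with the same group.id shares the topic's messages; a second, different group.id would independently receive every message again.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        // All consumers sharing this group.id form one logical subscriber:
        // each message of the topic goes to exactly one of them.
        props.put("group.id", "report-generator");        // assumed group name
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("activity-events")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records)
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
            }
        }
    }
}
```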

 

4. Message Persistence and Caching

 

4.1 Don't fear the filesystem!

Kafka relies entirely on the filesystem to store and cache messages.

The usual intuition about disks is that they are "slow", which makes people doubt that a persistent structure can offer competitive performance. In reality, how slow or fast a disk is depends entirely on how it is used; a properly designed disk structure can often be as fast as the network.

Sequential disk I/O is extremely fast: linear writes on a 6-disk 7200rpm SATA RAID-5 array run at about 300MB/sec. These linear reads and writes are the most predictable of all usage patterns, and hence the ones detected and optimized best by the operating system, using read-ahead and write-behind techniques.

Modern operating systems use free memory as a disk cache. Any modern OS will happily divert all free memory to disk caching, with little performance penalty when the memory is reclaimed. All disk reads and writes go through this unified cache.

The JVM compounds this: (a) the memory overhead of objects is very high, often doubling the size of the data stored; (b) as heap data grows, GC becomes increasingly expensive.

As a result of these factors, using the filesystem and relying on pagecache is superior to maintaining an in-memory cache or other structure.

With the data kept in a compact, compressed form, doing so will result in a cache of up to 28-30GB on a 32GB machine, without GC penalties.

This suggests a design which is very simple: rather than maintaining as much as possible in memory and flushing to the filesystem only when necessary, Kafka inverts that and writes all data to the filesystem immediately.

Data is written straight to a persistent file without calling flush, which means it lands only in the OS pagecache and is flushed to disk by the OS at some later point. Then we add a configuration-driven flush policy to allow the user of the system to control how often data is flushed to the physical disk (every N messages or every M seconds), to put a bound on the amount of data "at risk" in the event of a hard crash.
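A minimal Java sketch of such a flush policy, assuming a single-writer log file (class and parameter names are mine): writes go to the pagecache immediately, and fsync is only issued every N messages or every M milliseconds.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Appends messages to a log, fsync-ing every N messages or M milliseconds. */
public class FlushPolicyLog {
    private final FileChannel log;
    private final int flushEveryN;     // "every N messages"
    private final long flushEveryMs;   // "every M seconds", in milliseconds
    private int unflushed = 0;
    private long lastFlush = System.currentTimeMillis();

    public FlushPolicyLog(Path path, int n, long ms) throws IOException {
        this.log = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        this.flushEveryN = n;
        this.flushEveryMs = ms;
    }

    public synchronized void append(byte[] message) throws IOException {
        // The write lands in the OS pagecache immediately...
        log.write(ByteBuffer.wrap(message));
        unflushed++;
        // ...and is only forced to the physical disk when the policy fires,
        // bounding the data "at risk" in a hard crash.
        if (unflushed >= flushEveryN
                || System.currentTimeMillis() - lastFlush >= flushEveryMs) {
            log.force(false); // fsync data (not metadata)
            unflushed = 0;
            lastFlush = System.currentTimeMillis();
        }
    }
}
```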

 

4.2 Constant Time Suffices

The persistent data structure used for messaging-system metadata is often a BTree. BTrees are the most versatile data structure available, and make it possible to support a wide variety of transactional and non-transactional semantics in the messaging system.

Disk seeks come at 10 ms a pop, and each disk can do only one seek at a time so parallelism is limited. Hence even a handful of disk seeks leads to very high overhead. 

Furthermore BTrees require a very sophisticated page or row locking implementation to avoid locking the entire tree on each operation. 

The implementation must pay a fairly high price for row-locking or else effectively serialize all reads.

In short: persistent message metadata is usually kept in a BTree, but as an on-disk structure its cost is too high, because of disk seeks and the sophisticated locking needed to avoid locking the entire tree.

Intuitively a persistent queue could be built on simple reads and appends to files as is commonly the case with logging solutions.

A persistent queue can instead be built on simple reads of, and appends to, files. It gives up some of the semantics a BTree can support, but in return all operations are O(1) and reads and writes do not block each other.

the performance is completely decoupled from the data size--one server can now take full advantage of a number of cheap, low-rotational speed 1+TB SATA drives.  Though they have poor seek performance, these drives often have comparable performance for large reads and writes at 1/3 the price and 3x the capacity.
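A toy illustration of this idea in Java, assuming length-prefixed records (the format and method names are mine, not Kafka's): appending is a single sequential write with no seeks or tree rebalancing, and any record can be read positionally without blocking the writer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/** A log of length-prefixed records: O(1) append, O(1)-per-record sequential read. */
public class AppendOnlyLog {
    public static void append(FileChannel ch, byte[] payload) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length).put(payload).flip();
        ch.write(buf); // pure append: no seek, no rebalancing, no locks
    }

    public static byte[] readAt(FileChannel ch, long offset) throws IOException {
        ByteBuffer len = ByteBuffer.allocate(4);
        ch.read(len, offset);            // positional read; does not move the writer
        len.flip();
        ByteBuffer payload = ByteBuffer.allocate(len.getInt());
        ch.read(payload, offset + 4);
        return payload.array();
    }
}
```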

 

4.3 Maximizing Efficiency

Furthermore, we assume each message published is read at least once (and often multiple times), hence we optimize for consumption rather than production.

There are two common causes of inefficiency:

-Too many network requests. The APIs are built around a "message set" abstraction, which allows network requests to group messages together and amortize the overhead of the network roundtrip, rather than sending a single message at a time. Because the API operates only on batches, the per-request network cost is spread across a set of messages instead of being paid per message.

-Excessive byte copying. The message log maintained by the broker is itself just a directory of message sets that have been written to disk. Maintaining this common format allows optimization of the most important operation: network transfer of persistent log chunks.

To understand the impact of sendfile, it is important to understand the common data path for transfer of data from file to socket:

-The operating system reads data from the disk into pagecache in kernel space

-The application reads the data from kernel space into a user-space buffer

-The application writes the data back into kernel space into a socket buffer

-The operating system copies the data from the socket buffer to the NIC buffer where it is sent over the network

Using the OS's zero-copy support (sendfile), only the final copy to the NIC buffer is needed.
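On the JVM this zero-copy path is exposed as FileChannel.transferTo, which maps to sendfile(2) on Linux. A minimal sketch (the chunked-send loop and method names are mine, not the broker's actual code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    /** Sends a log chunk straight from pagecache to the socket via sendfile(2). */
    public static void sendChunk(Path log, SocketChannel socket,
                                 long offset, long count) throws IOException {
        try (FileChannel ch = FileChannel.open(log, StandardOpenOption.READ)) {
            long sent = 0;
            while (sent < count) {
                // transferTo lets the kernel copy pagecache -> NIC buffer directly,
                // skipping the two user-space copies in the four-step path above.
                sent += ch.transferTo(offset + sent, count - sent, socket);
            }
        }
    }
}
```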

 

4.4 End-to-end Batch Compression

In many cases the bottleneck is actually not CPU but network. This is particularly true for a data pipeline that needs to send messages across data centers. Efficient compression requires compressing multiple messages together rather than compressing each message individually.

Ideally this would be possible in an end-to-end fashion — that is, data would be compressed prior to sending by the producer and remain compressed on the server, only being decompressed by the eventual consumers. 

A batch of messages can be clumped together compressed and sent to the server in this form. This batch of messages will be delivered all to the same consumer and will remain in compressed form until it arrives there.

The producer API supports batch compression: the broker does nothing with such a batch, and it travels to the consumer still in compressed form.
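A minimal sketch of the idea in plain Java, assuming a simple length-prefixed batch layout (not Kafka's actual wire format): the producer compresses the whole batch with one GZIP stream, the broker would store and forward the resulting bytes untouched, and only the consumer decompresses.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BatchCompression {
    /** Producer side: one GZIP stream over the whole batch, not per message. */
    public static byte[] compressBatch(List<byte[]> messages) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(new GZIPOutputStream(bytes))) {
            out.writeInt(messages.size());
            for (byte[] m : messages) {
                out.writeInt(m.length);
                out.write(m);
            }
        }
        return bytes.toByteArray(); // the broker stores and forwards this as-is
    }

    /** Consumer side: the only place the batch is decompressed. */
    public static List<byte[]> decompressBatch(byte[] batch) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new GZIPInputStream(new ByteArrayInputStream(batch)))) {
            int n = in.readInt();
            List<byte[]> messages = new ArrayList<>(n);
            for (int i = 0; i < n; i++) {
                byte[] m = new byte[in.readInt()];
                in.readFully(m);
                messages.add(m);
            }
            return messages;
        }
    }
}
```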

 

4.5 Consumer state

Keeping track of what has been consumed is one of the key things a messaging system must provide.  State tracking requires updating a persistent entity and potentially causes random accesses. 

Most messaging systems keep metadata about what messages have been consumed on the broker. That is, as a message is handed out to a consumer, the broker records that fact locally. 

Problem: if a consumer fails while consuming, can messages be lost? The usual remedy: the consumer acknowledges each message back to the broker after consuming it; if the broker does not receive the ack within a timeout, it re-sends the message.

But that remedy creates new problems: 1. if consumption succeeds but the ack is lost, the message is consumed twice; 2. the broker must now keep multiple states about every single message; 3. when the broker spans multiple machines, this state has to be synchronized between them.

 

4.5.1 Message delivery semantics

So clearly there are multiple possible message delivery guarantees that could be provided: at most once, at least once, exactly once.

This problem is heavily studied, and is a variation of the "transaction commit" problem. Algorithms that provide exactly-once semantics exist, two- or three-phase commits and Paxos variants being examples, but they come with some drawbacks. They typically require multiple round trips and may have poor guarantees of liveness (they can halt indefinitely).

Kafka does two unusual things with respect to metadata.  First the stream is partitioned on the brokers into a set of distinct partitions.  Within a partition messages are stored in the order in which they arrive at the broker, and will be given out to consumers in that same order. This means that rather than store metadata for each message (marking it as consumed, say), we just need to store the "high water mark" for each combination of consumer, topic, and partition.  
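A sketch of just how little state that is, assuming an in-memory map keyed by the (group, topic, partition) triple (names are mine; real Kafka persists these offsets rather than holding them in one process's memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Consumption state is one number per (group, topic, partition), not per message. */
public class HighWaterMarks {
    private final Map<String, Long> marks = new ConcurrentHashMap<>();

    private static String key(String group, String topic, int partition) {
        return group + "/" + topic + "/" + partition;
    }

    /** Everything below this offset counts as consumed; ordering within the
     *  partition makes per-message "consumed" flags unnecessary. */
    public void advance(String group, String topic, int partition, long offset) {
        marks.merge(key(group, topic, partition), offset, Math::max);
    }

    public long get(String group, String topic, int partition) {
        return marks.getOrDefault(key(group, topic, partition), 0L);
    }
}
```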

 

4.5.2 Consumer state

In Kafka, the consumers are responsible for maintaining state information (the offset) on what has been consumed. Typically, the Kafka consumer library writes its state data to ZooKeeper.

This solves a distributed consensus problem, by removing the distributed part! There is a side benefit of this decision. A consumer can deliberately rewind back to an old offset and re-consume data.
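A sketch of the rewind using today's kafka-clients API (which stores offsets in Kafka itself rather than ZooKeeper; the address, topic, and group name are assumptions): re-consuming is nothing more than seeking to an older offset, since the log is still on the broker.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class Rewind {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "replayer");                // assumed group name
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("activity-events", 0); // assumed topic
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 0L); // deliberately rewind to the start and re-consume
            System.out.println(consumer.poll(Duration.ofSeconds(1)).count());
        }
    }
}
```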

 

4.5.3 Push vs. pull

A related question is whether consumers should pull data from brokers or brokers should push data to the subscriber. 

There are pros and cons to both approaches. However, a push-based system has difficulty dealing with diverse consumers, since the broker controls the rate at which data is transferred. The goal of push is that the consumer consumes at the maximum possible rate; unfortunately, once the consumption rate falls below the production rate, the consumer tends to be overwhelmed.

A pull-based system has the nicer property that the consumer simply falls behind and catches up when it can. The push problem can be mitigated with some kind of backoff protocol by which the consumer can indicate it is overwhelmed, but getting the rate of transfer to fully utilize (but never over-utilize) the consumer is trickier than it seems. Previous attempts at building systems in this fashion led us to go with a more traditional pull model: it avoids the overload problem while still making full use of the consumer's capacity.
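A sketch of the pull loop's behavior, with a hypothetical Broker interface standing in for the fetch API (the exponential backoff is one possible policy, not Kafka's): a slow consumer simply lags and catches up, and is never pushed into overload.

```java
import java.util.List;

public class PullLoop {
    interface Broker { List<byte[]> fetch(long offset, int maxBytes); } // hypothetical

    public static void run(Broker broker) throws InterruptedException {
        long offset = 0;
        long backoffMs = 1;
        while (true) {
            List<byte[]> batch = broker.fetch(offset, 1 << 20);
            if (batch.isEmpty()) {
                // Nothing new: sleep briefly and retry, backing off while idle.
                Thread.sleep(backoffMs);
                backoffMs = Math.min(backoffMs * 2, 1000);
            } else {
                for (byte[] m : batch) process(m);
                offset += batch.size();
                backoffMs = 1; // caught up: poll eagerly again
            }
        }
    }

    private static void process(byte[] message) { /* consume at our own rate */ }
}
```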

 

5. Distribution

Kafka is built to be run across a cluster of machines as the common case. There is no central "master" node. Brokers are peers to each other and can be added and removed at any time without any manual configuration changes. Similarly, producers and consumers can be started dynamically at any time. Each broker registers some metadata (e.g., available topics) in Zookeeper. Producers and consumers can use Zookeeper to discover topics and to co-ordinate the production and consumption. The details of producers and consumers will be described below.

 

6. Producer

 

6.1 Automatic producer load balancing

Kafka supports client-side load balancing for message producers, or the use of a dedicated load balancer to balance TCP connections.

The advantage of using a level-4 load balancer is that each producer only needs a single TCP connection, and no connection to zookeeper is needed. The disadvantage is that the balancing is done at the TCP-connection level, and hence it may not be well balanced (if some producers produce many more messages than others, evenly dividing up the connections per broker may not result in evenly dividing up the messages per broker).

Client-side zookeeper-based load balancing solves some of these problems. It allows the producer to dynamically discover new brokers, and balance load on a per-request basis. It allows the producer to partition data according to some key instead of randomly.

The working of the zookeeper-based load balancing is described below. Zookeeper watchers are registered on the following events:

-a new broker comes up

-a broker goes down

-a new topic is registered

-a broker gets registered for an existing topic

Internally, the producer maintains an elastic pool of connections to the brokers, one per broker. This pool is kept updated to establish/maintain connections to all the live brokers, through the zookeeper watcher callbacks. When a producer request for a particular topic comes in, a broker partition is picked by the partitioner (see section on semantic partitioning). The available producer connection is used from the pool to send the data to the selected broker partition.

In short: the producer manages its broker connections through zookeeper. For each request it computes the target partition with the partitioning rule, picks the matching connection from the pool, and sends the data.
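A sketch of the watcher-driven pool using the org.apache.zookeeper client, assuming the /brokers/ids registry path of Kafka's ZooKeeper layout and a placeholder Connection type; real code would also handle session expiry and actual connection setup.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class BrokerPool implements Watcher {
    private final ZooKeeper zk;
    private final Map<String, Connection> pool = new ConcurrentHashMap<>();

    static class Connection { /* a TCP connection to one broker; placeholder */ }

    public BrokerPool(String zkConnect) throws Exception {
        zk = new ZooKeeper(zkConnect, 30_000, this);
        refresh();
    }

    /** Re-read the live broker list and reconcile the connection pool. */
    private void refresh() throws Exception {
        // true re-arms the default watcher, so every change triggers process().
        List<String> live = zk.getChildren("/brokers/ids", true);
        pool.keySet().retainAll(live);                       // drop dead brokers
        for (String id : live)
            pool.computeIfAbsent(id, b -> new Connection()); // connect to new ones
    }

    @Override
    public void process(WatchedEvent event) {
        try { refresh(); } catch (Exception e) { /* log and retry in real code */ }
    }
}
```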

 

6.2 Asynchronous send

Asynchronous non-blocking operations are fundamental to scaling messaging systems.

This allows buffering of produce requests in an in-memory queue and batch sends that are triggered by a time interval or a pre-configured batch size.
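A minimal Java sketch of such a buffer, with assumed names: send() returns immediately, and a background thread flushes a batch either when it reaches the configured size or when the time interval elapses with data pending.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncSender {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final int batchSize;
    private final long lingerMs;

    public AsyncSender(int batchSize, long lingerMs) {
        this.batchSize = batchSize;
        this.lingerMs = lingerMs;
        Thread t = new Thread(this::drainLoop);
        t.setDaemon(true);
        t.start();
    }

    /** Callers return immediately; the message waits in the in-memory queue. */
    public void send(byte[] message) { queue.offer(message); }

    private void drainLoop() {
        List<byte[]> batch = new ArrayList<>();
        try {
            while (true) {
                byte[] m = queue.poll(lingerMs, TimeUnit.MILLISECONDS);
                if (m != null) batch.add(m);
                // Flush when the batch is full, or the interval expired with data.
                if (batch.size() >= batchSize || (m == null && !batch.isEmpty())) {
                    sendBatch(batch);
                    batch = new ArrayList<>();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void sendBatch(List<byte[]> batch) { /* one network request per batch */ }
}
```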

 

6.3 Semantic partitioning

The producer is able to semantically map messages to the available kafka nodes and partitions.

This allows partitioning the stream of messages with some semantic partition function based on some key in the message to spread them over broker machines. 
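A sketch of the simplest semantic partition function, assuming a string key (e.g., a user or session id): messages with the same key always map to the same partition, so per-key ordering is preserved.

```java
public class KeyPartitioner {
    /** Maps a message key to a partition so that, e.g., all events for one
     *  user land in the same partition and stay in order. */
    public static int partition(String key, int numPartitions) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```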

 

 

Related resources:

Meta (Metamorphosis): https://github.com/killme2008/Metamorphosis [recommended]

RAID-5: http://baike.baidu.com/view/969385.htm

Disk types: http://www.china001.com/show_hdr.php?xname=PPDDMV0&dname=66IP341&xpos=172

 
