1. Replica Basics

(1) Purpose of Kafka replicas: to improve data reliability.
(2) Kafka defaults to 1 replica; production environments usually configure 2 to guarantee data reliability. Too many replicas increase disk usage and network traffic, lowering efficiency.
(3) Kafka replicas come in two roles: Leader and Follower. Producers send data only to the Leader; the Followers then sync the data from the Leader.
(4) All replicas of a partition are collectively called the AR (Assigned Replicas).

         AR = ISR + OSR

ISR: the set of Followers that are in sync with the Leader, plus the Leader itself. If a Follower has not communicated with the Leader or synced data for too long, it is removed from the ISR. The time threshold is set by the replica.lag.time.max.ms parameter, default 30 s (see the configuration sketch below). When the Leader fails, a new Leader is elected from the ISR.
OSR: Followers that have fallen too far behind the Leader during replication.
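
Both thresholds above are broker-side settings. A minimal server.properties sketch, with illustrative values rather than recommendations:

# server.properties (per broker) -- illustrative values
default.replication.factor=2       # replicas for automatically created topics (Kafka's own default is 1)
replica.lag.time.max.ms=30000      # a Follower lagging longer than this is dropped from the ISR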


2. Leader Election Process

1) Overview

        Every broker in a Kafka cluster runs a Controller module, and one of them is elected as the active Controller (Controller Leader). It is responsible for managing broker online/offline events, assigning partition replicas for all topics, electing partition Leaders, and so on.
        The Controller relies on ZooKeeper to synchronize this metadata.
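
You can check which broker currently holds the Controller role by reading the /controller znode. A quick sketch using the zookeeper-shell tool bundled with Kafka, assuming ZooKeeper listens on conch01:2181:

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/zookeeper-shell.sh conch01:2181 get /controller

The JSON it returns includes a brokerid field identifying the active Controller.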


(1) Create a new topic with 4 partitions and 4 replicas

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --create --topic atguigu1 --partitions 4 --replication-factor 4
Created topic atguigu1.
[dhapp@conch01 kafka_2.12-3.0.0]$

(2) Check the Leader distribution

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 2       Replicas: 2,1,0,3       Isr: 2,1,0,3
        Topic: atguigu1 Partition: 1    Leader: 3       Replicas: 3,0,2,1       Isr: 3,0,2,1
        Topic: atguigu1 Partition: 2    Leader: 1       Replicas: 1,2,3,0       Isr: 1,2,3,0
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,3,1,2
[dhapp@conch01 kafka_2.12-3.0.0]$

As shown above, the Isr of every partition contains all four brokers, and all of them are alive.

(3) Stop the Kafka process on conch04 and check the partition Leader distribution

[dhapp@conch04 kafka_2.12-3.0.0]$ bin/kafka-server-stop.sh
[dhapp@conch04 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 2       Replicas: 2,1,0,3       Isr: 2,1,0
        Topic: atguigu1 Partition: 1    Leader: 0       Replicas: 3,0,2,1       Isr: 0,2,1
        Topic: atguigu1 Partition: 2    Leader: 1       Replicas: 1,2,3,0       Isr: 1,2,0
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,1,2
[dhapp@conch04 kafka_2.12-3.0.0]$

As shown above, after conch04 (broker 3) is stopped, partition 1, whose Leader used to be broker 3, fails over to broker 0, and the Isr of every partition shrinks to brokers 0, 1 and 2.

(4) Stop the Kafka process on conch03 and check the partition Leader distribution

[dhapp@conch03 kafka_2.12-3.0.0]$ bin/kafka-server-stop.sh
[dhapp@conch03 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 1       Replicas: 2,1,0,3       Isr: 1,0
        Topic: atguigu1 Partition: 1    Leader: 0       Replicas: 3,0,2,1       Isr: 0,1
        Topic: atguigu1 Partition: 2    Leader: 1       Replicas: 1,2,3,0       Isr: 1,0
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,1
[dhapp@conch03 kafka_2.12-3.0.0]$

(5) Start the Kafka process on conch04 again and check the partition Leader distribution

[dhapp@conch04 kafka_2.12-3.0.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[dhapp@conch04 kafka_2.12-3.0.0]$ jps
3844 Jps
3813 Kafka
[dhapp@conch04 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 1       Replicas: 2,1,0,3       Isr: 1,0,3
        Topic: atguigu1 Partition: 1    Leader: 0       Replicas: 3,0,2,1       Isr: 0,1,3
        Topic: atguigu1 Partition: 2    Leader: 1       Replicas: 1,2,3,0       Isr: 1,0,3
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,1,3
[dhapp@conch04 kafka_2.12-3.0.0]$

(6) Start the Kafka process on conch03 again and check the partition Leader distribution

[dhapp@conch03 kafka_2.12-3.0.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[dhapp@conch03 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 1       Replicas: 2,1,0,3       Isr: 1,0,3,2
        Topic: atguigu1 Partition: 1    Leader: 0       Replicas: 3,0,2,1       Isr: 0,1,3,2
        Topic: atguigu1 Partition: 2    Leader: 1       Replicas: 1,2,3,0       Isr: 1,0,3,2
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,1,3,2
[dhapp@conch03 kafka_2.12-3.0.0]$

(7) Stop the Kafka process on conch02 and check the partition Leader distribution

[dhapp@conch02 kafka_2.12-3.0.0]$  bin/kafka-server-stop.sh
[dhapp@conch02 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic atguigu1
Topic: atguigu1 TopicId: kYmWU1iuRoGLN-fqKab99A PartitionCount: 4       ReplicationFactor: 4    Configs: segment.bytes=1073741824
        Topic: atguigu1 Partition: 0    Leader: 2       Replicas: 2,1,0,3       Isr: 0,3,2
        Topic: atguigu1 Partition: 1    Leader: 3       Replicas: 3,0,2,1       Isr: 0,3,2
        Topic: atguigu1 Partition: 2    Leader: 2       Replicas: 1,2,3,0       Isr: 0,3,2
        Topic: atguigu1 Partition: 3    Leader: 0       Replicas: 0,3,1,2       Isr: 0,3,2
[dhapp@conch02 kafka_2.12-3.0.0]$
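
Putting the steps together: when a Leader fails, the Controller elects the new Leader from the ISR, walking the replicas in AR order (the Replicas column). For example, when broker 1 went down in step (7), partition 0 (Replicas: 2,1,0,3) failed over to broker 2, the first surviving in-sync replica in its AR. Note also that in steps (5) and (6) the restarted brokers rejoined the ISR only as Followers; leadership did not move back at that moment (rebalancing is governed by the parameters in section 6).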

3. Leader and Follower Failure Handling Details

Two offsets are central here: LEO (Log End Offset), the offset of the last message in a replica plus 1, and HW (High Watermark), the smallest LEO among all replicas in the ISR. Consumers can only see messages up to the HW.

Follower failure: the failed Follower is removed from the ISR, while the Leader and the remaining Followers keep accepting data. When the failed Follower recovers, it truncates its log to the HW it last recorded and syncs from the Leader; once its LEO catches up to the partition's HW, it rejoins the ISR.

Leader failure: a new Leader is elected from the ISR, and the remaining Followers truncate their logs to the HW and then sync from the new Leader. Note that this mechanism only guarantees consistency between replicas; it does not guarantee that data is never lost or duplicated.


4. Partition Replica Assignment

If the Kafka cluster has only 4 broker nodes, how does Kafka assign and store replicas when the partition count is set higher than the number of brokers?
1) Create a topic with 16 partitions and 3 replicas
(1) Create a new topic named second.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --create --partitions 16 --replication-factor 3 --topic second
Created topic second.
[dhapp@conch01 kafka_2.12-3.0.0]$

(2) Check the partition and replica layout.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic second
Topic: second   TopicId: WSdQ_UR0RnOdfMrDollB3Q PartitionCount: 16      ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: second   Partition: 0    Leader: 0       Replicas: 0,3,1 Isr: 0,3,1
        Topic: second   Partition: 1    Leader: 2       Replicas: 2,1,0 Isr: 2,1,0
        Topic: second   Partition: 2    Leader: 3       Replicas: 3,0,2 Isr: 3,0,2
        Topic: second   Partition: 3    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: second   Partition: 4    Leader: 0       Replicas: 0,1,2 Isr: 0,1,2
        Topic: second   Partition: 5    Leader: 2       Replicas: 2,0,3 Isr: 2,0,3
        Topic: second   Partition: 6    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: second   Partition: 7    Leader: 1       Replicas: 1,3,0 Isr: 1,3,0
        Topic: second   Partition: 8    Leader: 0       Replicas: 0,2,3 Isr: 0,2,3
        Topic: second   Partition: 9    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: second   Partition: 10   Leader: 3       Replicas: 3,1,0 Isr: 3,1,0
        Topic: second   Partition: 11   Leader: 1       Replicas: 1,0,2 Isr: 1,0,2
        Topic: second   Partition: 12   Leader: 0       Replicas: 0,3,1 Isr: 0,3,1
        Topic: second   Partition: 13   Leader: 2       Replicas: 2,1,0 Isr: 2,1,0
        Topic: second   Partition: 14   Leader: 3       Replicas: 3,0,2 Isr: 3,0,2
        Topic: second   Partition: 15   Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
[dhapp@conch01 kafka_2.12-3.0.0]$

Replica assignment pattern: looking at the Replicas column, the first (preferred) replica of each partition is placed round-robin across the brokers (here in the order 0, 2, 3, 1, repeating every 4 partitions), and the followers are placed at an offset from the first replica that changes with each round. This staggering spreads both Leaders and Followers evenly across all brokers, even when the partition count exceeds the broker count.


5. Manually Adjusting Partition Replica Placement

In production, brokers often differ in disk capacity and load, so the default even spread is not always what you want; Kafka lets you pin a topic's replicas to specific brokers by hand.

The steps for manually adjusting replica placement are as follows:
(1) Create a new topic named three.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --create --partitions 4 --replication-factor 2 --topic three
Created topic three.
[dhapp@conch01 kafka_2.12-3.0.0]$

(2) Check the current replica placement.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic three
Topic: three    TopicId: E95kRGLIS3C4dFCA5XcgdA PartitionCount: 4       ReplicationFactor: 2    Configs: segment.bytes=1073741824
        Topic: three    Partition: 0    Leader: 2       Replicas: 2,1   Isr: 2,1
        Topic: three    Partition: 1    Leader: 3       Replicas: 3,0   Isr: 3,0
        Topic: three    Partition: 2    Leader: 1       Replicas: 1,2   Isr: 1,2
        Topic: three    Partition: 3    Leader: 0       Replicas: 0,3   Isr: 0,3
[dhapp@conch01 kafka_2.12-3.0.0]$

(3) Create a replica placement plan (all replicas pinned to broker 0 and broker 1).

[dhapp@conch01 kafka_2.12-3.0.0]$ vim increase-replication-factor.json
[dhapp@conch01 kafka_2.12-3.0.0]$ cat increase-replication-factor.json
{
"version":1,
"partitions":[
{"topic":"three","partition":0,"replicas":[0,1]},
{"topic":"three","partition":1,"replicas":[0,1]},
{"topic":"three","partition":2,"replicas":[1,0]},
{"topic":"three","partition":3,"replicas":[1,0]}]
}
[dhapp@conch01 kafka_2.12-3.0.0]$

(4) Execute the placement plan.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-reassign-partitions.sh --bootstrap-server conch01:9092 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"three","partition":0,"replicas":[2,1],"log_dirs":["any","any"]},{"topic":"three","partition":1,"replicas":[3,0],"log_dirs":["any","any"]},{"topic":"three","partition":2,"replicas":[1,2],"log_dirs":["any","any"]},{"topic":"three","partition":3,"replicas":[0,3],"log_dirs":["any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for three-0,three-1,three-2,three-3
[dhapp@conch01 kafka_2.12-3.0.0]$

(5) Verify the placement plan.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-reassign-partitions.sh --bootstrap-server conch01:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition three-0 is complete.
Reassignment of partition three-1 is complete.
Reassignment of partition three-2 is complete.
Reassignment of partition three-3 is complete.

Clearing broker-level throttles on brokers 0,1,2,3
Clearing topic-level throttles on topic three
[dhapp@conch01 kafka_2.12-3.0.0]$

(6) Check the replica placement again.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic three
Topic: three    TopicId: E95kRGLIS3C4dFCA5XcgdA PartitionCount: 4       ReplicationFactor: 2    Configs: segment.bytes=1073741824
        Topic: three    Partition: 0    Leader: 0       Replicas: 0,1   Isr: 1,0
        Topic: three    Partition: 1    Leader: 0       Replicas: 0,1   Isr: 0,1
        Topic: three    Partition: 2    Leader: 1       Replicas: 1,0   Isr: 1,0
        Topic: three    Partition: 3    Leader: 1       Replicas: 1,0   Isr: 0,1
[dhapp@conch01 kafka_2.12-3.0.0]$

As shown above, all replicas are now stored on broker 0 and broker 1. (A tip on generating such plans automatically follows below.)
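
Incidentally, you do not have to write the reassignment JSON by hand: kafka-reassign-partitions.sh can propose a plan for you with --generate. A sketch, where topics-to-move.json is a hypothetical file name:

[dhapp@conch01 kafka_2.12-3.0.0]$ cat topics-to-move.json
{"version":1,"topics":[{"topic":"three"}]}
[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-reassign-partitions.sh --bootstrap-server conch01:9092 --topics-to-move-json-file topics-to-move.json --broker-list "0,1" --generate

The tool prints the current assignment plus a proposed one; save the proposal to a file and feed it to --execute as in step (4).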


6. Leader Partition Load Balancing

Normally, Leader Partitions are spread evenly across brokers. But if some brokers crash, leadership concentrates on the surviving brokers, overloading them while the restarted brokers sit idle serving only as Followers. Kafka can rebalance this automatically, controlled by the parameters below (a manual alternative is shown after the list):

  • auto.leader.rebalance.enable: defaults to true, enabling automatic Leader Partition balancing. In production, leader re-election is fairly expensive and may hurt performance, so setting this to false is recommended.
  • leader.imbalance.per.broker.percentage: defaults to 10%. The ratio of imbalanced leaders allowed per broker; if a broker exceeds this value, the controller triggers leader rebalancing.
  • leader.imbalance.check.interval.seconds: defaults to 300 seconds. The interval at which leader balance is checked.
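
With automatic balancing disabled, you can trigger a preferred-replica leader election by hand when needed. A sketch using the kafka-leader-election.sh tool shipped with Kafka (available since 2.4):

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-leader-election.sh --bootstrap-server conch01:9092 --election-type preferred --all-topic-partitions

This moves leadership back to each partition's first replica in the AR wherever that replica is alive and in sync.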

7. Increasing the Replication Factor

In production, when a topic becomes more important you may want to add replicas. Adding replicas requires drawing up a reassignment plan first and then executing it; the replication factor cannot be increased directly with the kafka-topics.sh command line.
1) Create the topic

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --create --partitions 3 --replication-factor 1 --topic four
Created topic four.
[dhapp@conch01 kafka_2.12-3.0.0]$

2) Manually add replicas
(1) Create a replica placement plan (all replicas placed on broker 0, broker 1, and broker 2).

[dhapp@conch01 kafka_2.12-3.0.0]$ vim increase-replication-factor.json
[dhapp@conch01 kafka_2.12-3.0.0]$ cat increase-replication-factor.json
{"version":1,"partitions":[
{"topic":"four","partition":0,"replicas":[0,1,2]},
{"topic":"four","partition":1,"replicas":[0,1,2]},
{"topic":"four","partition":2,"replicas":[0,1,2]}]
}
[dhapp@conch01 kafka_2.12-3.0.0]$

(2) Execute the placement plan.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-reassign-partitions.sh --bootstrap-server conch01:9092 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"four","partition":0,"replicas":[3],"log_dirs":["any"]},{"topic":"four","partition":1,"replicas":[1],"log_dirs":["any"]},{"topic":"four","partition":2,"replicas":[0],"log_dirs":["any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for four-0,four-1,four-2
[dhapp@conch01 kafka_2.12-3.0.0]$

(3) Check the details

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-topics.sh --bootstrap-server conch01:9092 --describe --topic four
Topic: four     TopicId: WG70RfSsRDipj6VpEXgeVw PartitionCount: 3       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: four     Partition: 0    Leader: 0       Replicas: 0,1,2 Isr: 2,0,1
        Topic: four     Partition: 1    Leader: 1       Replicas: 0,1,2 Isr: 1,2,0
        Topic: four     Partition: 2    Leader: 0       Replicas: 0,1,2 Isr: 0,1,2
[dhapp@conch01 kafka_2.12-3.0.0]$

8. File Storage Mechanism

1) How topic data is stored

A topic is a logical concept, while partitions are physical: each partition's log is split into segments (1 GB each by default), and every segment consists of a .log data file, a .index offset index, and a .timeindex timestamp index, all stored in a directory named topic-partition (for example first-1). The files are named after the base offset of the first message in the segment.

2) Question: where exactly is topic data stored on disk?
(1) Start a producer and send a message.

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-console-producer.sh --bootstrap-server conch01:9092 --topic first
>hello world

(2) Look at the files under /home/dhapp/software/kafka_2.12-3.0.0/kafka-logs/first-1 (or first-0 / first-2) on conch01 (or conch02 / conch03).

[dhapp@conch01 first-1]$ ll
total 20
-rw-rw-r--. 1 dhapp dhapp 10485760 Apr  8 20:48 00000000000000000000.index
-rw-rw-r--. 1 dhapp dhapp     1459 Apr  9 20:06 00000000000000000000.log
-rw-rw-r--. 1 dhapp dhapp 10485756 Apr  8 20:48 00000000000000000000.timeindex
-rw-rw-r--. 1 dhapp dhapp       10 Apr  7 23:19 00000000000000000032.snapshot
-rw-rw-r--. 1 dhapp dhapp       14 Apr  9 18:08 leader-epoch-checkpoint
-rw-rw-r--. 1 dhapp dhapp       43 Apr  7 20:49 partition.metadata
[dhapp@conch01 first-1]$

(3) cat the .log file directly; the output is garbled binary.

[dhapp@conch01 first-1]$ cat 00000000000000000000.log
;U▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒aaa;▒Շ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒aaam▒y/▒▒!7▒▒!7▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒
test0
test1
test2
test3
-▒*]▒▒*]▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒."test测试回调0."test测试回调1."test测试回调2."test测试回调3"test测试回调4
▒L▒▒l▒▒▒l▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒RFtest测试指定分区没有key值0RFtest测试指定分区没有key值1RFtest测试指定分区没有key值2RFtest测试指定分区没有key值3Ftest测试指定分区没有key值z▒▒
▒o.a▒o.q▒▒▒▒▒▒▒▒▒▒▒▒▒▒TaFtest测试指定分区没有key值0T aFtest测试指定分区没有key值1T aFtest测试指定分区没有key值2T aFtest测试指定分区没有key值3TaFtest测试指定分区没有key值4▒J[v▒zYR▒zY]▒▒▒▒▒▒▒▒▒▒▒▒▒▒@4test测试自定义分区0@4test测试自定义分区1@4test测试自定义分区2@4test测试自定义分区34test测试自定义分区4
                                                                                                                  hello[dhapp@conch01 first-1]$ xterm-256colorxterm-256colorxterm-256colorxterm-256colorxterm-256color

(4) Use Kafka's DumpLogSegments tool to view the .index and .log files.

[dhapp@conch01 first-1]$ kafka-run-class.sh kafka.tools.DumpLogSegments --files ./00000000000000000000.index
Dumping ./00000000000000000000.index
offset: 0 position: 0
[dhapp@conch01 first-1]$
[dhapp@conch01 first-1]$ kafka-run-class.sh kafka.tools.DumpLogSegments --files ./00000000000000000000.log
Dumping ./00000000000000000000.log
Starting offset: 0
baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 0 CreateTime: 1649336338617 size: 71 magic: 2 compresscodec: none crc: 1436348543 isvalid: true
baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 71 CreateTime: 1649336375070 size: 71 magic: 2 compresscodec: none crc: 3738535845 isvalid: true
baseOffset: 2 lastOffset: 6 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 142 CreateTime: 1649336727461 size: 121 magic: 2 compresscodec: none crc: 2289643454 isvalid: true
baseOffset: 7 lastOffset: 11 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 263 CreateTime: 1649337327079 size: 181 magic: 2 compresscodec: none crc: 4196928813 isvalid: true
baseOffset: 12 lastOffset: 16 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 444 CreateTime: 1649341668601 size: 271 magic: 2 compresscodec: none crc: 4246228217 isvalid: true
baseOffset: 17 lastOffset: 21 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 715 CreateTime: 1649341836913 size: 276 magic: 2 compresscodec: none crc: 2058921226 isvalid: true
baseOffset: 22 lastOffset: 26 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 991 CreateTime: 1649342568797 size: 226 magic: 2 compresscodec: none crc: 122313590 isvalid: true
baseOffset: 27 lastOffset: 31 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 1217 CreateTime: 1649342899451 size: 169 magic: 2 compresscodec: snappy crc: 2967939924 isvalid: true
baseOffset: 32 lastOffset: 32 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 12 isTransactional: false isControl: false position: 1386 CreateTime: 1649505958878 size: 73 magic: 2 compresscodec: none crc: 3745951668 isvalid: true
[dhapp@conch01 first-1]$

3) The .index and .log files in detail

The .index file is a sparse index: roughly one entry is written for every 4 KB of data appended to the .log file, and each entry stores an offset relative to the segment's base offset together with the physical position in the .log file. To locate a record by offset, Kafka first picks the segment by file name (base offset), then binary-searches the .index for the largest entry not exceeding the target offset, and finally scans the .log forward from that position.

Notes: log storage parameters

• log.segment.bytes: Kafka stores each partition's log in blocks (segments); this parameter sets the segment size. Default 1 GB.
• log.index.interval.bytes: default 4 KB. Every time 4 KB of data has been written to the .log file, an index entry is appended to the .index file (the sparse index described above). A query example follows below.
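
The segment.bytes=1073741824 shown in the describe outputs above is exactly this 1 GB default surfaced at the topic level. A sketch for checking a topic's effective settings (the --all flag, available in recent Kafka versions, lists defaults as well):

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-configs.sh --bootstrap-server conch01:9092 --entity-type topics --entity-name first --describe --all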


9. Log Cleanup Policies

Kafka keeps log data for 7 days by default; the retention period can be changed with the following parameters.

⚫ log.retention.hours: hours, the lowest priority of the three; default 7 days (168 hours).
⚫ log.retention.minutes: minutes.
⚫ log.retention.ms: milliseconds, the highest priority.
⚫ log.retention.check.interval.ms: how often expired logs are checked for; default 5 minutes.

So what happens once a log exceeds the configured retention time?
Kafka offers two cleanup policies: delete and compact.

1) delete: remove expired data (a configuration sketch follows this list)
⚫ log.cleanup.policy = delete enables the delete policy for all data
(1) Time-based: enabled by default. A segment's timestamp is the maximum timestamp among all its records.
(2) Size-based: disabled by default. When the total log size exceeds the limit, the oldest segments are deleted. Controlled by log.retention.bytes; default -1, meaning no limit.
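
A minimal server.properties sketch of the delete policy, with illustrative values:

# server.properties -- illustrative retention settings
log.cleanup.policy=delete
log.retention.hours=168                  # 7 days; lowest-priority time setting
log.retention.bytes=-1                   # -1 = no size-based limit
log.retention.check.interval.ms=300000   # check every 5 minutes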

Question: if part of a segment's data has expired and part has not, how is it handled? (By the time-based rule above, the segment's timestamp is that of its newest record, so the segment is kept until everything in it has expired.)

2) compact: log compaction

With log.cleanup.policy = compact, records that share the same key are compacted so that only the most recent value for each key is retained. Offsets may therefore become discontinuous, and a consumer asking for a now-missing offset simply starts from the next available one. Compaction suits workloads where only the latest snapshot per key matters, for example a topic keyed by user ID that stores user profile data.
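
To switch an existing topic to compaction (a sketch; the topic name is illustrative):

[dhapp@conch01 kafka_2.12-3.0.0]$ bin/kafka-configs.sh --bootstrap-server conch01:9092 --entity-type topics --entity-name first --alter --add-config cleanup.policy=compact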


10. Efficient Reads and Writes

1) Kafka itself is a distributed cluster and uses partitioning, giving high parallelism

        This raises parallelism on both the producer side and the consumer side, and spreads massive amounts of data across the cluster.

2) Reads use a sparse index, which quickly locates the data to be consumed
3) Sequential disk writes

        A Kafka producer writes data to the log file by always appending at the end of the file, i.e. sequentially. Figures published on the official site show that on the same disk, sequential writes can reach about 600 MB/s while random writes manage only about 100 KB/s. This comes down to the disk's mechanical structure: sequential writes are fast because they avoid most of the head-seek time.


4) Page cache + zero-copy

        The key point is that the broker does not process the data itself; processing happens in the producer's and consumer's interceptors and (de)serializers. The broker can therefore serve reads via zero-copy, handing data from the OS page cache directly to the network card without copying it through user space.


• log.flush.interval.messages: the number of messages that forces the page cache to be flushed to disk; default is the maximum long value, 9223372036854775807. Changing it is generally not recommended; leave flushing to the OS.
• log.flush.interval.ms: the interval at which data is flushed to disk; default null. Changing it is generally not recommended; leave flushing to the OS.