Contents

Using Kafka

Kafka event listeners

Kafka internals

Topology

Consumer structure


Using Kafka

We use Kafka with Spring Boot.

Add spring-kafka to your pom. Pay attention to your Spring Boot version: different Spring Boot versions require different Spring Kafka versions. From the official site:

  • Spring Boot 1.5 (EOL) users should use 1.3.x (Boot dependency management will use 1.1.x by default so this should be overridden).
  • Spring Boot 2.1 (EOL) users should use 2.2.x (Boot dependency management will use the correct version).
  • Spring Boot 2.2 users should use 2.3.x (Boot dependency management will use the correct version) or override the version to 2.4.x.
  • Spring Boot 2.3 users should use 2.5.x (Boot dependency management will use the correct version).
  • Spring Boot 2.4 users should use 2.6.x (Boot dependency management will use the correct version)

I am using Spring Boot 2.3.5.RELEASE, so Spring Kafka 2.5.1:

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>2.5.1.RELEASE</version>
</dependency>

application.yml configuration:

spring:
  application:
    name: xxx
  profiles:
    active: local
  kafka:
    consumer:
      group-id: ${spring.application.name}
      enable-auto-commit: false
      auto-offset-reset: latest
    producer:
      client-id: ${spring.application.name}
      retries: 3
    bootstrap-servers: localhost:9092

Producer:

First, create the topic.

On the Kafka server, run:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test

This command creates a topic named test with 3 partitions and a replication factor of 3 (3 replicas per partition).
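With spring-kafka you can also declare the topic in code: Spring Boot's auto-configured KafkaAdmin creates any NewTopic beans on the broker at startup. A minimal sketch of the same test topic, using spring-kafka's TopicBuilder (available since 2.3):

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // The auto-configured KafkaAdmin (see KafkaAutoConfiguration below)
    // creates this topic at startup if it does not exist yet.
    @Bean
    public NewTopic testTopic() {
        return TopicBuilder.name("test")
                .partitions(3)
                .replicas(3)
                .build();
    }
}
```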

@Autowired
KafkaTemplate<String, String> kafkaTemplate;

private void pushKafka(Long aaa, String bbb) throws Exception {
    try {
        Map<String, Object> paramMap = new HashMap<>();
        paramMap.put("aaa", aaa);
        paramMap.put("bbb", bbb);
        // key: a random UUID; value: the map serialized as JSON
        kafkaTemplate.send("topicName",
                UUID.randomUUID().toString().replace("-", ""),
                JSONObject.toJSONString(paramMap));
    } catch (Exception e) {
        throw new Exception("failed to send Kafka message", e);
    }
}

With this, the message built from the map is sent.
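Note that send() is asynchronous and returns immediately. If you want to react per record (rather than, or in addition to, a global ProducerListener as shown later), you can attach a callback to the returned future. A sketch against spring-kafka 2.5, where send() returns a Spring ListenableFuture; the topic name "topicName" is the placeholder from above:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

public class CallbackSender {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public CallbackSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String key, String json) {
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send("topicName", key, json);
        // The success callback runs once the broker acknowledges the record;
        // the failure callback runs after all producer retries are exhausted.
        future.addCallback(
                result -> System.out.println("sent to partition "
                        + result.getRecordMetadata().partition()),
                ex -> System.err.println("send failed: " + ex.getMessage()));
    }
}
```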

Consumer:

Create a Listener that subscribes to a topic:

@Component
public class xxxListener {
    protected Logger logger = LogManager.getLogger(this.getClass());

    @KafkaListener(topics = {"topicName"}, clientIdPrefix = "xxxListener")
    public void capitalHandler(@Payload String value, @Header(KafkaHeaders.OFFSET) int offset,
                               @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                               @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                               @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        logger.info("xxxListener kafkaConsume: key:{} value:{} topic:{} partition:{} offset:{}", key, value, topic, partition, offset);
        try {
            JSONObject jsonObject = JSON.parseObject(value);
            // read the fields back with the types the producer wrote
            Long aaa = jsonObject.getLong("aaa");
            String bbb = jsonObject.getString("bbb");
                
        } catch (Exception e) {
            logger.error("listener error,value is {}", value, e);
        }
    }

}

With this, the sent message is received and the business logic runs.
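The yml above sets enable-auto-commit: false, so the Spring listener container commits offsets for you (BATCH mode by default). If you want to commit only after your business logic succeeds, you can switch to manual acknowledgment; a sketch, assuming spring.kafka.listener.ack-mode: manual_immediate is set in application.yml:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckListener {

    // Requires spring.kafka.listener.ack-mode: manual_immediate;
    // with enable-auto-commit: false the container owns offset commits.
    @KafkaListener(topics = "topicName", clientIdPrefix = "manualAck")
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        try {
            // ... business logic on record.value() ...
            ack.acknowledge();  // commit the offset only after processing succeeds
        } catch (Exception e) {
            // not acknowledged: the record will be redelivered
            // after a rebalance or restart
        }
    }
}
```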

Kafka event listeners

To observe produce events, implement the ProducerListener interface (on spring-kafka 2.x the per-field overloads below are deprecated default methods, and isInterestedInSuccess() from 1.x no longer exists):

public class LoggingProducerListener<K, V> implements ProducerListener<K, V> {
    protected Logger logger = LogManager.getLogger(getClass());

    @Override
    public void onSuccess(String topic, Integer partition, K key, V value,
                          RecordMetadata recordMetadata) {
        logger.info("kafkaSendSuccess: topic:{} p:{} key:{} value:{}", topic, recordMetadata.partition(), key, value);
    }

    @Override
    public void onError(String topic, Integer partition, K key, V value, Exception exception) {
        String tmp = String.format("kafkaSendError: topic:%s p:%s key:%s value:%s", topic, partition, key, value);
        logger.error(tmp, exception);
    }
}

This lets you run business logic whenever sending a Kafka message succeeds or fails.
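Because the auto-configuration registers its default listener with @ConditionalOnMissingBean (see KafkaAutoConfiguration below), exposing your own ProducerListener bean is enough for the auto-configured KafkaTemplate to pick it up. A sketch, reusing the LoggingProducerListener class defined above:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.ProducerListener;

@Configuration
public class KafkaListenerConfig {

    // Replaces the default listener: Spring Boot only creates its own
    // ProducerListener bean when the application defines none.
    @Bean
    public ProducerListener<Object, Object> kafkaProducerListener() {
        return new LoggingProducerListener<>();
    }
}
```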

Kafka internals

Topology


A topic can have one or more partitions.


Consumer structure


Messages in a topic are delivered to every subscribing Consumer Group, but within a group each partition is consumed by only one consumer, while one consumer may consume multiple partitions.

A typical Kafka cluster contains several Producers (page views generated by a web front end, server logs, system CPU and memory metrics, and so on), several brokers (Kafka scales horizontally; more brokers generally means higher cluster throughput), several Consumer Groups, and a Zookeeper cluster. Kafka uses Zookeeper to manage cluster configuration, elect leaders, and rebalance when a Consumer Group changes. Producers publish messages to brokers using a push model; Consumers subscribe to and consume messages from brokers using a pull model.

Topic & Partition

  Logically, a Topic can be thought of as a queue: every message must specify its Topic, i.e. which queue the message goes into. To let Kafka's throughput scale linearly, a Topic is physically split into one or more Partitions; each Partition corresponds to a directory on disk that stores all of that Partition's messages and index files.
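Which partition a record lands in is decided by the producer: by default Kafka hashes the record key (murmur2) modulo the partition count, so records with the same key always go to the same partition. A simplified sketch of that idea, using String.hashCode() in place of murmur2:

```java
public class SimplePartitioner {

    /** Simplified key-to-partition mapping (Kafka itself uses murmur2). */
    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Records with the same key always map to the same partition.
        System.out.println(
                partitionFor("order-42", 3) == partitionFor("order-42", 3));  // true
    }
}
```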


Spring Boot support for Kafka

Spring provides the KafkaAutoConfiguration class; we only need to set the required parameters in the yml, and Spring handles everything else for us:

@Configuration
@ConditionalOnClass(KafkaTemplate.class)
@EnableConfigurationProperties(KafkaProperties.class)
@Import(KafkaAnnotationDrivenConfiguration.class)
public class KafkaAutoConfiguration {

   private final KafkaProperties properties;

   private final RecordMessageConverter messageConverter;

   public KafkaAutoConfiguration(KafkaProperties properties,
         ObjectProvider<RecordMessageConverter> messageConverter) {
      this.properties = properties;
      this.messageConverter = messageConverter.getIfUnique();
   }

   @Bean
   @ConditionalOnMissingBean(KafkaTemplate.class)
   public KafkaTemplate<?, ?> kafkaTemplate(
         ProducerFactory<Object, Object> kafkaProducerFactory,
         ProducerListener<Object, Object> kafkaProducerListener) {
      KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(
            kafkaProducerFactory);
      if (this.messageConverter != null) {
         kafkaTemplate.setMessageConverter(this.messageConverter);
      }
      kafkaTemplate.setProducerListener(kafkaProducerListener);
      kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
      return kafkaTemplate;
   }

   @Bean
   @ConditionalOnMissingBean(ProducerListener.class)
   public ProducerListener<Object, Object> kafkaProducerListener() {
      return new LoggingProducerListener<>();
   }

   @Bean
   @ConditionalOnMissingBean(ConsumerFactory.class)
   public ConsumerFactory<?, ?> kafkaConsumerFactory() {
      return new DefaultKafkaConsumerFactory<>(
            this.properties.buildConsumerProperties());
   }

   @Bean
   @ConditionalOnMissingBean(ProducerFactory.class)
   public ProducerFactory<?, ?> kafkaProducerFactory() {
      DefaultKafkaProducerFactory<?, ?> factory = new DefaultKafkaProducerFactory<>(
            this.properties.buildProducerProperties());
      String transactionIdPrefix = this.properties.getProducer()
            .getTransactionIdPrefix();
      if (transactionIdPrefix != null) {
         factory.setTransactionIdPrefix(transactionIdPrefix);
      }
      return factory;
   }

   @Bean
   @ConditionalOnProperty(name = "spring.kafka.producer.transaction-id-prefix")
   @ConditionalOnMissingBean
   public KafkaTransactionManager<?, ?> kafkaTransactionManager(
         ProducerFactory<?, ?> producerFactory) {
      return new KafkaTransactionManager<>(producerFactory);
   }

   @Bean
   @ConditionalOnProperty(name = "spring.kafka.jaas.enabled")
   @ConditionalOnMissingBean
   public KafkaJaasLoginModuleInitializer kafkaJaasInitializer() throws IOException {
      KafkaJaasLoginModuleInitializer jaas = new KafkaJaasLoginModuleInitializer();
      Jaas jaasProperties = this.properties.getJaas();
      if (jaasProperties.getControlFlag() != null) {
         jaas.setControlFlag(jaasProperties.getControlFlag());
      }
      if (jaasProperties.getLoginModule() != null) {
         jaas.setLoginModule(jaasProperties.getLoginModule());
      }
      jaas.setOptions(jaasProperties.getOptions());
      return jaas;
   }

   @Bean
   @ConditionalOnMissingBean
   public KafkaAdmin kafkaAdmin() {
      KafkaAdmin kafkaAdmin = new KafkaAdmin(this.properties.buildAdminProperties());
      kafkaAdmin.setFatalIfBrokerNotAvailable(this.properties.getAdmin().isFailFast());
      return kafkaAdmin;
   }

}

Note some of the annotations in the auto-configuration class:

@ConditionalOnMissingBean(KafkaTemplate.class)

The bean is only built when no such bean already exists; that is, if we initialize it in our own configuration class, the auto-configuration class will not create it again.
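For example, defining your own KafkaTemplate bean disables the auto-configured one. A minimal sketch (the default topic "topicName" here is a hypothetical placeholder):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaTemplateConfig {

    // Because KafkaAutoConfiguration#kafkaTemplate is annotated with
    // @ConditionalOnMissingBean(KafkaTemplate.class), this bean wins and
    // the auto-configured template is never created.
    @Bean
    public KafkaTemplate<Object, Object> kafkaTemplate(
            ProducerFactory<Object, Object> producerFactory) {
        KafkaTemplate<Object, Object> template = new KafkaTemplate<>(producerFactory);
        template.setDefaultTopic("topicName");  // hypothetical default topic
        return template;
    }
}
```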