Contents

1. Cluster planning

2. Download the Kafka installation package

3. Install Kafka

4. Create a symbolic link

5. Add Kafka to the environment variables

6. Edit the Kafka configuration file server.properties

7. Create the kafka-logs folder under the installation directory (in this example, the installation directory is /usr/local/kafka)

8. Copy the configured Kafka installation to the other nodes and create symbolic links

9. Edit server.properties on the other nodes

10. Start Kafka on node01, node02, and node03

11. Log directory

12. Kafka producer and consumer in Java


The steps are laid out in detail and screenshots are omitted, so please read carefully and patiently!

1. Cluster planning

Kafka will be installed on three machines: node01 (192.168.183.150), node02 (192.168.183.151), and node03 (192.168.183.152).
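Later steps copy files to node02 and node03 by hostname (scp ... root@node02:...), so each node should be able to resolve these hostnames. If DNS is not set up, a sketch of the /etc/hosts entries to add on every node (assuming the addresses above):

192.168.183.150 node01
192.168.183.151 node02
192.168.183.152 node03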

2. Download the Kafka installation package

http://kafka.apache.org/downloads

Choose the appropriate version to download; this example uses kafka_2.11-2.2.1.tgz.

3. Install Kafka

Upload the package to one of the machines (here node01) and extract it into the installation directory (/bigdata).

In the /bigdata directory, upload the package with the rz command.

Extract it: tar -zxvf kafka_2.11-2.2.1.tgz

4. Create a symbolic link

ln -s /bigdata/kafka_2.11-2.2.1 /usr/local/kafka

5. Add Kafka to the environment variables

vim /etc/profile

Append the following:

export KAFKA_HOME=/usr/local/kafka 
export PATH=$PATH:${KAFKA_HOME}/bin

Reload the environment variables:

source /etc/profile
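A quick check that the variable took effect:

echo $KAFKA_HOME    # should print /usr/local/kafka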

6. Edit the Kafka configuration file server.properties

vim /usr/local/kafka/config/server.properties

Modify or add the following settings:

############################# Server Basics ############################# 

# Each broker id must be unique; every broker in the cluster needs a different id

broker.id=0 

# Host address
host.name=192.168.183.150 

# Port
port=9092 

# Allow topic deletion
delete.topic.enable=true

############################# Log Basics ############################# 

# Data storage path; the default is under /tmp and should be changed

log.dirs=/usr/local/kafka/kafka-logs 

# Default number of partitions for newly created topics
num.partitions=1 

############################# Log Retention Policy ############################# 

# Data retention time in hours; the default is 7 days (168 hours)
log.retention.hours=168 

############################# Zookeeper ############################# 

# ZooKeeper connection string; separate multiple addresses with commas
zookeeper.connect=192.168.183.150:2181,192.168.183.151:2181,192.168.183.152:2181 

# Timeout in ms for connecting to zookeeper 

zookeeper.connection.timeout.ms=6000
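Note: host.name and port still work on this Kafka version but are deprecated; on newer brokers the equivalent setting is listeners. As a sketch (not part of the original configuration), the line for node01 would be:

listeners=PLAINTEXT://192.168.183.150:9092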

7. Create the kafka-logs folder under the installation directory (in this example, the installation directory is /usr/local/kafka)

mkdir /usr/local/kafka/kafka-logs

8. Copy the configured Kafka installation to the other nodes and create symbolic links

scp -r /bigdata/kafka_2.11-2.2.1 root@node02:/bigdata/ 
scp -r /bigdata/kafka_2.11-2.2.1 root@node03:/bigdata/

On node02 and node03, create the symbolic link:

ln -s /bigdata/kafka_2.11-2.2.1 /usr/local/kafka

9. Edit server.properties on the other nodes

9.1 Changes on node02:

vim /usr/local/kafka/config/server.properties

Change the following:

broker.id=1 
host.name=192.168.183.151

9.2 Changes on node03:

vim /usr/local/kafka/config/server.properties

Change the following:

broker.id=2 
host.name=192.168.183.152

10. Start Kafka on node01, node02, and node03

cd /usr/local/kafka

Start with the -daemon option so that Kafka runs as a background daemon:

bin/kafka-server-start.sh -daemon config/server.properties
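To check that the cluster came up, you can look for the broker process and run a quick topic test from any node. The topic name cluster-check below is just an illustration; the --zookeeper form of kafka-topics.sh is still supported in this Kafka version:

jps    # a process named "Kafka" should appear on each node
kafka-topics.sh --create --zookeeper 192.168.183.150:2181,192.168.183.151:2181,192.168.183.152:2181 --replication-factor 3 --partitions 3 --topic cluster-check
kafka-topics.sh --list --zookeeper 192.168.183.150:2181,192.168.183.151:2181,192.168.183.152:2181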

11. Log directory

By default, logs are written to the logs folder created under the Kafka installation path.
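For example, to follow the broker log on a node (server.log is the standard broker log file):

tail -f /usr/local/kafka/logs/server.log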

12. Kafka producer and consumer in Java (auto commit, manual commit, querying messages by timestamp)

The auto-commit case is used as the example below. (The other examples have been uploaded to GitHub; feel free to download them if you are interested.)

12.1 Producer: KafkaProducer.java

package com.hs.net.kafka;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import java.util.Properties;
import java.util.concurrent.Executors;

/**
 * @author Higmin
 * @date 2019/11/26 8:25
 *  Kafka producer
 **/
public class KafkaProducer {

	private static final Logger logger = LoggerFactory.getLogger(KafkaProducer.class);
	private Properties props;
	private org.apache.kafka.clients.producer.KafkaProducer<String, String> producer;
	public final static String TOPIC = "TEST-TOPIC";

	/**
	 * Initialization
	 */
	@PostConstruct
	public void init() {
		props = new Properties();
		/**
		 * Used for bootstrapping: the producer only uses these addresses to obtain metadata (topics, partitions, replicas);
		 * the sockets actually used to send messages are determined from the returned metadata.
		 */
		props.put("bootstrap.servers", "192.168.183.150:9092,192.168.183.151:9092,192.168.183.152:9092"); // Kafka broker addresses
		props.put("transactional.id", "my-transactional-id"); // transactional id
		// A transactional producer requires acknowledgement from all in-sync replicas ("acks=all").
		// Note: the legacy property name "request.required.acks" is not recognized by the Java producer; the current name is "acks".
		props.put("acks", "all");
		// The two type parameters are the record key type and the record value type
		producer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props, new StringSerializer(), new StringSerializer());
		producer.initTransactions(); // initialize transactions
	}

	/**
	 * Produce a message and send it to the Kafka topic.
	 * key: the record key, which is also used as the partition key
	 * msg: the message to send
	 */
	public void produceMsg(String key, String msg) throws Exception{
		try {
			producer.beginTransaction(); // begin the transaction
			// send the record
			producer.send(new ProducerRecord<>(TOPIC, key, msg));
			producer.commitTransaction(); // commit the transaction
			logger.info(msg + " sent successfully!");
		} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
			// We can't recover from these exceptions, so our only option is to close the producer and exit.
			producer.close();
		} catch (KafkaException e) {
			// For all other exceptions, just abort the transaction and try again.
			producer.abortTransaction(); // abort the transaction
		}
	}

	/**
	 * Clean up
	 */
	@PreDestroy
	public void destroy() {
		producer.close();
	}

	public static void main(String[] args) {
		System.setProperty("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.SimpleLog");
		System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "false");
		System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.commons.httpclient", "stdout");
		logger.info("开始发送消息 ...");
		KafkaProducer producer = new KafkaProducer();
		Executors.newSingleThreadExecutor().execute(new Runnable() {
			public void run() {
				producer.init();
				while (true) {
					try {
						long timestamp = System.currentTimeMillis();
						producer.produceMsg("test", "( message " + timestamp + " )");
						Thread.sleep(2000);
					} catch (Throwable e) {
						try {
							producer.destroy();
						} catch (Throwable e1) {
							System.out.println("Turn off Kafka producer error! " + e1);
						}
						break; // the producer is closed, so stop the send loop
					}
				}

			}
		});
	}
}
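To watch the messages produced by this class arriving, the console consumer that ships with Kafka can be pointed at the same topic (TEST-TOPIC is the topic used in the code above):

kafka-console-consumer.sh --bootstrap-server 192.168.183.150:9092 --topic TEST-TOPIC --from-beginning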

12.2 Consumer: KafkaConsumer.java

package com.hs.net.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.Executors;

/**
 * @author Higmin
 * @date 2019/11/26 8:26
 * Kafka consumer
 **/
public class KafkaConsumer {

	private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);
	private Properties props;
	private org.apache.kafka.clients.consumer.KafkaConsumer<String, String> consumer;
	public final static String TOPIC = "TEST-TOPIC";

	/**
	 * Initialization
	 */
	@PostConstruct
	public void init(){
		props = new Properties();
		props.put("bootstrap.servers", "192.168.183.150:9092,192.168.183.151:9092,192.168.183.152:9092"); // kafka 地址
		props.put("group.id", "test"); // 设置消费组
		props.put("enable.auto.commit", "true"); //开启自动提交
		props.put("auto.commit.interval.ms", "1000"); // 自动提交时间间隔
		props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); // key反序列化
		props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); // value反序列化
		consumer = new org.apache.kafka.clients.consumer.KafkaConsumer<String, String>(props);
	}

	/**
	 * Consume messages with automatic offset commits.
	 * There are two ways to handle offsets:
	 * 1. Automatic offset commit: this still gives at-least-once delivery, provided all data returned by each poll(Duration) call is processed before the next poll or before the consumer is closed.
	 * 2. Manual offset control: the advantage is direct control over when records are considered "consumed".
	 */
	public void consumeMsg() throws Exception{
		// multiple topics can be subscribed to by listing them all in Arrays.asList
		consumer.subscribe(Arrays.asList(TOPIC));
		while (true) {
			ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
			for (ConsumerRecord<String, String> record : records) {
				long offset = record.offset();
				String key = record.key();
				String msg = record.value();
				logger.info("消费成功! offset: " + offset + "  key: " + key + "  value: " +  msg);
			}
		}
	}

	/**
	 * Clean up
	 */
	@PreDestroy
	public void destroy(){
		consumer.close();
	}

	public static void main(String[] args) {
		KafkaConsumer consumer = new KafkaConsumer();
		logger.info("开始消费消息...");
		Executors.newSingleThreadExecutor().execute(new Runnable() {
			@Override
			public void run() {
				consumer.init();
				while (true) {
					try {
						consumer.consumeMsg();
					} catch (Exception e) {
						try {
							consumer.destroy();
						} catch (Throwable e1) {
							logger.error("Turn off Kafka consumer error! " + e1);
						}
						break; // the consumer is closed, so leave the poll loop
					}
				}
			}
		});
	}
}
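The manual-commit variant mentioned in the section title is not shown above; it mainly differs in setting enable.auto.commit to false and committing offsets explicitly after the records have been processed. A minimal sketch of how consumeMsg() could look in that case (same class, fields, and imports as above; the method name is illustrative):

	public void consumeMsgManualCommit() {
		consumer.subscribe(Arrays.asList(TOPIC));
		while (true) {
			ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
			for (ConsumerRecord<String, String> record : records) {
				// process the record before its offset is committed
				logger.info("Consumed: offset=" + record.offset() + " key=" + record.key() + " value=" + record.value());
			}
			consumer.commitSync(); // commit offsets only after the whole batch has been processed
		}
	}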

12.3 The Kafka dependency used

<dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.12</artifactId>
      <version>${kafka_2.12.version}</version>
 </dependency>
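The examples above only use the client classes, so the lighter kafka-clients artifact is also enough if you do not need the full broker on the classpath (a sketch, with the version written out instead of a property):

<dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.2.1</version>
 </dependency>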

The code has been uploaded to GitHub, where there are more examples ==>

Spring AOP, deadlocks, synchronized locks, read-write locks, BIO, NIO, AIO, Netty server and client, ThreadLocal usage, data structures (to be completed), the 23 design patterns (to be continued...), generating XML files, MyBatis reverse engineering, interface concurrency testing, Kafka producer/consumer examples (continuously updated...), and more: https://github.com/higminteam/practice/blob/master/src/main/java/com/practice