This article covers the following topics:
(1) How to configure SASL_PLAINTEXT authentication on a Kafka cluster (note: SASL_PLAINTEXT authenticates clients but does not encrypt the traffic);
(2) How to configure authentication for the console producer and consumer;
(3) How to configure authentication for metricbeat (a producer);
(4) How to configure authentication for logstash (a consumer);
(5) How to configure authentication in Kafka-Tools;
(6) How to consume data from the SASL-secured Kafka cluster from a different Kafka cluster;
(7) After the new Kafka cluster is configured to authenticate against the secured cluster, does this affect the new cluster itself?
1. Enable SASL by adding the following to server.properties on each of the three Kafka nodes:
[root@kafka1 ~]# cat /usr/local/kafka/config/server.properties
listeners=SASL_PLAINTEXT://192.168.1.8:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
If advertised.listeners is set in a broker's config file, prefix its address with SASL_PLAINTEXT as well, for example:
listeners=SASL_PLAINTEXT://192.168.1.8:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.8:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
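Each broker advertises its own address, so the listener line differs per node. For reference (the node IPs below are assumed from the metricbeat hosts list used later):
# on kafka2 (assumed IP):
listeners=SASL_PLAINTEXT://192.168.1.9:9092
# on kafka3 (assumed IP):
listeners=SASL_PLAINTEXT://192.168.1.10:9092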
2. Create the server-side JAAS file. Do not add anything extraneous to this file, including comments, or Kafka will fail to start. The last two lines must end with semicolons:
[root@kafka1 ~]# cat /usr/local/kafka/config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret"
user_kafka="kafka#secret"
user_alice="alice#secret";
};
Explanation: the file is named kafka_server_jaas.conf and lives in /usr/local/kafka/config.
Lines prefixed with user_ define users for client programs (producers and consumers) to authenticate as; you can define as many as needed.
The example above defines two users, kafka and alice; the value after the equals sign is that user's password (e.g. user_kafka defines a user named kafka with the password kafka#secret).
Later you can also define per-user ACLs, as sketched below.
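A minimal sketch of what per-user ACLs could look like. This assumes the ZooKeeper-based SimpleAclAuthorizer shipped with Kafka releases of this era; the authorizer class name and the zookeeper.connect address are assumptions, not part of the setup above, and vary by Kafka version:
# server.properties: enable the authorizer, keep the broker user as superuser
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
# grant alice read/write on topic test
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.1.8:2181 --add --allow-principal User:alice --operation Read --operation Write --topic test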
3. Create the client JAAS file, used later by the console producer and consumer. Again, do not add anything extraneous, including comments:
[root@kafka1 ~]# cat /usr/local/kafka/config/kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret";
};
4. Add the JAAS file path to kafka-server-start.sh. Kafka loads this file at startup; any application that wants to write to Kafka must then present a matching username and password:
[root@kafka1 ~]# cat /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
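For reference, a likely excerpt of the edited script; the exact contents vary by Kafka version, but the export just needs to run before the final exec line:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"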
5. Add the client JAAS file path to kafka-console-producer.sh, for the producer test later:
[root@kafka1 ~]# cat /usr/local/kafka/bin/kafka-console-producer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
6. Add the client JAAS file path to kafka-console-consumer.sh, for the consumer test later:
[root@kafka1 ~]# cat /usr/local/kafka/bin/kafka-console-consumer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
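If you would rather not edit the launcher scripts, the same JAAS setting can be passed per invocation through the environment, since the standard Kafka scripts honor KAFKA_OPTS:
KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf" kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN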
7. Edit /usr/local/kafka/config/producer.properties and append the following two lines:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
8. Edit /usr/local/kafka/config/consumer.properties, appending the same two lines as for the producer:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
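Alternatively, on Kafka versions that support the sasl.jaas.config client property (0.10.2 and later), the credentials can be placed straight into producer.properties/consumer.properties, removing the need for the separate client JAAS file and the script edits above:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka#secret";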
9. Start Kafka and watch logs/server.log for errors:
bin/kafka-server-start.sh -daemon config/server.properties
If Kafka fails to start, the cause is almost always one of the files above: a wrong JAAS file path, missing semicolons on the last two lines of a JAAS file, or stray comments in it.
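Two quick sanity checks after startup (paths as configured above):
grep -iE "error|exception" /usr/local/kafka/logs/server.log | tail
ss -lntp | grep 9092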
10. Start a producer:
kafka-console-producer.sh --broker-list 192.168.1.8:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
11. Start a consumer:
kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
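As a counter-test, running the same consumer without the two SASL properties should fail to authenticate; expect disconnect or timeout errors rather than messages:
kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --from-beginning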
12. Configure Kafka authentication in metricbeat:
output.kafka:
hosts: ["192.168.1.8:9092", "192.168.1.9:9092", "192.168.1.10:9092"]
topic: 'metricbeat_95598'
username: kafka
password: kafka#secret
After starting the metricbeat service, its log should contain lines like the following, showing a successful SASL handshake and connection to Kafka:
2020-08-12T15:49:47.032+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.034+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.035+0800 INFO kafka/log.go:53 kafka message: Successful SASL handshake. Available mechanisms: %!(EXTRA []string=[PLAIN])
2020-08-12T15:49:47.040+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.8:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.040+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.8:9092 (registered as #0)
2020-08-12T15:49:47.041+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.9:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.041+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.9:9092 (registered as #1)
2020-08-12T15:49:47.049+0800 INFO kafka/log.go:53 SASL authentication successful with broker 192.168.1.10:9092:4 - [0 0 0 0]
2020-08-12T15:49:47.049+0800 INFO kafka/log.go:53 Connected to broker at 192.168.1.10:9092 (registered as #2)
Now start another console consumer and the messages produced by metricbeat are printed to the screen.
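Recent Beats versions also ship a built-in connectivity check that can be run from the metricbeat host before starting the service:
metricbeat test output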
13. Configure the logstash input, adding sasl_jaas_config, security_protocol and sasl_mechanism:
input {
kafka {
bootstrap_servers => "192.168.1.8:9092,192.168.1.9:9092,192.168.1.10:9092"
topics => ["metricbeat_95598"]
sasl_jaas_config => "org.apache.kafka.common.security.plain.PlainLoginModule required username='kafka' password='kafka#secret';"
security_protocol => "SASL_PLAINTEXT"
sasl_mechanism => "PLAIN"
group_id => "logstash_product"
consumer_threads => 3
codec => "json"
type => "metricbeat_95598"
client_id => "metricbeat_95598"
}
}
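For a quick end-to-end test of this input before wiring up the real outputs, a throwaway stdout output can be added to the same pipeline:
output {
  stdout { codec => rubydebug }
}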
14. Connect the Kafka-Tools GUI to the SASL-secured Kafka cluster:
(1) On the Properties tab, enter the cluster name;
(2) On the Security tab, set Type to SASL Plaintext;
(3) On the Advanced tab, set Bootstrap servers to 192.168.1.8:9092,192.168.1.9:9092,192.168.1.10:9092 (the Kafka nodes' IPs and ports) and SASL Mechanism to PLAIN;
(4) In JAAS Config, enter the PlainLoginModule line with the client credentials (the same content as kafka_client_jaas.conf, without the outer KafkaClient wrapper):
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret";
When done, click Test to verify the connection.
15. To consume from the SASL_PLAINTEXT-secured cluster from another Kafka cluster, add the following configuration (no restart of that cluster is needed):
(1) Add the KafkaClient JAAS file:
[root@localhost ~]# cat /usr/local/kafka/config/kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="kafka#secret";
};
(2) Add the JAAS file path to kafka-console-consumer.sh:
[root@localhost ~]# cat /usr/local/kafka/bin/kafka-console-consumer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
(3) Consumption test:
First start a producer on the secured cluster and send a few messages:
[root@kafka2 kafka]# kafka-console-producer.sh --broker-list 192.168.1.8:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
>11111
>222222222
>3333333333.
>444
Then start a consumer on the new cluster and read those messages:
[root@localhost kafka]# kafka-console-consumer.sh --bootstrap-server 192.168.1.8:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
11111
222222222
3333333333.
444
This shows that another Kafka cluster can successfully consume from the SASL_PLAINTEXT-secured cluster simply by configuring KafkaClient and kafka-console-consumer.sh.
16. In step 15 we added the KafkaClient and kafka-console-consumer.sh settings to the new Kafka cluster. Does that affect producing and consuming within the new cluster itself? Testing shows it does not; the only difference is that the security.protocol and sasl.mechanism properties are not needed when running the commands:
(1) Consume messages within the local cluster:
[root@localhost kafka]# kafka-console-consumer.sh --bootstrap-server 192.168.1.52:9092 --topic test --from-beginning
1111111
(2) Produce messages within the local cluster:
[root@localhost ~]# kafka-console-producer.sh --broker-list 192.168.1.52:9092 --topic test
>1111111
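Why this is safe: the JAAS file referenced via KAFKA_OPTS is only consulted when a client actually negotiates SASL. The new cluster's own listener is still plain, so its local clients default to PLAINTEXT and simply ignore the JAAS file (the listener line below is an assumption based on the 192.168.1.52 address used above):
listeners=PLAINTEXT://192.168.1.52:9092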