1. Download the package and extract it
kafka_2.11-2.1.0.tgz — detailed walkthrough: https://blog.csdn.net/lingbo229/article/details/80761778
2. Configure the Kafka cluster
cd /usr/local/kafka_2.11-2.1.0/config

Edit server.properties:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0    # Unique ID identifying this Kafka broker

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:     listeners = listener_name://host_name:port
#   EXAMPLE:    listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092    # Address and port Kafka listens on

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs    # Log directory; this is also where Kafka persists message data

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1    # Default number of partitions per topic

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

auto.create.topics.enable=true    # Auto-create topics, so messages are not lost to a missing topic
delete.topic.enable=true          # Allow topics to be fully deleted
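The file above is for broker 0; the other two nodes in this three-node cluster need the same file with a per-broker `broker.id`. A minimal sketch of the lines that would differ on the second broker — assuming it runs on 10.10.23.40 and advertises its own address via `listeners` (the exact listener setup depends on your network):

```properties
# server.properties on the second broker (10.10.23.40) -- only the deltas
broker.id=1                               # must be unique per broker in the cluster
listeners=PLAINTEXT://10.10.23.40:9092    # this broker's own listen address (assumption)
# log.dirs and zookeeper.connect stay identical across brokers
```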
3. Start Kafka
./kafka-server-start.sh ../config/server.properties

or, to keep it running in the background:

nohup ./kafka-server-start.sh ../config/server.properties &
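After starting a broker, it is worth a quick sanity check that it is actually listening on its port before moving on. A minimal sketch using only the Python standard library (the host/port values are this cluster's; `broker_is_up` is my own helper name, not a Kafka API):

```python
import socket

def broker_is_up(host: str, port: int = 9092, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener answers on host:port (i.e. the broker is up)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, or unresolvable host
        return False

# Example: check the first broker of this cluster
# broker_is_up("10.10.23.39")  # True once kafka-server-start.sh has come up
```

This only proves the socket accepts connections, not that the broker has joined the cluster; for that, use the CLI tools in the next section.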
4. Common Kafka operations
Commands to view, create, modify, and delete topics:

./kafka-topics.sh --list --zookeeper 10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181    # List topics

./kafka-topics.sh --create --zookeeper 10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181 \
    --replication-factor 1 --partitions 3 --topic mytopic    # Create a topic: replication factor 1, 3 partitions

./kafka-topics.sh --describe --zookeeper 10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181 \
    --topic mytopic    # Show topic attributes

./kafka-console-producer.sh --broker-list 10.10.23.39:9092,10.10.23.40:9092,10.10.23.41:9092 \
    --topic mytopic    # Produce messages to the given topic

./kafka-console-consumer.sh --bootstrap-server 10.10.23.39:9092,10.10.23.40:9092,10.10.23.41:9092 \
    --from-beginning --topic test01    # Consume messages from the given topic

./kafka-topics.sh --delete --zookeeper 10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181 \
    --topic osmessage    # Delete a topic
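The commands above repeat the same host:port lists, which invites typos. A small helper can assemble those connection strings and the `--create` invocation; this is a sketch with my own function names, reproducing the flags used above:

```python
# ZooKeeper hosts of this cluster (from the configuration above)
ZK_HOSTS = ["10.10.23.39", "10.10.23.40", "10.10.23.41"]

def connect_string(hosts, port):
    """Comma-separated host:port list as the Kafka CLI tools expect."""
    return ",".join(f"{host}:{port}" for host in hosts)

def create_topic_cmd(topic, partitions, replication_factor):
    """Assemble the kafka-topics.sh --create command line used above."""
    zk = connect_string(ZK_HOSTS, 2181)
    return (f"./kafka-topics.sh --create --zookeeper {zk} "
            f"--replication-factor {replication_factor} "
            f"--partitions {partitions} --topic {topic}")

# print(create_topic_cmd("mytopic", partitions=3, replication_factor=1))
```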
5. Shipping logs across network segments
Add hostname-to-IP mappings on the Kafka & ZooKeeper hosts:

[root@localhost bin]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.23.39 kafka01.zk.cn
10.10.23.40 kafka02.zk.cn
10.10.23.41 kafka03.zk.cn

Update the ZooKeeper configuration to use the hostnames:

# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=kafka01.zk.cn:2888:3888
server.2=kafka02.zk.cn:2888:3888
server.3=kafka03.zk.cn:2888:3888

Update the Kafka configuration (server.properties) likewise:

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=10.10.23.39:2181,10.10.23.40:2181,10.10.23.41:2181
zookeeper.connect=kafka01.zk.cn:2181,kafka02.zk.cn:2181,kafka03.zk.cn:2181

Update the hosts file on the Filebeat host (same hostnames, pointed at the public IPs):

[root@localhost local]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
36.7.152.1xx kafka01.zk.cn
36.7.152.1xx kafka02.zk.cn
36.7.152.1xx kafka03.zk.cn

Update the hosts file on the Logstash host:

[root@localhost local]# cat /etc/hosts
127.0.0.1   localhost shop-blog.server localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.23.39 kafka01.zk.cn
10.10.23.40 kafka02.zk.cn
10.10.23.41 kafka03.zk.cn
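The trick above works because each side resolves the same hostnames to its own reachable view of the cluster (internal IPs inside, public IPs outside). A small stdlib-only check to confirm the local resolver (including /etc/hosts) maps a name as expected — `resolves_to` is my own helper, and the hostname/IP pair in the comment is from this setup:

```python
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """True if the local resolver (including /etc/hosts) maps hostname to expected_ip."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:  # hostname does not resolve at all
        return False

# On the Logstash host, this should hold once /etc/hosts is updated:
# resolves_to("kafka01.zk.cn", "10.10.23.39")
# On the Filebeat host, the same name should instead resolve to its public IP.
```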