Sending Logback Logs to Kafka


Table of Contents

  • Sending Logback Logs to Kafka
  • 1. Sending Logs to Kafka with Logback
  • 1.1 Add the dependencies
  • 1.2 A simple `logback.xml` demo
  • 1.3 Compatibility
  • 1.4 Full example
  • 1.5 Start the application and collect logs
  • 1.6 Project Git repository


1. Sending Logs to Kafka with Logback

1.1 Add the dependencies

Skip this step if these dependencies are already present.

pom.xml

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

1.2 A simple logback.xml demo

<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
            <topic>logs</topic>
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
            
            <!-- Optional parameter to use a fixed partition -->
            <!-- <partition>0</partition> -->
            
            <!-- Optional parameter to include log timestamps into the kafka message -->
            <!-- <appendTimestamp>true</appendTimestamp> -->

            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=localhost:9092</producerConfig>

            <!-- this is the fallback appender if kafka is not available. -->
            <appender-ref ref="STDOUT" />
        </appender>

    <root level="info">
        <appender-ref ref="kafkaAppender" />
    </root>
</configuration>
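
With this configuration on the classpath as logback.xml, any SLF4J logger is routed through the kafkaAppender. A minimal sketch to try it out (the class name KafkaLogDemo is only an illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;

public class KafkaLogDemo {

    private static final Logger log = LoggerFactory.getLogger(KafkaLogDemo.class);

    public static void main(String[] args) {
        // Each statement is formatted by the PatternLayout encoder and
        // published asynchronously to the "logs" topic by the kafkaAppender.
        log.info("application started");
        log.warn("this warning also ends up in Kafka");

        // In a short-lived demo, stop the logger context so the asynchronous
        // producer flushes buffered messages before the JVM exits.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();
    }
}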

1.3 Compatibility

logback-kafka-appender depends on org.apache.kafka:kafka-clients:1.0.0:jar. It can append logs to Kafka brokers running version 0.9.0.0 or later.

The kafka-clients dependency is not shaded, so it can be upgraded to a newer, API-compatible version through a dependency override.
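
A sketch of such an override in pom.xml (the kafka-clients version shown here is only illustrative; use whichever API-compatible release you need):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.1</version>
    <scope>runtime</scope>
</dependency>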

1.4 Full example

<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.err</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This example configuration is probably the most unreliable under
    failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>boring-logs</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

        <!-- there is no fallback <appender-ref>. If this appender cannot deliver, it will drop its messages. -->
    </appender>

    <!-- This example configuration is more restrictive and will try to ensure that every message
     is eventually delivered in an ordered fashion (as long the logging application stays alive) -->
    <appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>

        <topic>important-logs</topic>
        <!-- ensure that every message sent by the executing host is partitioned to the same partition strategy -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
        <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
            <!-- wait indefinitely until the kafka producer was able to send the message -->
            <timeout>0</timeout>
        </deliveryStrategy>

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
        <producerConfig>buffer.memory=8388608</producerConfig>

        <!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
        <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy  -->
        <producerConfig>compression.type=gzip</producerConfig>

        <!-- Log every log message that could not be sent to kafka to STDERR -->
        <appender-ref ref="STDERR"/>
    </appender>

    <root level="info">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" />
        <appender-ref ref="very-restrictive-kafka-appender" />
    </root>
</configuration>

1.5 Start the application and collect logs

  • Create the topic that will receive the logs (see the sketch below this list)
  • Start the application; its log output is then sent to that Kafka topic
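
The topic can be created with the Kafka command-line tools, or programmatically with the AdminClient that ships with kafka-clients. A sketch assuming the logs topic and the localhost:9092 broker from the simple demo above (the class name and the partition/replication settings are placeholders):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateLogTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Create the "logs" topic used by the kafkaAppender: 1 partition, replication factor 1.
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singletonList(new NewTopic("logs", 1, (short) 1)))
                 .all()
                 .get();
        }
    }
}

Once the topic exists, simply starting the application (for example the KafkaLogDemo class above) is enough: every log event at or above the configured root level is delivered to the topic.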

1.6 Project Git repository

https://github.com/danielwegener/logback-kafka-appender