Requirement: read logs from Kafka and push them to the frontend in real time, to build a live log-detail viewer.

Original solution: the first idea was to have Celery read data from Kafka asynchronously and write it to a file, while the frontend used a timer to hit the server once per second and fetch that file's contents.

Problems with that approach:

  • The log volume is huge and refreshes constantly; writing it all to a file drives the server's memory and CPU usage way up.
  • The frontend timer hits the backend once per second, generating too many requests; the server sometimes can't respond in time, and resources are wasted.

Solution: use channels + celery + websocket + kafka to solve both problems.

Introduction to channels

  channels ships as a Django plugin. It handles not only HTTP requests but also long-lived connections such as WebSocket and MQTT. On top of that, channels keeps native Django's synchronous, easy-to-use style while adding an asynchronous processing mode, and it integrates Django's built-in authentication system and sessions, which makes it very extensible. Official docs: https://channels.readthedocs.io/en/latest/index.html

Installation

Django==2.2.2
channels==2.4.0
channels-redis==3.0.1 
dwebsocket==0.5.9
asgiref==3.3.4

Usage

1. Configure settings.py

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'celery',
    'dwebsocket',
    'channels'
]
# Point ASGI at the routing application, and back the channel layer with Redis:
ASGI_APPLICATION = 'demo.routing.application'
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [('192.168.1.19', 6779)],
        },
    },
}
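The channel layer is how the Celery worker will later push messages into the consumer, so it is worth checking that it can actually reach Redis before going further. A minimal smoke test from python manage.py shell (the channel name 'test_channel' is just a throwaway for this check):

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

layer = get_channel_layer()
# Send an event onto an arbitrary channel, then read it back.
async_to_sync(layer.send)('test_channel', {'type': 'hello', 'message': 'ping'})
print(async_to_sync(layer.receive)('test_channel'))  # prints the dict sent above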

2. Routing configuration

In the same directory as the project's settings.py, create a new routing.py:

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import workorder.routing

# HTTP falls through to Django's normal view handling; only websocket
# traffic is routed here, wrapped in Django's auth/session middleware.
application = ProtocolTypeRouter({
    'websocket': AuthMiddlewareStack(
        URLRouter(
            workorder.routing.websocket_urlpatterns
        )
    ),
})

Finally, wire the route to its consumer inside the app; here that is workorder/routing.py:

from django.urls import path
from workorder.consumers import KafkaLogConsumer


websocket_urlpatterns = [
    path('kafka_log/', KafkaLogConsumer),
]
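Registering the consumer class directly, as above, is correct for the channels==2.4.0 pinned earlier. If you later upgrade to Channels 3+, the same route must wrap the consumer with .as_asgi():

websocket_urlpatterns = [
    path('kafka_log/', KafkaLogConsumer.as_asgi()),  # Channels 3+ syntax
]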

The relevant project layout at this point:

demo/
    settings.py
    routing.py      # project-level ProtocolTypeRouter
    celery.py       # Celery app (imported by tasks.py below)
workorder/
    routing.py      # websocket_urlpatterns
    consumers.py    # KafkaLogConsumer
    tasks.py        # kafka_log_task

3. Writing the websocket message handlers (consumers)

To set the stage: consumers are the basic unit of Channels code. When a new socket comes in, Channels finds the right consumer via the routing table. Each method in the code below can be seen as a consumer of a different event: connect handles and accepts a fresh connection, while disconnect handles the websocket being closed.

import json
import urllib.parse

from channels.generic.websocket import WebsocketConsumer

from workorder.tasks import kafka_log_task


class KafkaLogConsumer(WebsocketConsumer):
    def connect(self):  # runs when a new connection comes in
        # The frontend connects with a query string such as
        # "?project_type=APP&keyword=xxx"; parse_qs also URL-decodes the values.
        params = urllib.parse.parse_qs(self.scope['query_string'].decode())
        self.topic = params.get('project_type', [''])[0]
        self.keyword = params.get('keyword', [''])[0]

        # Accept the connection first so pushed messages can't arrive before
        # the handshake completes, then start the Celery task that tails
        # Kafka and pushes log entries back to this channel.
        self.accept()
        self.result = kafka_log_task.delay(self.topic, self.channel_name, keyword=self.keyword)

    def disconnect(self, close_code):  # runs when the channel closes
        # Terminate the still-running Celery task.
        self.result.revoke(terminate=True)

    def send_message(self, event):  # handles "send.message" events from the channel layer
        # Push the log entry straight to the browser.
        self.send(text_data=json.dumps({
            "message": event["message"]
        }))

    def receive(self, text_data):  # runs when the client sends a message
        # Any client message restarts the task without a keyword filter.
        self.result = kafka_log_task.delay(self.topic, self.channel_name)

The Celery task lives in tasks.py under the app directory. Note how it pushes each entry to the consumer via channel_layer.send with an event of type "send.message": Channels replaces the dot with an underscore and dispatches the event to the consumer's send_message method.

from __future__ import absolute_import, unicode_literals
import json
import time
import urllib.parse

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from kafka import KafkaConsumer, TopicPartition

from demo.celery import app


@app.task(name='kafka_log_task')
def kafka_log_task(topic, channel_name, keyword=None):
    channel_layer = get_channel_layer()

    # Map the human-readable project type sent by the frontend to a Kafka topic.
    if topic == '网页':        # web
        topic = 'sdk'
    elif topic == '小程序':    # mini program
        topic = 'wechat-logs'
    elif topic == 'APP':
        topic = 'app'

    consumer = KafkaConsumer(group_id='111',
                             bootstrap_servers=['192.168.1.111:9092', '192.168.1.112:9092', '192.168.1.113:9092'])

    # Assign the topic's three partitions to this consumer explicitly.
    consumer.assign(
        [TopicPartition(topic, partition=0), TopicPartition(topic, partition=1), TopicPartition(topic, partition=2)])

    # Only tail new messages: seek to the end of every assigned partition.
    consumer.seek_to_end()

    try:
        for msg in consumer:
            # The Kafka value is JSON whose "message" field is itself a JSON
            # string containing \x-style escapes; turn those into %-escapes
            # so they can be URL-decoded below.
            payload = json.loads(msg.value.decode())
            data = json.loads(str(payload.get('message')).replace('\\x', '%'))

            # URL-decode every field value.
            for k, v in dict(data).items():
                value = str(v).replace(' ', '').replace('\'', '"')
                data[k] = urllib.parse.unquote(value)

            if keyword:
                # Keyword filter: only push entries where some field contains
                # the keyword (the consumer already URL-decoded it).
                if any(keyword in v for v in data.values()):
                    async_to_sync(channel_layer.send)(
                        channel_name,
                        {
                            "type": "send.message",  # dispatched to KafkaLogConsumer.send_message
                            "message": data
                        }
                    )
                    time.sleep(1)  # throttle pushes to roughly one per second
            else:
                # No keyword: push every entry.
                async_to_sync(channel_layer.send)(
                    channel_name,
                    {
                        "type": "send.message",
                        "message": data
                    }
                )
                time.sleep(1)
    except Exception:
        # The task is revoked with terminate=True when the socket closes;
        # swallow errors here rather than crash the worker.
        pass
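tasks.py imports app from demo.celery, a file this walkthrough doesn't show. A minimal sketch of what it would contain, assuming Redis is used as the Celery broker (the broker URL below is a placeholder, not taken from the original setup):

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'demo.settings')

# Placeholder broker URL: point this at your own Redis/RabbitMQ instance.
app = Celery('demo', broker='redis://192.168.1.19:6379/0')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()  # picks up workorder/tasks.py automatically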

4. Opening the websocket from frontend JS

close_websocket() {
  if (window.s) {
    window.s.close();  // close the websocket
    console.log('websocket closed');
  }
},
connect_websocket() {
  let this_ = this;
  if (window.s) {
    window.s.close();
  }
  // Open the socket, passing the project type and keyword in the query string.
  this_.socket = new WebSocket("ws://139.196.79.152:8600/kafka_log/?project_type=" + this_.project_type + '&keyword=' + this.keyword);
  this_.socket.onopen = function () {
    console.log('WebSocket open');  // connected successfully
  };
  if (this.keyword.length >= 1) {
    this.message = [];  // a new search clears the displayed log list
  }
  this_.socket.onmessage = function (e) {
    // Cap the list at 30 entries so the page doesn't grow without bound.
    if (this_.message.length >= 30) {
      this_.message = [];
    }

    let data = JSON.parse(e.data).message;

    // "params" arrives as a JSON string; parse it when possible.
    if (data.params.length >= 1) {
      try {
        data.params = JSON.parse(data.params);
      } catch (exception) {
        console.log(data.params);
      }
    }

    console.log(data);  // what the server pushed
    this_.message.unshift(data);  // newest entry first
  };
  // If the socket opened before onopen was assigned, fire the handler manually.
  if (this_.socket.readyState == WebSocket.OPEN) this_.socket.onopen();
  window.s = this_.socket;
}
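Before wiring the UI up, it can help to hit the endpoint from a script. A quick sanity check, assuming the third-party websockets package is installed (pip install websockets) and using the same host as above:

import asyncio
import websockets

async def main():
    # Same endpoint the frontend uses; an empty keyword means "push everything".
    uri = "ws://139.196.79.152:8600/kafka_log/?project_type=APP&keyword="
    async with websockets.connect(uri) as ws:
        for _ in range(5):  # print the first five pushed log entries
            print(await ws.recv())

asyncio.run(main())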

Problem solved! Opening the log-detail view now calls the websocket-connect method; as soon as the backend has data, it pushes it straight to the frontend, which processes and displays it. Real-time log streaming works, and with it go the memory/CPU spikes from file reads and writes, the resources burned by frontend polling, and the slow interface responses.