ELK + Filebeat Cluster Deployment

ELK Overview

  1. Elasticsearch: a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never possible before. It is used for full-text search, structured search, analytics, and combinations of the three.

2. Logstash: a powerful data-processing tool that handles data transport, format processing, and formatted output, with a rich plugin ecosystem. It is commonly used for log processing.

3. Kibana: a free and open tool that provides a friendly web interface for analyzing the logs shipped through Logstash and stored in Elasticsearch, helping you aggregate, analyze, and search important log data.

Official download page: https://www.elastic.co/cn/downloads/

Note: adjust the IP addresses in the configuration files to match your environment.

Environment: three Linux servers running the same OS version.

elk-node1	192.168.243.162	data + master node (elasticsearch, logstash, kibana, filebeat)

elk-node2	192.168.243.163	data node (elasticsearch, filebeat)

elk-node3	192.168.243.164	data node (elasticsearch, filebeat)

Edit the hosts file; the entries are identical on every node.

vim /etc/hosts
192.168.243.162         elk-node1
192.168.243.163         elk-node2
192.168.243.164         elk-node3
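Once the entries are in place, a quick sanity check on each node confirms that all three names resolve (a sketch; `getent` goes through the same NSS lookup path as most networked tools):

```shell
# Each node name should resolve via /etc/hosts; print a marker if one is missing
for h in elk-node1 elk-node2 elk-node3; do
    getent hosts "$h" || echo "MISSING: $h"
done
```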

Install JDK 11 (binary install)

Skip this step if Java is already installed.

cd /home/tools &&
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Extract to the target directory

mkdir -p /usr/local/jdk
tar -xzvf openjdk-11.0.1_linux-x64_bin.tar.gz -C /usr/local/jdk

Configure the environment variables (append to /etc/profile)

JAVA_HOME=/usr/local/jdk/jdk-11.0.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH

Apply the environment variables

source /etc/profile

Alternatively, install from the yum repository:

yum -y install java
java -version

Modify kernel parameters: raise the maximum virtual memory map count by appending the following line.

vim  /etc/sysctl.conf
vm.max_map_count=262144   

Append the following to the end of the file:

vim /etc/security/limits.conf

		* soft nofile  1000000
		* hard nofile 1000000
		* soft nproc  1000000
		* hard nproc 1000000
		* soft memlock unlimited
		* hard memlock unlimited
sysctl -p		# apply the sysctl.conf change
cd /etc/security/limits.d
vi 20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited
Change the * to the user that runs Elasticsearch, for example:
esyonghu   soft    nproc     4096
root       soft    nproc     unlimited
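The limits apply to new login sessions; after re-logging in they can be verified along with the kernel parameter (a quick check):

```shell
# Verify the kernel parameter took effect (expect 262144 after sysctl -p)
sysctl vm.max_map_count
# Verify per-session limits for the current shell
ulimit -n    # open files
ulimit -u    # max user processes
ulimit -l    # locked memory
```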

Install dependency packages and add the Elastic yum repository

yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
vim /etc/yum.repos.d/elastic.repo		
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1	
autorefresh=1
type=rpm-md

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum repolist

#Deploy the Elasticsearch cluster (run on all nodes)

yum -y install elasticsearch
grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: elk-node1(对应主机名)
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
transport.tcp.compress: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300 ##只在其他节点上配置
discovery.seed_hosts: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
cluster.initial_master_nodes: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
discovery.zen.minimum_master_nodes: 2 #防止集群“脑裂”,需要配置集群最少主节点数目,通常为 (主节点数目/2) + 1
node.master: true
node.data: true
xpack.security.enabled: true
http.cors.enabled: true	##
http.cors.allow-origin: "*"	##跨域访问,支持head插件可以访问es
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
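The file above is for elk-node1; on the other two nodes only node.name changes (elk-node2, elk-node3). Since the hostnames in this guide match the node names, it can be derived automatically (a sketch, assuming the hostnames are set as in the table above):

```shell
# Overwrite node.name with this machine's short hostname
sed -i "s/^node\.name: .*/node.name: $(hostname -s)/" /etc/elasticsearch/elasticsearch.yml
grep "^node\.name" /etc/elasticsearch/elasticsearch.yml
```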

In production Elasticsearch is very memory-hungry; raise the initial JVM heap from the 1 GB default to suit your hardware.

vim /etc/elasticsearch/jvm.options
# change these two lines
	-Xms4g	# minimum heap size: 4 GB
	-Xmx4g	# maximum heap size: 4 GB

Configure TLS and authentication (optional; skip if you do not need transport security)

Generate the CA and certificates on the Elasticsearch master node:
cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil ca		## press Enter to accept the defaults
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
ll

-rw-------  1 root root   3443 Jun 28 16:46 elastic-certificates.p12
-rw-------  1 root root   2527 Jun 28 16:43 elastic-stack-ca.p12
##### give the generated files elasticsearch group ownership
chgrp elasticsearch /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
##### set mode 640 on both files
chmod 640 /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
##### move both files into the Elasticsearch configuration directory
mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/

Copy the TLS files into the configuration directory of every other node:

scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.164:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.164:/etc/elasticsearch/

Start the service and verify the cluster. Start the master node first, then the other nodes.

systemctl start elasticsearch
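After starting, it is worth enabling the service at boot and watching the log while the nodes join (the log file is named after the cluster.name configured above):

```shell
systemctl enable elasticsearch                 # start at boot
systemctl status elasticsearch --no-pager      # confirm the service is running
tail -n 50 /var/log/elasticsearch/my-elk.log   # log file named after cluster.name
```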

Set the built-in user passwords (this guide uses 123456 throughout)

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Verify the cluster: open http://192.168.243.163:9200/_cluster/health?pretty in a browser.

It should return something like:

		{
		  "cluster_name" : "my-elk",
		  "status" : "green",
		  "timed_out" : false,
		  "number_of_nodes" : 3,		## total nodes
		  "number_of_data_nodes" : 3,	## data nodes
		  "active_primary_shards" : 4,
		  "active_shards" : 8,
		  "relocating_shards" : 0,
		  "initializing_shards" : 0,
		  "unassigned_shards" : 0,
		  "delayed_unassigned_shards" : 0,
		  "number_of_pending_tasks" : 0,
		  "number_of_in_flight_fetch" : 0,
		  "task_max_waiting_in_queue_millis" : 0,
		  "active_shards_percent_as_number" : 100.0
		}
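With xpack security enabled, the same check from the command line needs credentials; a curl sketch using the elastic password set above:

```shell
# Query cluster health with basic auth
curl -s -u elastic:123456 "http://192.168.243.162:9200/_cluster/health?pretty"
# Pull out just the status field
curl -s -u elastic:123456 "http://192.168.243.162:9200/_cluster/health" \
  | grep -o '"status":"[a-z]*"'
```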

#Deploy Kibana

Install from the yum repository	## on any node

yum -y install kibana

Edit the Kibana configuration file

vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-node2"
elasticsearch.hosts: ["http://192.168.243.162:9200","http://192.168.243.163:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: "en"

Start the service

systemctl start kibana

Browse to http://192.168.243.162:5601/
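From a shell, Kibana's status endpoint can be polled until the server is ready (a sketch; with security enabled the endpoint may require the same credentials, so they are passed here):

```shell
# Wait for Kibana to answer /api/status with HTTP 200, retrying for ~60s
for i in $(seq 1 30); do
    code=$(curl -s -o /dev/null -w '%{http_code}' \
        -u elastic:123456 http://192.168.243.162:5601/api/status)
    [ "$code" = "200" ] && echo "Kibana is up" && break
    sleep 2
done
```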

Install Logstash

Deploy on the master node.

yum -y install logstash		## install from the yum repository
## or install from the binary tarball:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.1.tar.gz
mkdir -p /home/elk
tar -zvxf logstash-7.4.1.tar.gz -C /home/elk
mkdir -p /data/logstash/{logs,data}

Edit the pipeline configuration file

vim /etc/logstash/conf.d/logstash_debug.conf
input {
	beats {
			port => 5044
	}
}

filter {
	grok {
			match => {
					"message" => "(?<temMsg>(?<=logBegin ).*?(?=logEnd))"
			}

			overwrite => ["temMsg"]
	}

 grok {
			match => {
					"temMsg" => "(?<reqId>(?<=reqId:).*?(?=,operatName))"
			}
			overwrite => ["reqId"]
	}
 grok {
			match => {
					"temMsg" => "(?<operatName>(?<=operatName:).*?(?=,operatUser))"
			}
			overwrite => ["operatName"]
 }
 grok {
			match => {
					"temMsg" => "(?<operatUser>(?<=operatUser:).*?(?=,userType))"
			}
			overwrite => ["operatUser"]
	}
 grok {
			match => {
					"temMsg" => "(?<userType>(?<=userType:).*?(?=,requestTime))"
			}
			overwrite => ["userType"]
	}
 grok {
			match => {
					"temMsg" => "(?<requestTime>(?<=requestTime:).*?(?=,method))"
			}
			overwrite => ["requestTime"]
	}
grok {
			match => {
					"temMsg" => "(?<method>(?<=method:).*?(?=,params))"
			}
			overwrite => ["method"]
	}
 grok {
			match => {
					"temMsg" => "(?<params>(?<=params:).*?(?=,operatIp))"
			}
			overwrite => ["params"]
	}
 grok {
			match => {
					"temMsg" => "(?<operatIp>(?<=operatIp:).*?(?=,executionTime))"
			}
			overwrite => ["operatIp"]
	}
 grok {
			match => {
					"temMsg" => "(?<executionTime>(?<=executionTime:).*?(?=,operatDesc))"
			}
			overwrite => ["executionTime"]
	}
 grok {
			match => {
					"temMsg" => "(?<operatDesc>(?<=operatDesc:).*?(?=result))"
			}
			overwrite => ["operatDesc"]
	}
	 grok {
			match => {
					"temMsg" => "(?<result>(?<=result:).*?(?=,siteCode))"
			}
			overwrite => ["result"]
	}
 grok {
			match => {
					"temMsg" => "(?<siteCode>(?<=siteCode:).*?(?=,module))"
			}
			overwrite => ["siteCode"]
	}
 grok {
			match => {
					"temMsg" => "(?<module>(?<=module:).*?(?= ))"
			}
			overwrite => ["module"]
	}
grok {
			match => [
							"message", "%{NOTSPACE:temMsg}"
											]
			}
	json {
			source => "temMsg"
#       field_split => ","
#       value_split => ":"
#       remove_field => [ "@timestamp","message","path","@version","path","host" ]
			}
			urldecode {
							all_fields => true
							}

		mutate {
			# rename overwrites the raw message with the parsed temMsg
			rename => {"temMsg" => "message"}
			}
}
output {
	elasticsearch {
			hosts => ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"]	
			user => "elastic"
			password => "123456"
			index => "logstash-%{+YYYY.MM.dd}"
	}
}
vim /etc/logstash/logstash.yml
http.host: "elk-node1"
path.data: /data/logstash/data
path.logs: /data/logstash/logs
xpack.monitoring.enabled: true	# show Logstash in Kibana's monitoring UI
xpack.monitoring.elasticsearch.hosts: ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"]
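Before starting the service, the pipeline syntax can be validated; Logstash ships a flag that parses the configuration and exits without processing events:

```shell
# Validate the pipeline configuration, then exit
/usr/share/logstash/bin/logstash \
  -f /etc/logstash/conf.d/logstash_debug.conf \
  --config.test_and_exit
```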

Start the Logstash service

systemctl start logstash

Or start from the binary install:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_debug.conf

Deploy Filebeat

yum -y install filebeat
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /srv/docker/produce/*/*/cloud*.log
  include_lines: [".*logBegin.*",".*logEnd.*"]
  #  multiline.pattern: ^\[
  #  multiline.negate: true
  #  multiline.match: after
filebeat.config.modules: 
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  hosts: ["192.168.243.162:5601"]
output.logstash:
  hosts: ["192.168.243.162:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~ 	

Start Filebeat

systemctl start filebeat
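Filebeat ships its own self-checks, which are useful right after editing the config:

```shell
filebeat test config       # validate filebeat.yml syntax
filebeat test output       # try to connect to the configured Logstash output
systemctl enable filebeat  # start at boot
```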