I. Introduction

1. Composition

ELK is made up of three components: Elasticsearch, Logstash and Kibana.

Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that can collect and parse your logs and store them for later use.

Kibana is an open-source, free tool that provides a friendly web interface for the log analysis delivered by Logstash and Elasticsearch, helping you aggregate, analyze and search important log data.

2. Components

Logstash: the logstash server side collects logs;

Elasticsearch: stores all kinds of logs;

Kibana: a web interface for querying and visualizing logs;

Logstash Forwarder: the logstash client side sends logs to the logstash server over the lumberjack network protocol;

3. Workflow

Deploy logstash on every server whose logs need to be collected. Acting as a logstash agent (logstash shipper), it monitors, filters and collects the logs and sends the filtered content to Redis. The logstash server then gathers the logs from Redis and hands them to the full-text search service Elasticsearch, where custom searches can be run; Kibana combines these searches to present the data in web pages.
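
All of the examples later in this guide use stdin for simplicity. For an actual log file, the shipper side would use a file input instead; below is a minimal sketch, where both the path /var/log/messages and the Redis key are illustrative and should be adapted to your environment:

input {
    file {
        path => "/var/log/messages"      # log file to monitor (illustrative)
        start_position => "beginning"
    }
}
output {
    redis {
        host => "192.168.0.100"          # Redis on host A
        data_type => "list"
        key => "logstash:redis"          # list that the logstash server will consume
    }
}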

 

4. Service layout


Host A 192.168.0.100   Elasticsearch + Logstash-server + Kibana + Redis
Host B 192.168.0.101   Logstash-agent


 

II. Deploying the Services

 

On host B (192.168.0.101)

1. Deploy the Java environment

# Download the package, extract it, and set the environment variables


wget http://download.oracle.com/otn-pub/java/jdk/8u111-b14/jdk-8u111-linux-x64.tar.gz
tar -xf jdk-8u111-linux-x64.tar.gz -C /usr/local
mv /usr/local/jdk1.8.0_111 /usr/local/java    # the tarball extracts to jdk1.8.0_111
echo "export PATH=\$PATH:/usr/local/java/bin" > /etc/profile.d/java.sh
. /etc/profile
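
To confirm that the JDK is now on the PATH, check the version (the exact build string will vary with the package you downloaded):

# java -version
java version "1.8.0_111"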


2. Deploy the Logstash agent


wget https://download.elastic.co/logstash/logstash/logstash-2.2.0.tar.gz
tar -xf logstash-2.2.0.tar.gz -C /usr/local
echo "export PATH=\$PATH:/usr/local/logstash-2.2.0/bin" > /etc/profile.d/logstash.sh
. /etc/profile
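
To confirm that logstash is reachable on the PATH, a quick check (on Logstash 2.x the --version flag prints the release number):

# which logstash
/usr/local/logstash-2.2.0/bin/logstash
# logstash --version
logstash 2.2.0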


3. Common logstash parameters



-e : pass the logstash configuration directly on the command line; useful for quick tests;
-f : read the logstash configuration from a file; suitable for production;
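
Logstash 2.x also provides a --configtest (-t) switch, which is useful for validating a -f configuration file before starting the agent for real, for example:

# logstash agent -f logstash_agent.conf --configtest
Configuration OK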



4. Start logstash

4.1 Pass the logstash configuration with the -e parameter for a quick test; events are printed directly to the screen.



# logstash -e "input {stdin{}} output {stdout{}}"
my name is zhengyansheng.
Logstash startup completed
2015-10-08T13:55:50.660Z 0.0.0.0 my name is zhengyansheng.

Type the line manually and press Enter; after roughly ten seconds logstash finishes starting up and returns the result, which is simply the typed input echoed back unchanged.


4.2 Pass the logstash configuration with the -e parameter for a quick test; events are printed to the screen in JSON format.
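
A minimal sketch of such a test using the json codec on stdout (the typed line and the timestamp in the result are only illustrative):

# logstash -e "input {stdin{}} output {stdout{ codec => json }}"
my name is zhengyansheng.
{"message":"my name is zhengyansheng.","@version":"1","@timestamp":"2015-10-08T13:57:01.000Z","host":"0.0.0.0"}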

5. Store the logstash output in the Redis database


Save the logstash output to the Redis database, as shown below.

The prerequisite is that a Redis database is running on 192.168.0.100, so the next step is to install Redis there.



# cat logstash_agent.conf
input { stdin { } }
output {
    stdout { codec => rubydebug }
    redis {
        host => '192.168.0.100'
        port => '6379'
        password => '12345678'    # must match the requirepass configured in Redis
        data_type => 'list'
        key => 'logstash:redis'
    }
}
 
If you see the message "Failed to send event to Redis", the connection to Redis failed or Redis is not installed; please check.
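
One way to rule out a network or firewall problem from host B, without installing a Redis client there, is a plain bash TCP probe against host A (a sketch; the 2 is just a short timeout in seconds):

# timeout 2 bash -c 'echo > /dev/tcp/192.168.0.100/6379' && echo "redis port reachable" || echo "cannot reach 192.168.0.100:6379"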



6. Check the logstash process


# logstash agent -f logstash_agent.conf --verbose
#  ps -ef|grep logstash
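
The agent command above runs in the foreground; to keep it running after the terminal is closed, one option is to start it with nohup (the log file path is only an example):

# nohup logstash agent -f logstash_agent.conf > /var/log/logstash_agent.log 2>&1 &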



On host A (192.168.0.100)


Deploy the Java environment (same as above)

Deploy Redis



wget http://download.redis.io/releases/redis-2.8.19.tar.gz
yum install tcl -y
tar zxf redis-2.8.19.tar.gz
cd redis-2.8.19
make MALLOC=libc
make test    # this step takes a while...
make install
 
cd utils/
./install_server.sh     # accept the default for every prompt
Welcome to the redis service installer
This script will help you easily set up a running redis server
 
Please select the redis port for this instance: [6379] 
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf] 
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log] 
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379] 
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server] 
Selected config:
Port           : 6379
Config file    : /etc/redis/6379.conf
Log file       : /var/log/redis_6379.log
Data dir       : /var/lib/redis/6379
Executable     : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
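
Note that the logstash_agent.conf shown earlier sends the password '12345678', while install_server.sh writes a configuration without one. If you keep that password line, set a matching requirepass in the Redis configuration and restart Redis, for example:

# vim /etc/redis/6379.conf
requirepass 12345678
# /etc/init.d/redis_6379 stop && /etc/init.d/redis_6379 start

After that, redis-cli needs -a 12345678 to authenticate.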



Check the Redis listening ports

 


# netstat -tnlp |grep redis
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      3843/redis-server * 
tcp        0      0 127.0.0.1:21365             0.0.0.0:*                   LISTEN      2290/src/redis-serv 
tcp        0      0 :::6379                     :::*                        LISTEN      3843/redi


Test whether Redis is working properly

 



# cd redis-2.8.19/src/
# ./redis-cli -h 192.168.0.100 -p 6379    # connect to redis
192.168.0.100:6379> ping
PONG
192.168.0.100:6379> set name zhengyansheng
OK
192.168.0.100:6379> get name
"zhengyansheng"
192.168.0.100:6379> quit
   
Start Redis (install_server.sh has already registered and started it; to start it manually, use the paths from the installation above):
/etc/init.d/redis_6379 start    # equivalent to: /usr/local/bin/redis-server /etc/redis/6379.conf


 

Start logstash with Redis as its destination (start the logstash agent on host B)




# cat logstash_agent.conf
input { stdin { } }
output {
    stdout { codec => rubydebug }
    redis {
        host => '192.168.0.100'
        # add the port/password lines from the earlier example here if requirepass is enabled on Redis
        data_type => 'list'
        key => 'logstash:redis'
    }
}
# logstash agent -f logstash_agent.conf --verbose
Pipeline started {:level=>:info}
Logstash startup completed
dajihao linux
{
       "message" => "dajihao linux",
      "@version" => "1",
    "@timestamp" => "2015-10-08T14:42:07.550Z",
          "host" => "0.0.0.0"
}
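
Before the logstash server on host A starts consuming the list, you can check on host A that the events from the shipper actually landed in Redis (add -a 12345678 if requirepass is enabled; the count depends on how many lines were typed):

# redis-cli -h 192.168.0.100 llen logstash:redis
(integer) 1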




Install Elasticsearch on host A

1. Install Elasticsearch




# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-2.2.0.tar.gz
# tar zxf elasticsearch-2.2.0.tar.gz -C /usr/local/




2. Edit the elasticsearch configuration file elasticsearch.yml and make the following changes.



# vim /usr/local/elasticsearch-2.2.0/config/elasticsearch.yml
discovery.zen.ping.multicast.enabled: false         # disable multicast; if another machine on the LAN has port 9300 open, the service may fail to start
network.host: 192.168.0.100    # bind address; optional, but it is best to set it, otherwise the later Kibana integration may report HTTP connection errors (the visible symptom is that elasticsearch listens on :::9200 instead of 0.0.0.0:9200)
http.port: 9200

3. Start the elasticsearch service



nohup  /usr/local/elasticsearch-2.2.0/bin/elasticsearch >/usr/local/elasticsearch-2.2.0/nohub &


 

If elasticsearch cannot be started this way (a common cause is that elasticsearch will not run as root), create a regular user es and start it as that user:

 



groupadd elk
useradd es -g elk
chown -R es.elk /usr/local/elasticsearch-2.2.0
su - es
nohup  /usr/local/elasticsearch-2.2.0/bin/elasticsearch >/usr/local/elasticsearch-2.2.0/nohub &




4. Check the elasticsearch listening ports

# netstat -tnlp |grep java
tcp        0      0 :::9200                     :::*                        LISTEN      7407/java           
tcp        0      0 :::9300                     :::*                        LISTEN      7407/java
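
With both ports up, a couple of curl calls verify that Elasticsearch is answering; in the health output, look for a status of "green" (or "yellow" on a single node, since replica shards cannot be allocated):

# curl http://192.168.0.100:9200/
# curl http://192.168.0.100:9200/_cluster/health?pretty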

 

Install logstash-server on host A (same steps as on host B); note that the configuration file is different.



# cat logstash_server.conf
input {
    redis {
        host => '192.168.0.100'
        port => '6379'
        password => '12345678'
        data_type => 'list'
        key => 'logstash:redis'
        type => "redis-input"
    }
}
output {
    elasticsearch {
        hosts => "192.168.0.100"
        index => "logstash-%{+YYYY.MM.dd}"
    }
}

Start logstash-server (it pulls data from Redis and pushes it to Elasticsearch):

logstash agent -f logstash_server.conf --verbose
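
Once events start flowing from Redis, a dated logstash index should appear in Elasticsearch; the cat API lists it (the index name carries the current date, e.g. logstash-2015.10.08):

# curl 'http://192.168.0.100:9200/_cat/indices?v'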



 

Install an Elasticsearch plugin


# The Elasticsearch-kopf plugin can be used to browse the data in Elasticsearch. To install elasticsearch-kopf, simply run the following command from the directory where Elasticsearch is installed:
 
  
# cd /usr/local/elasticsearch-2.2.0/bin/
# ./plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
Downloading .............................................................................................
Installed lmenezes/elasticsearch-kopf into /usr/local/elasticsearch-2.2.0/plugins/kopf
 
The plugin installation may report a failure, most likely because of network problems or similar:
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
Failed to install lmenezes/elasticsearch-kopf, reason: failed to download out of all possible locations..., use --verbose to get detailed information
 
The workaround is to download the plugin manually instead of using the plugin install command:
cd /usr/local/elasticsearch-2.2.0/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
The steps above are fully equivalent to the plugin install command.




Open the kopf page in a browser (http://192.168.0.100:9200/_plugin/kopf) to view the data stored in Elasticsearch.


Install Kibana on host A

1. Install Kibana

# wget https://download.elastic.co/kibana/kibana/kibana-4.4.0-linux-x64.tar.gz
# tar zxf kibana-4.4.0-linux-x64.tar.gz -C /usr/local



2. Edit the Kibana configuration file kibana.yml




# vim /usr/local/kibana-4.4.0-linux-x64/config/kibana.yml
elasticsearch.url: "http://192.168.0.100:9200"    # Kibana 4.2+ uses the dotted key elasticsearch.url
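
The same kibana.yml also controls the address and port Kibana itself listens on; these are the stock Kibana 4.x settings, shown here only for reference and normally fine as they are:

server.port: 5601
server.host: "0.0.0.0"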


3. Start Kibana




nohup  /usr/local/kibana-4.4.0-linux-x64/bin/kibana > /usr/local/kibana-4.4.0-linux-x64/nohub.out &


 

Output like the following indicates that Kibana started successfully.

{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"No existing kibana index found","time":"2015-10-08T00:39:21.617Z","v":0}
{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2015-10-08T00:39:21.637Z","v":0}
Kibana listens on port 5601 by default.

   


4. Access Kibana in a browser (http://192.168.0.100:5601)


4.1 Use the default logstash-* index pattern, keep it time-based, and click "Create".

 

[screenshot: Kibana index pattern creation page]


4.2 The following page indicates that the index pattern has been created.

[screenshot: index pattern created]

4.3 Click "Discover" to search and browse the data in Elasticsearch.

 

[screenshot: Discover page]


 

>>> The End <<<


  



1. ELK default ports
elasticsearch: 9200, 9300
logstash     : 9301
kibana       : 5601
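
A quick way to confirm that all of these ports (plus Redis on 6379) are actually listening on host A once every service is up (a sketch using netstat, as in the earlier checks):

# netstat -tnlp | egrep '9200|9300|9301|5601|6379'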
 
  
2. Error summary
(1) Java version too low
[2015-10-07 18:39:18.071]  WARN -- Concurrent: [DEPRECATED] Java 7 is deprecated, please use Java 8.

(2) Kibana reports that the Elasticsearch version is too low...
This version of Kibana requires Elasticsearch 2.0.0 or higher on all nodes. I found the following incompatible nodes in your cluster: 
Elasticsearch v1.7.2 @ inet[/192.168.1.104:9200] (127.0.0.1)

Solution: upgrade Java to version 8 for (1), and upgrade Elasticsearch to 2.0.0 or higher (this guide uses 2.2.0) for (2).

  



Reposted from: https://blog.51cto.com/imork/1882959