Table of Contents

  • I. Setting up Elasticsearch
  • 1. Setting up standalone Elasticsearch
  • 1. Download
  • 1. ELK official download site
  • 2. Upload to the Linux server
  • 3. Create the es user and group
  • 4. Modify the configuration
  • 2. Install the head plugin to monitor Elasticsearch
  • 1. Install git
  • 2. Install Node.js
  • 3. Install head
  • 2. Setting up an Elasticsearch cluster
  • 1. Modify config/elasticsearch.yml in es1
  • 2. Copy the es1 directory to es2 and es3
  • 3. Modify es2's config/elasticsearch.yml
  • 4. Open the firewall ports for external access
  • 5. Start the nodes
  • 3. The IK Analysis Chinese tokenizer
  • 1. Download IK
  • 2. Install it into Elasticsearch
  • II. Setting up Logstash
  • 1. Install Logstash and sync MySQL data to Elasticsearch
  • 1. Download Logstash
  • 2. Download the MySQL driver
  • 3. Configure and start Logstash
  • 2. Syncing logs
  • III. Setting up Kibana
  • 1. Download Kibana
  • 2. Unpack
  • 3. Modify the configuration
  • 4. Start Kibana
  • 5. Configure an index pattern
  • IV. Connecting Spring Boot to Elasticsearch for search
  • 1. Create a Spring Boot project
  • 2. Configure the pom and yml files
  • 3. Configuration class
  • Summary



I. Setting up Elasticsearch

I recently built a new e-commerce project whose search runs on ELK: the data stored in MySQL is collected into Elasticsearch by Logstash, and the Spring Boot application queries it through RestHighLevelClient. In testing it is genuinely fast, so I am documenting my own setup here.


1. Setting up standalone Elasticsearch

1. Download

1. ELK official download site

www.elastic.co/downloads/

I installed the latest version at the time; the download is an xxx.tar.gz archive.

2. Upload to the Linux server

I uploaded it with Xftp.
Create a directory on Linux to hold the archive:

cd /usr/local
mkdir elasticsearch

Unpack the archive:

tar -zxvf elasticsearch-7.12.0-linux-x86_64.tar.gz

Rename the extracted directory (the archive unpacks to elasticsearch-7.12.0):

mv elasticsearch-7.12.0 es

Start it:

cd /usr/local/elasticsearch/es/
./bin/elasticsearch

This will fail with: java.lang.RuntimeException: can not run elasticsearch as root

That is because we are currently the root user, and by default Elasticsearch refuses to run as root.

3. Create the es user and group

Create an es group and an es user, and add the user to the es group:

groupadd es 
useradd es -g es

Change the owner (user and group) of the es directory and everything inside it to es:es:

chown -R es:es /usr/local/elasticsearch/es/

4. Modify the configuration

Edit jvm.options under es/config (size the heap to your own machine; my Linux box has 2 GB of RAM, and once you later start a three-node cluster the heaps add up, so keep them small when experimenting):

vim es/config/jvm.options
-Xms128m
-Xmx128m

Edit elasticsearch.yml:

vim es/config/elasticsearch.yml
# network.host is this server's IP (make sure it is the internal IP) or 0.0.0.0
cluster.name: elasticsearch
node.name: node-1
network.host: 172.21.0.13
http.port: 9200
transport.tcp.port: 9300 # transport (inter-node) port
discovery.seed_hosts: ["172.21.0.13:9300"]
cluster.initial_master_nodes: ["172.21.0.13:9300"]
# allow remote access from the head plugin
http.cors.enabled: true
http.cors.allow-origin: "*"
# least-recently-used (LRU) fielddata is evicted to make room for new data; this setting must be added
indices.breaker.total.use_real_memory: false
# the fielddata circuit breaker defaults to 60% of the heap as the upper bound for fielddata
indices.breaker.fielddata.limit: 60%
# the request breaker estimates the size of structures required to complete other parts of a request, e.g. an aggregation bucket; the default limit is 40% of the heap
indices.breaker.request.limit: 40%
# the total breaker combines the request and fielddata breakers to guarantee that together they never use more than 70% of the heap
indices.breaker.total.limit: 70%

Modify the Linux system configuration:

vim /etc/sysctl.conf

Press i to insert, then add:

vm.max_map_count=655360

Press Esc and type :wq to save and exit, then reload the kernel parameters:

sysctl -p
vim /etc/security/limits.conf
# add the following
* hard nofile 65536
* soft nofile 65536
* soft nproc 4096
* hard nproc 4096
# :wq to save and exit
# * means all users; nofile is the maximum number of file handles, i.e. the maximum number of files that can be opened
If this is still not enough, also edit /etc/security/limits.d/90-nproc.conf:
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 4096
:wq to save and exit (if startup still reports errors, see below)
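
After logging back in, you can sanity-check that the limits took effect (expected values shown as comments):

ulimit -n                  # should print 65536
sysctl vm.max_map_count    # should print vm.max_map_count = 655360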

vi bin/elasticsearch
Add at the beginning of the file:
export JAVA_HOME=/usr/local/elasticsearch/es/jdk      # the JDK bundled with ES
export PATH=$JAVA_HOME/bin:$PATH
:wq to save and exit

Switch to the es user and run the following from the Elasticsearch root directory.

If you see ERROR: [2] bootstrap checks failed
[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
then add the following to elasticsearch.yml:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

su es
cd /usr/local/elasticsearch/es/bin/
./elasticsearch -d   # start in the background
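
Once started, verify the node responds (using the network.host configured above; adjust the IP to your own):

curl http://172.21.0.13:9200

A JSON reply containing the cluster name and version number means the node is up.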

2. Install the head plugin to monitor Elasticsearch

Install this one; it is what you use to monitor Elasticsearch until the cluster and Kibana are fully set up. Without it you have no way of knowing whether your setup is correct.

1. Install git
yum -y install git
2. Install Node.js

Node.js download page: https://nodejs.org/en/download/

If your Linux box has wget, you can also:

(If yum -y install wget gives you a broken wget, remove it with yum -y remove wget and install it again.)

cd /usr/local/elasticsearch
wget https://nodejs.org/dist/v10.16.3/node-v10.16.3-linux-x64.tar.xz

The download is a node-v10.16.3-linux-x64.tar.xz archive; unpack it:

tar -xJf node-v10.16.3-linux-x64.tar.xz

Configure the environment variables:

su root
vi /etc/profile

At the end of the file, append these lines:

# (the directory you just unpacked to)
export NODEJS_HOME=/usr/local/elasticsearch/node-v10.16.3-linux-x64
export PATH=$NODEJS_HOME/bin:$PATH

Then apply it:

source /etc/profile

Check the installation; if it prints the Node version, you are done:

node -v

3. Install head
mkdir -p /usr/local/elasticsearch/plugins
cd /usr/local/elasticsearch/plugins

Installation method 1:

git clone https://github.com/mobz/elasticsearch-head.git

Installation method 2
(if wget cannot fetch it, download https://github.com/mobz/elasticsearch-head/archive/master.zip directly in a browser):

wget  https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
cd /usr/local/elasticsearch/plugins/elasticsearch-head-master/ 
vi Gruntfile.js

Add:

hostname: '*',

cd /usr/local/elasticsearch/plugins/elasticsearch-head-master

Run the following to start it:

npm install -g grunt-cli -d
npm install grunt --save
npm install
npm run start &
# start in the background
nohup npm run start &

In the head UI you can see the shard layout: total shards = 5 primary shards × the number of copies.

Create a document:

http://81.70.170.21:9200/es_head/_create/1/

The version field increases by 1 on every modification.

Update:

If id 1 does not exist, it is created.

Query:

"_score": 1 is the relevance score; it decides which results come first.
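
For reference, the same create / update / get / search operations can be issued directly against the REST API with curl (a sketch; the host and the es_head index follow the steps above, and the sample field name is made up):

# create: fails with 409 if id 1 already exists
curl -X PUT "http://81.70.170.21:9200/es_head/_create/1" -H 'Content-Type: application/json' -d '{"name":"test"}'
# index (create or overwrite): the version is bumped on every write
curl -X PUT "http://81.70.170.21:9200/es_head/_doc/1" -H 'Content-Type: application/json' -d '{"name":"updated"}'
# fetch the document back
curl "http://81.70.170.21:9200/es_head/_doc/1?pretty"
# search: every hit carries a _score
curl "http://81.70.170.21:9200/es_head/_search?q=name:updated&pretty"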

2. Setting up an Elasticsearch cluster

Elasticsearch is a search engine born for distribution, so let's set up a cluster environment.
Check the running Elasticsearch processes on Linux (to restart, just kill the pid):

ps -ef | grep elastic

Note: one entry always remains that you cannot kill; ignore it, it is the grep command itself, not a running node.

1. Modify config/elasticsearch.yml in es1

vi config/elasticsearch.yml

Change the following (the standalone es directory from above serves as es1 here):

path.logs: /usr/local/elasticsearch/es1/logs  # log location
path.data: /usr/local/elasticsearch/es1/data  # where ES stores its data
node.name: node-1  # node name
cluster.name: es  # cluster name, must be identical across the cluster
transport.tcp.port: 9300  # transport (inter-node) port
discovery.seed_hosts: ["172.21.0.13:9300", "172.21.0.13:9301", "172.21.0.13:9302"]
cluster.initial_master_nodes: ["172.21.0.13:9300", "172.21.0.13:9301", "172.21.0.13:9302"]
# CORS configuration (so third-party plugins such as head can query ES)
http.cors.enabled: true
http.cors.allow-origin: "*"
# to prevent split-brain this should be n/2+1
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 60s

2. Copy the es1 directory to es2 and es3

cd /usr/local/elasticsearch 
cp -Rf es1/ es2 
cp -Rf es1/ es3

Change the owner (user and group) of the directories and their contents to es:es:

chown -Rf es:es es2/ 
chown -Rf es:es es3/

3. Modify es2's config/elasticsearch.yml

cluster.name: es  # cluster name, must be identical across the cluster
node.name: node-2  # per-node name within the cluster
http.port: 9201  # HTTP port
transport.tcp.port: 9301  # transport (inter-node) port
path.logs: /usr/local/elasticsearch/es2/logs  # log location
path.data: /usr/local/elasticsearch/es2/data  # where ES stores its data

Then modify es3's config/elasticsearch.yml:

cluster.name: es  # cluster name, must be identical across the cluster
node.name: node-3  # per-node name within the cluster
http.port: 9202  # HTTP port
transport.tcp.port: 9302  # transport (inter-node) port
path.logs: /usr/local/elasticsearch/es3/logs  # log location
path.data: /usr/local/elasticsearch/es3/data  # where ES stores its data

4. Open the firewall ports for external access

systemctl start firewalld
firewall-cmd --zone=public --add-port=9201/tcp --permanent
firewall-cmd --zone=public --add-port=9202/tcp --permanent
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload
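
You can confirm the rules are active with:

firewall-cmd --zone=public --list-ports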

5. Start the nodes

./es1/bin/elasticsearch -d
./es2/bin/elasticsearch -d
./es3/bin/elasticsearch -d
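
To confirm all three nodes joined, query the cluster APIs (IP as configured above):

curl "http://172.21.0.13:9200/_cat/nodes?v"            # should list node-1, node-2 and node-3
curl "http://172.21.0.13:9200/_cluster/health?pretty"  # number_of_nodes should be 3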

If you set up head, simply visit

http://xx.xx.xx.xx:9100/  (your Linux box's public IP)

I had stopped nodes 1 and 2 earlier; after starting everything the cluster shows all three nodes, and when you create a new index its shards are distributed evenly across them.

3. The IK Analysis Chinese tokenizer

The IK Analysis plugin (https://github.com/medcl/elasticsearch-analysis-ik/) is an analyzer built specifically for Elasticsearch that handles Chinese gracefully, making Chinese search faster and cleaner.

1. Download IK

Download it and upload to Linux:

https://github.com/medcl/elasticsearch-analysis-ik/releases

or with a Linux command:

wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.12.0/elasticsearch-analysis-ik-7.12.0.zip

2. Install it into Elasticsearch

Create an ik directory and unzip the IK analyzer into it (repeat for every Elasticsearch node you have):

mkdir -p /usr/local/elasticsearch/es1/plugins/ik 
mkdir -p /usr/local/elasticsearch/es2/plugins/ik 
mkdir -p /usr/local/elasticsearch/es3/plugins/ik

Unzip the download into the directories you created:

unzip elasticsearch-analysis-ik-7.12.0.zip -d /usr/local/elasticsearch/es1/plugins/ik/ 
unzip elasticsearch-analysis-ik-7.12.0.zip -d /usr/local/elasticsearch/es2/plugins/ik/ 
unzip elasticsearch-analysis-ik-7.12.0.zip -d /usr/local/elasticsearch/es3/plugins/ik/

Then simply grant the entire elasticsearch directory to the es user:

chown -Rf es:es /usr/local/elasticsearch/

Restart Elasticsearch: find the processes with ps -ef | grep elastic, kill them, then start again:

su es
./es1/bin/elasticsearch -d
./es2/bin/elasticsearch -d
./es3/bin/elasticsearch -d
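
After the restart you can verify IK is loaded by running a test analysis (ik_max_word and ik_smart are the two analyzers the plugin provides; the sample sentence is arbitrary):

curl -X POST "http://172.21.0.13:9200/_analyze" -H 'Content-Type: application/json' -d '{"analyzer":"ik_max_word","text":"中华人民共和国"}'

If IK is installed correctly, the response contains proper Chinese words instead of single characters.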

II. Setting up Logstash

In production, Logstash usually extracts data, filters it, and outputs it to Elasticsearch, for example collecting log data into Elasticsearch and monitoring it through Kibana. I will start with syncing MySQL data to Elasticsearch.

1. Install Logstash and sync MySQL data to Elasticsearch

1. Download Logstash

Download from https://www.elastic.co/cn/downloads/logstash, then go to the directory that will hold the unpacked Logstash:

cd /usr/local/elasticsearch/

Upload logstash-7.12.0.tar.gz, then unpack it:

tar -zxvf logstash-7.12.0.tar.gz -C /usr/local/elasticsearch/

2. Download the MySQL driver

For Logstash to sync a MySQL database, the MySQL JDBC driver is required. At least for 7.12.0 I could not find a way to sync the database without installing the driver; if there is one, please let me know.

Driver download: https://dev.mysql.com/downloads/connector/j/

Upload it to Linux:

tar -zxvf mysql-connector-java-8.0.22.tar.gz

Put the jar from the unpacked archive into /logstash-7.12.0/logstash-core/lib/jars.

3. Configure and start Logstash

cd /usr/local/elasticsearch/logstash-7.12.0

Create mysql.conf in the config folder (any name will do):

vim mysql.conf

(Document types were removed in Elasticsearch 7+, so I did not add a type here.)

input {
  jdbc {
    jdbc_driver_library => "/usr/local/elasticsearch/logstash-7.12.0/logstash-core/lib/jars/mysql-connector-java-8.0.22.jar"
    # driver class (with Connector/J 8 the non-deprecated name is com.mysql.cj.jdbc.Driver)
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # for driver 8.0+: be sure to include serverTimezone=UTC
    jdbc_connection_string => "jdbc:mysql://localhost/ego?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"
    jdbc_user => "$$$$$"
    jdbc_password => "#####"
    schedule => "* * * * *"
    statement => "SELECT * FROM t_goods_category"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "1000"
  }
}
output {
    elasticsearch {
        # Elasticsearch IP address and port
        hosts => ["81.70.170.20:9202"]
        # index name, customizable
        index => "goods_category"
        # the source table must have an id column, used as the document id
        document_id => "%{id}"
    }
    stdout {
        # output as JSON
        codec => json_lines
    }
}

Start it:

./bin/logstash -f ./config/mysql.conf
# or in the background:
nohup bin/logstash -f ./config/mysql.conf >/dev/null 2>&1 &
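
Once the schedule fires (every minute with the cron expression above), check that documents reached the target index (host and port as in the output section of mysql.conf):

curl "http://81.70.170.20:9202/goods_category/_count?pretty"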

A multi-database configuration looks like this:

input {
    stdin {}
    jdbc {
        # MySQL connection string; lico is the database name
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/lico"
        # username and password
        jdbc_user => "root"
        jdbc_password => "123456"
        # driver jar
        jdbc_driver_library => "/usr/local/elasticsearch/logstash-7.12.0/logstash-core/lib/jars/mysql-connector-java-8.0.22.jar"
        # driver class name
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        # path + name of the SQL file to execute
        statement_filepath => "/usr/local/elasticsearch/logstash-7.12.0/config/goods.sql"
        # polling schedule; the fields are (left to right) minute, hour, day, month, day of week; all * means run every minute
        schedule => "* * * * *"
        # event type, used below to route to different indexes
        type => "admin_user"
    }
    jdbc {
        # MySQL connection string; lico is the database name
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/lico"
        # username and password
        jdbc_user => "root"
        jdbc_password => "123456"
        # driver jar
        jdbc_driver_library => "/usr/local/elasticsearch/logstash-7.12.0/logstash-core/lib/jars/mysql-connector-java-8.0.22.jar"
        # driver class name
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        # path + name of the SQL file to execute
        statement_filepath => "/usr/local/elasticsearch/logstash-7.12.0/config/goods.sql"
        # polling schedule; the fields are (left to right) minute, hour, day, month, day of week; all * means run every minute
        schedule => "* * * * *"
        # event type
        type => "user"
    }
}
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
}
output {
    if [type] == "admin_user" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "admin_user"
            document_id => "%{id}"
        }
    }
    if [type] == "user" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "user"
            document_id => "%{id}"
        }
    }
    stdout {
        codec => json_lines
    }
}

Create goods.sql under the config directory:

select
 goods_id as goodsId,
 goods_name as  goodsName,
 market_price as marketPrice,
 original_img as originalImg
from
 t_goods

2. Syncing logs

For more on this, see https://www.jianshu.com/p/bc7dae1ebab7.
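
As a minimal sketch of the log case (the log path and index name here are hypothetical; file is Logstash's standard input plugin for tailing files):

cat > config/log.conf <<'EOF'
input {
  file {
    path => "/var/log/app/*.log"     # logs to tail
    start_position => "beginning"    # also read existing content on first run
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app_log"
  }
}
EOF
./bin/logstash -f ./config/log.conf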

III. Setting up Kibana

1. Download Kibana

cd /usr/local/elasticsearch
yum install wget
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.12.0-linux-x86_64.tar.gz

Or download from https://www.elastic.co/cn/downloads/kibana and upload it to Linux.

2. Unpack

tar -zxvf  kibana-7.12.0-linux-x86_64.tar.gz

3. Modify the configuration

mv kibana-7.12.0-linux-x86_64 kibana
cd kibana
vi config/kibana.yml

Change the following settings:

# server port, default 5601
server.port: 5601
# IPs allowed to access Kibana
server.host: "0.0.0.0"
# the Elasticsearch nodes and ports
elasticsearch.hosts: ["http://81.70.170.21:9200", "http://81.70.170.21:9201", "http://81.70.170.21:9202"]
# credentials Kibana uses for Elasticsearch (only relevant if ES security is enabled)
elasticsearch.username: "wqy"
elasticsearch.password: "wqy"
# switch the UI to Chinese
i18n.locale: "zh-CN"

Open the firewall port:

systemctl start firewalld
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --reload

4. Start Kibana

cd /usr/local/elasticsearch/kibana/

nohup ./bin/kibana --allow-root >/dev/null 2>&1 &
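
Before opening the browser you can confirm Kibana is up via its status endpoint:

curl http://localhost:5601/api/status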

Then open it in your browser.

5. Configure an index pattern

(screenshots: creating the index pattern in the Kibana UI)

View the index contents:

(screenshots: browsing the indexed documents)

IV. Connecting Spring Boot to Elasticsearch for search

1. Create a Spring Boot project

(screenshots: generating the Spring Boot project)

2. Configure the pom and yml files

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.xxxx</groupId>
	<artifactId>es-demo</artifactId>
	<version>1.0-SNAPSHOT</version>


	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<maven.compiler.source>1.8</maven.compiler.source>
		<maven.compiler.target>1.8</maven.compiler.target>
	</properties>

	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.12</version>
			<scope>test</scope>
		</dependency>
		<!-- elasticsearch dependency (client 7.4.2 works against the 7.12.0 server, though matching versions is safer) -->
		<dependency>
			<groupId>org.elasticsearch</groupId>
			<artifactId>elasticsearch</artifactId>
			<version>7.4.2</version>
		</dependency>
		<!-- rest-client dependency -->
		<dependency>
			<groupId>org.elasticsearch.client</groupId>
			<artifactId>elasticsearch-rest-client</artifactId>
			<version>7.4.2</version>
		</dependency>
		<!-- rest-high-level-client dependency -->
		<dependency>
			<groupId>org.elasticsearch.client</groupId>
			<artifactId>elasticsearch-rest-high-level-client</artifactId>
			<version>7.4.2</version>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-beans</artifactId>
			<version>5.2.3.RELEASE</version>
		</dependency>
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context</artifactId>
			<version>5.2.3.RELEASE</version>
		</dependency>
	</dependencies>

</project>

Add to the yml file:

elasticsearch:
  address: 81.70.170.21:9202

For an ES cluster, list the nodes comma-separated, with no spaces around the commas since the value is split verbatim into the String[] used below:
elasticsearch:
  address: 192.168.10.100:9200,192.168.10.100:9201,192.168.10.100:9202

3. Configuration class

EsConfig.java

package com.xxxx.rpc.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Arrays;
import java.util.Objects;

/**
 * Elasticsearch configuration class
 *
 * @author zhoubin
 * @since 1.0.0
 */
@Configuration
public class EsConfig {
	// ES server addresses
	@Value("${elasticsearch.address}")
	private String[] address;
	// connection scheme for the ES servers
	private static final String SCHEME = "http";

	/**
	 * Build an HttpHost from a "host:port" address string.
	 * (A plain helper, not a @Bean: as a @Bean method Spring would try to
	 * resolve the String parameter as a bean and fail at startup.)
	 * @param s address in host:port form
	 * @return the HttpHost, or null if the string is malformed
	 */
	private HttpHost builderHttpHost(String s){
		String[] address = s.split(":");
		if (2 != address.length){
			return null;
		}
		String host = address[0];
		int port = Integer.parseInt(address[1]);
		return new HttpHost(host, port, SCHEME);
	}

	/**
	 * Create the RestClientBuilder
	 * @return builder configured with all hosts
	 */
	@Bean
	public RestClientBuilder restClientBuilder(){
		HttpHost[] hosts = Arrays.stream(address)
				.map(this::builderHttpHost)
				.filter(Objects::nonNull)
				.toArray(HttpHost[]::new);
		return RestClient.builder(hosts);
	}

	/**
	 * Create the RestHighLevelClient
	 * @param restClientBuilder the builder from above
	 * @return the high-level client
	 */
	@Bean
	public RestHighLevelClient restHighLevelClient(@Autowired RestClientBuilder restClientBuilder){
		return new RestHighLevelClient(restClientBuilder);
	}

}

Test class:
ElasticSearchTest

package com.xxxx.es;

import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;

import java.io.IOException;
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

/**
 * Elasticsearch test class
 *
 * @author zhoubin
 * @since 1.0.0
 */
public class ElasticSearchTest {

	// run this class inside a Spring test context (e.g. @SpringBootTest) so the client gets injected;
	// with plain JUnit the @Autowired field stays null
	@Autowired
	private RestHighLevelClient client;

	/**
	 * Add a document
	 */
	@Test
	public void testCreate() throws IOException {
		// prepare the data
		Map<String,Object> jsonMap = new HashMap<>();
		jsonMap.put("username","zhangsan");
		jsonMap.put("age",18);
		jsonMap.put("address","sh");
		// specify the index, id and source
		IndexRequest indexRequest = new IndexRequest("ik").id("5").source(jsonMap);
		// execute the request
		IndexResponse indexResponse = client.index(indexRequest, RequestOptions.DEFAULT);
		System.out.println(indexResponse.toString());
	}


	/**
	 * Get a document
	 */
	@Test
	public void testRetrieve() throws IOException {
		/**
		 * This fails if the index does not exist
		 */
		// specify the index and id
		GetRequest getRequest = new GetRequest("ik","5");
		GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
		System.out.println(getResponse.getSource());
	}

	/**
	 * Update a document
	 */
	@Test
	public void testUpdate() throws IOException {
		// prepare the data
		Map<String,Object> jsonMap = new HashMap<>();
		jsonMap.put("username","lisi");
		jsonMap.put("age",20);
		jsonMap.put("address","bj");
		// specify the index, id and data
		UpdateRequest updateRequest = new UpdateRequest("ik","5").doc(jsonMap);
		UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
		System.out.println(updateResponse.toString());
	}


	/**
	 * Delete a document
	 * @throws IOException
	 */
	@Test
	public void testDelete() throws IOException {
		DeleteRequest deleteRequest = new DeleteRequest("ik","5");
		DeleteResponse deleteResponse = client.delete(deleteRequest, RequestOptions.DEFAULT);
		System.out.println(deleteResponse.toString());
	}


	/**
	 * Bulk create/update/delete
	 */
	@Test
	public void testCUD() throws IOException {
		// initialize the BulkRequest
		BulkRequest request = new BulkRequest();
		// specify index, id and data
		// bulk add
		// request.add(new IndexRequest("ik").id("6").source(XContentType.JSON,"username","zhangsan","age",18));
		// request.add(new IndexRequest("ik").id("7").source(XContentType.JSON,"username","lisi","age",20));
		// bulk update
		// request.add(new UpdateRequest("ik","6").doc(XContentType.JSON,"username","wangwu"));
		// bulk delete
		request.add(new DeleteRequest("ik","7"));
		// execute the request
		BulkResponse bulkResponse = client.bulk(request, RequestOptions.DEFAULT);
		System.out.println(bulkResponse.toString());
	}

	/**
	 * Search: match all documents
	 */
	@Test
	public void testRetrieveAll() throws IOException {
		// specify the indexes to search
		SearchRequest searchRequest = new SearchRequest("ik","shop");
		// build the query object
		SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
		// add the query condition (without an explicit size, only 10 hits are returned)
		searchSourceBuilder.query(QueryBuilders.matchAllQuery());
		// attach the query and execute the request
		searchRequest.source(searchSourceBuilder);
		SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
		// total hit count
		System.out.println("total hits: "+searchResponse.getHits().getTotalHits().value);
		// result data (at most 10 hits unless a size is set)
		SearchHit[] hits = searchResponse.getHits().getHits();
		for (SearchHit hit : hits) {
			System.out.println("score: "+hit.getScore());
			System.out.println("index: "+hit.getIndex());
			System.out.println("id: "+hit.getId());
			Map<String, Object> source = hit.getSourceAsMap();
			source.entrySet().forEach(System.out::println);
			System.out.println("------------------------------------");
		}
	}



	/**
	 * Search: multi-field match query
	 */
	@Test
	public void testRetrieveMatch() throws IOException {
		// specify the indexes to search
		SearchRequest searchRequest = new SearchRequest("ik","shop");
		// build the query object
		SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
		// add the query condition (without an explicit size, only 10 hits are returned)
		String key = "中国";
		searchSourceBuilder.query(QueryBuilders.multiMatchQuery(key,"content","goodsName"));
		// attach the query and execute the request
		searchRequest.source(searchSourceBuilder);
		SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
		// total hit count
		System.out.println("total hits: "+searchResponse.getHits().getTotalHits().value);
		// result data (at most 10 hits unless a size is set)
		SearchHit[] hits = searchResponse.getHits().getHits();
		for (SearchHit hit : hits) {
			System.out.println("score: "+hit.getScore());
			System.out.println("index: "+hit.getIndex());
			System.out.println("id: "+hit.getId());
			Map<String, Object> source = hit.getSourceAsMap();
			source.entrySet().forEach(System.out::println);
			System.out.println("------------------------------------");
		}
	}


	/**
	 * Search: paging, sorted by score or id
	 */
	@Test
	public void testRetrievePage() throws IOException {
		// specify the indexes to search
		SearchRequest searchRequest = new SearchRequest("ik","shop");
		// build the query object
		SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
		// paging: start at offset 0, return 5 hits
		searchSourceBuilder.from(0).size(5);
		// add the query condition
		String key = "中国移动联通电信";
		searchSourceBuilder.query(QueryBuilders.multiMatchQuery(key,"content","goodsName"));
		// sort by score, ascending (descending is the default)
		// searchSourceBuilder.sort(SortBuilders.scoreSort().order(SortOrder.ASC));
		// sort by id, descending (the score is then not computed and prints NaN)
		searchSourceBuilder.sort(SortBuilders.fieldSort("_id").order(SortOrder.DESC));
		// attach the query and execute the request
		searchRequest.source(searchSourceBuilder);
		SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
		// total hit count
		System.out.println("total hits: "+searchResponse.getHits().getTotalHits().value);
		// result data
		SearchHit[] hits = searchResponse.getHits().getHits();
		for (SearchHit hit : hits) {
			System.out.println("score: "+hit.getScore());
			System.out.println("index: "+hit.getIndex());
			System.out.println("id: "+hit.getId());
			Map<String, Object> source = hit.getSourceAsMap();
			source.entrySet().forEach(System.out::println);
			System.out.println("------------------------------------");
		}
	}


	/**
	 * Search: highlighting
	 */
	@Test
	public void testHighlight() throws IOException {
		// specify the index to search
		SearchRequest searchRequest = new SearchRequest("shop");
		// build the query object
		SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
		// paging: start at offset 0, return 5 hits
		searchSourceBuilder.from(0).size(5);
		// build the highlight settings
		HighlightBuilder highlightBuilder = new HighlightBuilder();
		highlightBuilder.field("goodsName")
				.preTags("<span style='color:red'>")
				.postTags("</span>");
		searchSourceBuilder.highlighter(highlightBuilder);
		// add the query condition
		String key = "中国移动联通电信";
		searchSourceBuilder.query(QueryBuilders.multiMatchQuery(key,"content","goodsName"));
		// sort by score, ascending (descending is the default)
		// searchSourceBuilder.sort(SortBuilders.scoreSort().order(SortOrder.ASC));
		// sort by id, descending (the score is then not computed and prints NaN)
		searchSourceBuilder.sort(SortBuilders.fieldSort("_id").order(SortOrder.DESC));
		// attach the query and execute the request
		searchRequest.source(searchSourceBuilder);
		SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
		// total hit count
		System.out.println("total hits: "+searchResponse.getHits().getTotalHits().value);
		System.out.println(searchResponse.toString());
		// result data
		SearchHit[] hits = searchResponse.getHits().getHits();
		for (SearchHit hit : hits) {
			// build the result fields the project needs
			String highlightMessage = String.valueOf(hit.getHighlightFields().get("goodsName").fragments()[0]);
			Integer goodsId = Integer.valueOf((Integer) hit.getSourceAsMap().get("goodsId"));
			String goodsName = String.valueOf(hit.getSourceAsMap().get("goodsName"));
			BigDecimal marketPrice = new BigDecimal(String.valueOf(hit.getSourceAsMap().get("marketPrice")));
			String originalImg = String.valueOf(hit.getSourceAsMap().get("originalImg"));
			System.out.println("goodsId->"+goodsId);
			System.out.println("goodsName->"+goodsName);
			System.out.println("marketPrice->"+marketPrice);
			System.out.println("originalImg->"+originalImg);
			System.out.println("highlightMessage->"+highlightMessage);
			System.out.println("-----------------------------------------");
		}
	}




}

Summary

That is all for today. This post only gave a brief introduction to using ELK; I will add more detail as I run into new scenarios.