About ElasticSearch Clusters
For setting up an 8.2.2 cluster, refer to the official documentation.
[Besides the ES cluster itself, this article also outlines the logging module of the mobile cloud portal (移信云).]
The sections below cover a pre-8 version: 7.9.3.
Reference
In an ES cluster, the key concepts are sharding and replication.
Data is first split into shards that are stored on different nodes.
Each shard is then replicated to other nodes (a replica must not be placed on the same ES server as its primary, otherwise the backup would be pointless), so the nodes back each other up.
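To make the shard/replica idea concrete, here is a minimal sketch that creates an index with 3 primary shards and 1 replica per shard using the 7.x high-level REST client (the host and port reuse the address from the mapping request below; the client setup is only an example and mirrors the buildSetting method shown later in BaseElasticService):
import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.settings.Settings;

public class ShardReplicaDemo {
    public static void main(String[] args) throws Exception {
        // Point this at any one node of the cluster
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.1.30", 9200, "http")))) {
            CreateIndexRequest request = new CreateIndexRequest("shtl-log");
            // 3 primary shards spread across the nodes, plus 1 replica of each shard
            // that ES places on a different node, so the nodes back each other up
            request.settings(Settings.builder()
                    .put("index.number_of_shards", 3)
                    .put("index.number_of_replicas", 1));
            CreateIndexResponse response = client.indices().create(request, RequestOptions.DEFAULT);
            System.out.println("acknowledged = " + response.isAcknowledged());
        }
    }
}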
- Create the mapping (type) in ES via the REST API
Create the index first, then run the request below.
PUT request: http://192.168.1.30:9200/shtl-log/_mapping/logModel?include_type_name=true
Note: the trailing ?include_type_name=true must not be omitted, otherwise the request returns an error.
The request body is as follows (a Java sketch of sending this request is given after the body):
{
"logModel": {
"properties": {
"logType": {
"type": "keyword"
},
"businessName": {
"type": "keyword"
},
"level": {
"type": "keyword"
},
"reqUrl": {
"type": "keyword"
},
"ip": {
"type": "keyword"
},
"method": {
"type": "keyword"
},
"result": {
"type": "keyword"
},
"logStatus": {
"type": "keyword"
},
"error": {
"type": "keyword"
},
"logTitle": {
"type": "keyword"
},
"createDate": {
"type": "keyword"
}
}
}
}
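If you prefer not to use Postman, the same PUT can be issued from Java with the low-level elasticsearch-rest-client listed in the dependencies below. A minimal sketch (the mapping body is truncated to a single field here; pass the full JSON shown above):
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class PutMappingDemo {
    public static void main(String[] args) throws Exception {
        try (RestClient restClient = RestClient.builder(
                new HttpHost("192.168.1.30", 9200, "http")).build()) {
            Request request = new Request("PUT", "/shtl-log/_mapping/logModel");
            // As noted above, include_type_name=true must not be omitted
            request.addParameter("include_type_name", "true");
            // Truncated body for brevity; use the full mapping JSON shown above
            request.setJsonEntity("{\"logModel\":{\"properties\":{\"logType\":{\"type\":\"keyword\"}}}}");
            Response response = restClient.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}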
- Creating the index and the mapping
The tentative log data model is:
@Data
@Accessors(chain = true)
public class LogModel {
// Log type. LoginLog: login log, OperationLog: operation log, SystemLog: system log
private String logType = "";
// Owning module (custom). E.g. domain management is split into modules so operation logs can be filtered by module
private String businessName = "";
// Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
private String level = "";
// Request URL
private String reqUrl = "";
// IP address
private String ip = "";
// Method name
private String method = "";
// Response result
private String result = "";
// Operation status (0 = normal, 1 = error)
private String logStatus = "";
// Error message
private String error = "";
// Log title: the operation description shown when displaying the log, e.g. "User A changed option B from C to D".
// The raw result field may be exactly what the API returned, so rendering a processed text description is more intuitive.
private String logTitle = "";
// Creation time
private Date createDate = new Date();
}
Main dependencies
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-client</artifactId>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
</dependency>
ES 8.2.2
The ES 8.2.2 cluster has SSL enabled, and elasticsearch-rest-high-level-client was deprecated in 7.15.0, so the high-level REST client is no longer recommended; it is replaced by the Java API Client.
Demo: testing the ES API with ca.crt
Maven dependencies
<dependency>
<groupId>co.elastic.clients</groupId>
<artifactId>elasticsearch-java</artifactId>
<version>8.2.2</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.13.3</version>
</dependency>
<dependency>
<groupId>jakarta.json</groupId>
<artifactId>jakarta.json-api</artifactId>
<version>2.1.0</version>
</dependency>
LogModel.java, the ES mapping entity class
Same as the LogModel shown above.
TestDemo
package com.ruoyi.test.es;
import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.core.SearchResponse;
import co.elastic.clients.elasticsearch.core.search.Hit;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import com.ruoyi.web.controller.shtllog.model.LogModel;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.apache.http.ssl.SSLContextBuilder;
import org.apache.http.ssl.SSLContexts;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.junit.jupiter.api.Test;
import javax.net.ssl.SSLContext;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
/**
* @author shuang.liang
* @date 2022/6/7 13:15
*/
public class Test1 {
@Test
void test1() throws Exception {
final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials("elastic", "SHTLLS"));
Path caCertificatePath = Paths.get("C://shtlls/ca.crt");
CertificateFactory factory =
CertificateFactory.getInstance("X.509");
Certificate trustedCa;
try (InputStream is = Files.newInputStream(caCertificatePath)) {
trustedCa = factory.generateCertificate(is);
}
KeyStore trustStore = KeyStore.getInstance("pkcs12");
trustStore.load(null, null);
trustStore.setCertificateEntry("ca", trustedCa);
SSLContextBuilder sslContextBuilder = SSLContexts.custom()
.loadTrustMaterial(trustStore, null);
final SSLContext sslContext = sslContextBuilder.build();
RestClientBuilder https = RestClient.builder(
new HttpHost("es01", 9200, "https"))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider).setSSLContext(sslContext);
}
});
RestClient restClient = https.build();
ElasticsearchTransport transport = new RestClientTransport(
restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);
SearchResponse<LogModel> search = client.search(s -> s
.index("shtl-log")
.query(q -> q
.term(t -> t
.field("logTitle")
.value(v -> v.stringValue("测试操作"))
)),
LogModel.class);
for (Hit<LogModel> hit : search.hits().hits()) {
processProduct(hit.source());
}
}
private void processProduct(LogModel p) {
System.out.println(p.toString());
}
}
Query results:
Notes
Because this cluster was set up by following the official guide, the host name es01 used in the demo code must be mapped to the corresponding node IP in the hosts file.
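For reference, writing a document with the same 8.x Java API client is just as compact. Below is a sketch of an extra test method for the Test1 class above; buildClient() is hypothetical shorthand for the client-construction code already shown in test1():
// extra import on top of those in Test1
import co.elastic.clients.elasticsearch.core.IndexResponse;

@Test
void testIndexOne() throws Exception {
    ElasticsearchClient client = buildClient(); // hypothetical helper: same setup as in test1()
    LogModel model = new LogModel();
    model.setLogTitle("测试操作").setLevel("INFO").setLogStatus("0");
    IndexResponse resp = client.index(i -> i
            .index("shtl-log")
            .id(java.util.UUID.randomUUID().toString()) // any unique document ID
            .document(model));                          // serialized by the JacksonJsonpMapper
    System.out.println("result = " + resp.result());
}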
ElasticSearch 7.14.1 Cluster
Create the index manually through elasticsearch-head, keeping the default shard and replica settings, then create the mapping through the API.
url: http://192.168.96.238:9201/shtl-log/_mapping/logModel?include_type_name=true
body (note that compared with the earlier mapping, ip is now typed as ip and createDate as date):
{
"logModel": {
"properties": {
"logType": {
"type": "keyword"
},
"businessName": {
"type": "keyword"
},
"level": {
"type": "keyword"
},
"reqUrl": {
"type": "keyword"
},
"ip": {
"type": "ip"
},
"method": {
"type": "keyword"
},
"result": {
"type": "keyword"
},
"logStatus": {
"type": "keyword"
},
"error": {
"type": "keyword"
},
"logTitle": {
"type": "keyword"
},
"createDate": {
"type": "date"
}
}
}
}
ElasticSearch
- elasticsearch.yml configuration
[Security-related settings are not covered here; this is only a demo.]
Note that when the cluster is deployed with Docker on a single machine, the IPs in the configuration are the container IPs. Start the containers first, look up each container's IP with
docker inspect <container-id>
and then update the configuration files. With this setup, if the master node goes down, a new master is elected from the remaining nodes, and the failed node rejoins the cluster as a non-master node once it is restarted.
In Spring Boot, ES can also be configured through the application configuration file (see the Spring Boot section below). A quick way to verify that the three nodes actually formed one cluster is sketched after the node configurations.
- node1 configuration file
# Cluster name
cluster.name: mobileCloudPortal
# Node name
node.name: es-node1
# Listen on all network interfaces of this machine
network.host: 0.0.0.0
http.host: 0.0.0.0
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
# Whether this node may be elected master (true/false)
node.master: true
# Whether this node stores data (true/false)
node.data: true
# Port used for inter-node transport
transport.tcp.port: 9301
# New in es7.x: addresses of the master-eligible nodes; nodes listed here can be elected master once started
discovery.seed_hosts: ["172.17.0.2:9301","172.17.0.11:9302","172.17.0.12:9303"]
# New in es7.x: required to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["es-node1","es-node2","es-node3"]
# Address this node publishes to the other nodes; auto-detected if not set
network.publish_host: 172.17.0.2
# Transport host, defaults to network.host
transport.host: 0.0.0.0
# Number of master-eligible nodes in the cluster; for larger clusters the official recommendation is 2-4
discovery.zen.minimum_master_nodes: 1
- node2 configuration file
cluster.name: mobileCloudPortal
node.name: es-node2
network.host: 0.0.0.0
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
transport.tcp.port: 9302
discovery.seed_hosts: 172.17.0.2:9301
cluster.initial_master_nodes: es-node1
network.publish_host: 172.17.0.11
transport.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
- node3 configuration file
cluster.name: mobileCloudPortal
node.name: es-node3
network.host: 0.0.0.0
http.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
transport.tcp.port: 9303
discovery.seed_hosts: 172.17.0.2:9301
cluster.initial_master_nodes: es-node1
network.publish_host: 172.17.0.12
transport.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
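Once all three nodes are up, it is worth checking that they really formed a single cluster. A minimal sketch using the 7.x high-level client (the host and port follow the 192.168.96.238:9201 address used elsewhere in this article; adjust to your deployment):
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ClusterHealthCheck {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.96.238", 9201, "http")))) {
            ClusterHealthResponse health =
                    client.cluster().health(new ClusterHealthRequest(), RequestOptions.DEFAULT);
            // Expect 3 nodes and a green (or yellow) status once es-node1/2/3 have joined
            System.out.println(health.getClusterName() + ": "
                    + health.getNumberOfNodes() + " nodes, status=" + health.getStatus());
        }
    }
}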
- Using it in Spring Boot
The structure is as follows:
1. EsAutoConfigure
package com.ruoyi.web.controller.shtllog.config;
import com.ruoyi.web.controller.shtllog.service.BaseElasticService;
import org.elasticsearch.client.RestClient;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
/**
* @author shuang.liang
* @date 2022/6/2 14:44
*/
@Configuration
@ConditionalOnClass(RestClient.class)
@EnableConfigurationProperties(ElasticsearchRestClientProperties.class)
@Import(BaseElasticService.class)
public class EsAutoConfigure {
}
2. LogController
package com.ruoyi.web.controller.shtllog.controller;
import com.ruoyi.common.annotation.AnonymousAccess;
import com.ruoyi.common.core.domain.AjaxResult;
import com.ruoyi.common.utils.uuid.IdUtils;
import com.ruoyi.web.controller.shtllog.model.EsModel;
import com.ruoyi.web.controller.shtllog.model.LogModel;
import com.ruoyi.web.controller.shtllog.service.BaseElasticService;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* @author shuang.liang
* @date 2022/6/2 14:50
*/
@RequiredArgsConstructor
@Validated
@RestController
@RequestMapping("/log")
@Api(value = "LogController", tags = {"日志操作接口"})
public class LogController {
private static final String INDEX = "shtl-log";
@Autowired
private BaseElasticService service;
@AnonymousAccess
@ApiOperation(value = "add")
@PostMapping("/add")
protected AjaxResult<String> getTenantOperationLogPageList() {
LogModel model = new LogModel();
model.setBusinessName("模块2")
.setIp("127.9.9.1")
.setLogStatus("1")
.setLogTitle("操作正常")
.setLevel("INFO")
.setMethod("方法一")
.setReqUrl("请求url")
.setResult("返回数据");
EsModel esModel = new EsModel();
esModel.setId(IdUtils.fastUUID());
esModel.setData(model);
service.insertOrUpdateOne(INDEX, esModel);
return AjaxResult.success("添加成功!");
}
}
3. EsModel.java
package com.ruoyi.web.controller.shtllog.model;
import lombok.Data;
/**
* @author shuang.liang
* @date 2022/6/2 14:45
*/
@Data
public class EsModel<T> {
private String id;
private T data;
public EsModel() {
}
public EsModel(String id, T data) {
this.id = id;
this.data = data;
}
}
4. LogModel.java
package com.ruoyi.web.controller.shtllog.model;
import lombok.Data;
import lombok.experimental.Accessors;
import java.util.Date;
/**
* @author shuang.liang
* @date 2022/6/2 14:00
*/
@Data
@Accessors(chain = true)
public class LogModel {
// Log type. LoginLog: login log, OperationLog: operation log, SystemLog: system log
private String logType = "";
// Owning module (custom). E.g. domain management is split into modules so operation logs can be filtered by module
private String businessName = "";
// Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
private String level = "";
// Request URL
private String reqUrl = "";
// IP address
private String ip = "";
// Method name
private String method = "";
// Response result
private String result = "";
// Operation status (0 = normal, 1 = error)
private String logStatus = "";
// Error message
private String error = "";
// Log title: the operation description shown when displaying the log, e.g. "User A changed option B from C to D".
// The raw result field may be exactly what the API returned, so rendering a processed text description is more intuitive.
private String logTitle = "";
// Creation time
private Date createDate = new Date();
}
5. PageResultOutputDTO.java
package com.ruoyi.web.controller.shtllog.model;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import java.util.List;
/**
* @author shuang.liang
* @date 2022/6/2 14:47
*/
@Getter
@Setter
@ToString
@ApiModel(value = "PageResultOutputDTO", description = "分页返回结果对象")
public class PageResultOutputDTO<T> {
@ApiModelProperty(value = "总条数")
private Long total;
@ApiModelProperty(value = "总页数")
private Integer pages;
@ApiModelProperty(value = "列表数据")
private List<T> records;
}
6. BaseElasticService.java
package com.ruoyi.web.controller.shtllog.service;
import cn.hutool.core.util.PageUtil;
import com.alibaba.fastjson.JSON;
import com.ruoyi.web.controller.shtllog.model.EsModel;
import com.ruoyi.web.controller.shtllog.model.PageResultOutputDTO;
import lombok.extern.slf4j.Slf4j;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
/**
* @author shuang.liang
* @date 2022/6/2 14:44
*/
@Slf4j
public class BaseElasticService<T> {
@Autowired
private RestHighLevelClient restHighLevelClient;
/**
* Create an index
*
* @param idxName index name
* @param idxSQL  index mapping definition (JSON)
*/
public void createIndex(String idxName, String idxSQL) {
try {
if (this.indexExist(idxName)) {
log.error("index {} already exists, idxSql={}", idxName, idxSQL);
return;
}
CreateIndexRequest request = new CreateIndexRequest(idxName);
// configure shards and replicas
buildSetting(request);
request.mapping(idxSQL, XContentType.JSON);
// request.settings() can be used to specify settings manually
CreateIndexResponse res = restHighLevelClient.indices().create(request, RequestOptions.DEFAULT);
if (!res.isAcknowledged()) {
log.error("failed to create index");
throw new RuntimeException("index initialization failed");
}
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Create an index
*
* @param idxName
* @param builder
*/
public void createIndex(String idxName, XContentBuilder builder) {
try {
if (this.indexExist(idxName)) {
log.error("index {} already exists", idxName);
return;
}
CreateIndexRequest request = new CreateIndexRequest(idxName);
// configure shards and replicas
buildSetting(request);
request.mapping(builder);
CreateIndexResponse res = restHighLevelClient.indices().create(request, RequestOptions.DEFAULT);
if (!res.isAcknowledged()) {
throw new RuntimeException("index initialization failed");
}
} catch (Exception e) {
e.printStackTrace();
System.exit(0);
}
}
/**
* Check whether an index exists
*
* @param idxName
* @return
* @throws Exception
*/
public boolean indexExist(String idxName) throws Exception {
GetIndexRequest request = new GetIndexRequest(idxName);
request.local(false);
request.humanReadable(true);
request.includeDefaults(false);
request.indicesOptions(IndicesOptions.lenientExpandOpen());
return restHighLevelClient.indices().exists(request, RequestOptions.DEFAULT);
}
/**
* Configure shards and replicas
*
* @param request
*/
private void buildSetting(CreateIndexRequest request) {
request.settings(Settings.builder().put("index.number_of_shards", 3)
.put("index.number_of_replicas", 2));
}
/**
* Insert or update a document
*
* @param idxName
* @param model
*/
public void insertOrUpdateOne(String idxName, EsModel<T> model) {
IndexRequest request = new IndexRequest(idxName);
log.info("Data : id={},entity={}", model.getId(), JSON.toJSONString(model.getData()));
request.id(model.getId());
request.source(JSON.toJSONString(model.getData()), XContentType.JSON);
try {
restHighLevelClient.index(request, RequestOptions.DEFAULT);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Bulk-insert documents
*
* @param idxName
* @param list
*/
public void insertBatch(String idxName, List<EsModel<T>> list) {
BulkRequest request = new BulkRequest();
list.forEach(item -> request.add(new IndexRequest(idxName).id(item.getId())
.source(JSON.toJSONString(item.getData()), XContentType.JSON)));
try {
restHighLevelClient.bulk(request, RequestOptions.DEFAULT);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Bulk-delete documents by ID
*
* @param idxName
* @param idList
* @param <T>
*/
public <T> void deleteBatch(String idxName, Collection<T> idList) {
BulkRequest request = new BulkRequest();
idList.forEach(item -> request.add(new DeleteRequest(idxName, item.toString())));
try {
restHighLevelClient.bulk(request, RequestOptions.DEFAULT);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Query
*
* @param idxName index name
* @param builder query definition
* @param c       result class
* @return
*/
public <T> List<T> list(String idxName, SearchSourceBuilder builder, Class<T> c) {
SearchRequest request = new SearchRequest(idxName);
request.source(builder);
try {
SearchResponse response = restHighLevelClient.search(request, RequestOptions.DEFAULT);
SearchHit[] hits = response.getHits().getHits();
List<T> res = new ArrayList<>(hits.length);
for (SearchHit hit : hits) {
res.add(JSON.parseObject(hit.getSourceAsString(), c));
}
return res;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Paged query
*
* @param idxName index name
* @param builder query definition
* @param c       result class
* @return
*/
public <T> PageResultOutputDTO<T> page(String idxName, SearchSourceBuilder builder, Class<T> c) {
PageResultOutputDTO<T> pageResultOutputDTO = new PageResultOutputDTO<>();
SearchRequest request = new SearchRequest(idxName);
request.source(builder);
// query ES
SearchResponse searchResponse = null;
try {
searchResponse = restHighLevelClient.search(request, RequestOptions.DEFAULT);
SearchHits hits = searchResponse.getHits();
// total number of hits
Long total = hits.getTotalHits().value;
List<T> res = new ArrayList<>();
for (SearchHit hit : hits) {
res.add(JSON.parseObject(hit.getSourceAsString(), c));
}
pageResultOutputDTO.setPages(PageUtil.totalPage(Math.toIntExact(total), builder.size()));
pageResultOutputDTO.setTotal(total);
pageResultOutputDTO.setRecords(res);
return pageResultOutputDTO;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Delete an index
*
* @param idxName
*/
public void deleteIndex(String idxName) {
try {
if (!this.indexExist(idxName)) {
log.error("index {} does not exist", idxName);
return;
}
restHighLevelClient.indices().delete(new DeleteIndexRequest(idxName), RequestOptions.DEFAULT);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
/**
* Delete documents matching a query
*
* @param idxName
* @param builder
*/
public void deleteByQuery(String idxName, QueryBuilder builder) {
DeleteByQueryRequest request = new DeleteByQueryRequest(idxName);
request.setQuery(builder);
// batch size for the delete-by-query operation, max 10000
request.setBatchSize(10000);
request.setConflicts("proceed");
try {
restHighLevelClient.deleteByQuery(request, RequestOptions.DEFAULT);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
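As a usage sketch of the query methods above (a hypothetical caller, not part of the project; the index and field names follow the shtl-log mapping), this fetches the first page of ERROR logs from module "模块2", ten per page, newest first:
import com.ruoyi.web.controller.shtllog.model.LogModel;
import com.ruoyi.web.controller.shtllog.model.PageResultOutputDTO;
import com.ruoyi.web.controller.shtllog.service.BaseElasticService;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class LogQueryExample {
    // Hypothetical helper showing how a controller might call BaseElasticService.page()
    public PageResultOutputDTO<LogModel> errorLogsOfModule2(BaseElasticService<LogModel> service) {
        BoolQueryBuilder query = QueryBuilders.boolQuery()
                .filter(QueryBuilders.termQuery("businessName", "模块2"))
                .filter(QueryBuilders.termQuery("level", "ERROR"));
        SearchSourceBuilder builder = new SearchSourceBuilder()
                .query(query)
                .from(0)                // first page
                .size(10)               // 10 records per page
                .sort("createDate", SortOrder.DESC)
                .trackTotalHits(true);  // otherwise the total is capped at 10000
        return service.page("shtl-log", builder, LogModel.class);
    }
}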
Kibana
Note: Kibana 7.14.1, the version matching ElasticSearch 7.14.1, would not start properly; Kibana 7.14.2 was needed instead.
This is simply an issue the author hit during setup; it may not occur in every environment.
Configuration: kibana.yml [security-related settings are not covered; this is only a demo]
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
# the IPs here are those of the servers running ES
elasticsearch.hosts: ["http://192.168.96.238:9201","http://192.168.96.238:9202","http://192.168.96.238:9203"]
monitoring.ui.container.elasticsearch.enabled: true
# display language, defaults to English [English - en (default), Chinese - zh-CN, Japanese - ja-JP]
i18n.locale: "zh-CN"
Once the ES cluster is configured, Kibana starts normally.
Kibana can then be configured to visualize the data stored in ES; below is a small demo.
ElasticSearch SSL
Enabling security mode in ElasticSearch (see reference 1)
The flow is essentially the same as in the reference: first start a container, copy the configuration files out of config, and edit them. Then enter any one of the containers and run: bin/elasticsearch-certutil ca. When it finishes, run: bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 and just press Enter through the prompts.
Next copy the generated file to the host: docker cp -a es:/usr/share/elasticsearch/elastic-certificates.p12 /home/docker/elasticsearch/config/
Copy this file into every node's config directory and make it readable by all users: chmod +r /home/docker/elasticsearch/config/elastic-certificates.p12
After the configuration is done, enter the container again and run ./bin/elasticsearch-setup-passwords auto, answer y, and copy out the generated passwords; they will be needed later.
The whole ES cluster can now be started. To connect through elasticsearch-head, the credentials must be added to the URL, e.g. http://localhost:9100/?auth_user=elastic&auth_password=password (password being the one generated in the previous step).
If you still want to create the mapping through http://127.0.0.1:9201/shtl-log/_mapping/logModel?include_type_name=true, the request now needs an Authorization header. If you are unsure what to send, open http://192.168.96.238:9202/_cat in a browser: it asks you to log in, and after logging in you can copy the Authorization value from the request headers and reuse it in Postman to call the API normally. (The header can also be computed by hand, as sketched below.)
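The Authorization value does not have to be copied from the browser: ES security uses standard HTTP Basic auth, so the header is just Basic followed by base64(username:password). A minimal sketch for computing it, using the elastic password generated above:
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String user = "elastic";
        String password = "NfZUtDVWaOGX71RRCZW4"; // the password printed by elasticsearch-setup-passwords
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        // Paste this header into Postman when calling the _mapping API
        System.out.println("Authorization: Basic " + token);
    }
}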
The important configuration files are described below:
- elasticsearch.yml configuration file
# Cluster name
cluster.name: mobileCloudPortal
# Node name
node.name: es-node1
# Listen on all network interfaces of this machine
network.host: 0.0.0.0
http.host: 0.0.0.0
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
######### addition 1 #########
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Whether this node may be elected master (true/false)
node.master: true
# Whether this node stores data (true/false)
node.data: true
# Port used for inter-node transport
transport.tcp.port: 9301
# New in es7.x: addresses of the master-eligible nodes; nodes listed here can be elected master once started
discovery.seed_hosts: ["172.17.0.4:9301","172.17.0.3:9302","172.17.0.2:9303"]
# New in es7.x: required to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["es-node1","es-node2","es-node3"]
# Address this node publishes to the other nodes; auto-detected if not set
network.publish_host: 172.17.0.4
# Transport host, defaults to network.host
transport.host: 0.0.0.0
# Number of master-eligible nodes in the cluster; for larger clusters the official recommendation is 2-4
discovery.zen.minimum_master_nodes: 1
######### addition 2 #########
# enable security
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.audit.enabled: true
- kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: ["http://192.168.96.238:9201","http://192.168.96.238:9202","http://192.168.96.238:9203"]
# main addition: the elastic username and password
elasticsearch.username: "elastic"
elasticsearch.password: "NfZUtDVWaOGX71RRCZW4"
monitoring.ui.container.elasticsearch.enabled: true
# display language, defaults to English [English - en (default), Chinese - zh-CN, Japanese - ja-JP]
i18n.locale: "zh-CN"
- Project configuration file (a rough manual equivalent of what Spring Boot builds from it is sketched after the snippet):
spring:
elasticsearch:
rest:
uris: 192.168.96.238:9201,192.168.96.238:9202,192.168.96.238:9203
# again, the elastic username and password
username: elastic
password: NfZUtDVWaOGX71RRCZW4
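For reference, the RestHighLevelClient that Spring Boot's auto-configuration builds from these properties is roughly equivalent to the hand-rolled sketch below (shown only to make the username/password wiring explicit; the project itself relies on the auto-configuration):
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ManualClientSketch {
    public static RestHighLevelClient build() {
        BasicCredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "NfZUtDVWaOGX71RRCZW4"));
        return new RestHighLevelClient(RestClient.builder(
                        new HttpHost("192.168.96.238", 9201, "http"),
                        new HttpHost("192.168.96.238", 9202, "http"),
                        new HttpHost("192.168.96.238", 9203, "http"))
                // attach the credentials to every request
                .setHttpClientConfigCallback(http -> http.setDefaultCredentialsProvider(credentials)));
    }
}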
Other supplementary files
To make ES easier to test, a mock API was set up in ApiFox and a small HTML page was written that calls it in a loop and feeds the mocked records into ES through our own add-log interface, which makes it easy to generate large amounts of test data.
- ApiFox -> under the API -> 【高级Mock】(Advanced Mock) -> Body. The body uses Mock.js syntax: "field|1": [...] picks one value at random, and '@url' / '@ip' generate a random URL / IP.
{
"index": "shtl-log",
"dataJSONStr": {
"logType|1": [
"OPERATION LOG",
"LOGIN LOG"
],
"businessName|1": [
"模块1",
"模块2",
"模块3",
"模块4",
"模块5"
],
"level|1": [
"DEBUG",
"INFO",
"WARNING",
"ERROR",
"CRITICAL"
],
"reqUrl": '@url',
"ip": '@ip',
"method|1": [
"add1()",
"add2()",
"add3()",
"add4()",
"add5()",
"add6()",
"add7()",
"add8()",
"add9()",
"add10()"
],
"result|1": [
"{\"code\":\"200\",\"msg\":\"success\",\"data\":\"null\"}",
""
],
"logStatus|1": [
"0",
"1"
],
"error|1": [
"error msg",
""
],
"logTitle|1": [
"添加",
"删除",
"修改",
"查询"
]
}
}
- es-demo-addLog.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script src="http://libs.baidu.com/jquery/2.1.4/jquery.min.js"></script>
</head>
<body>
<input id="num" type="number" />
<button>点击</button>
</body>
<script>
$("button").click(function () {
var num = document.getElementById("num").value;
var data = {
shtllog: ""
}
var start = new Date().getTime()
for (var i = 0; i < num; i++) {
$.ajax({
type: "post",
// URL of the local mock endpoint
url: "http://127.0.0.1:4523/m2/748314-0-default/24381703",
data: data,
success: (res) => {
$.ajax({
type: "post",
// URL of our project's add-to-ES endpoint
url: "http://127.0.0.1:8000/protal-es/log/add-log",
contentType: "application/json",
// pass the mocked data as the request body to the add-log endpoint, which writes it to ES
data: JSON.stringify(res),
success: (res) => {
console.log("共耗时:", (new Date().getTime() - start) / 1000, " s")
}
})
}
})
}
});
</script>
</html>
MQ module notes
A new protal-mq-demo module was added, also under mobile-cloud-portal. The RabbitMQ version used is 3.9.13 (it was already installed on the server, so rather than installing a newer version the existing one was reused).
In testing, 10,000 log records were sent to RabbitMQ; a listener on the log queue then forwarded them through the portal-es module's add-log API into the target index in Elasticsearch. Across multiple runs of 10,000 records each, the success rate was 100%.
In the screenshot above, unacked shows 250 messages not yet acknowledged by the consumer. This matches the Prefetch value shown under Channels: the consumer fetches at most 250 messages from RabbitMQ at a time and only fetches more after consuming them, which is why the queue shows 250 unacked messages.
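The consumer side of the test is essentially a listener on the log queue that forwards each message to the portal-es add-log API. A minimal sketch (the queue name and the PortalEsClient interface are hypothetical placeholders, not the actual module code; Spring AMQP's default prefetch of 250 is what produces the 250 unacked messages seen above):
import com.alibaba.fastjson.JSON;
import com.ruoyi.web.controller.shtllog.model.LogModel;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Hypothetical client interface; in the real module this would be an OpenFeign client for portal-es
interface PortalEsClient {
    void addLog(LogModel model);
}

@Component
public class LogQueueListener {

    private final PortalEsClient portalEsClient;

    public LogQueueListener(PortalEsClient portalEsClient) {
        this.portalEsClient = portalEsClient;
    }

    // "shtl.log.queue" is a placeholder queue name
    @RabbitListener(queues = "shtl.log.queue")
    public void onLogMessage(String message) {
        LogModel model = JSON.parseObject(message, LogModel.class);
        // forward to the ES add-log interface; an exception here causes the message to be redelivered
        portalEsClient.addLog(model);
    }
}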
HTML page for writing logs
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script src="http://libs.baidu.com/jquery/2.1.4/jquery.min.js"></script>
</head>
<body>
<input id="num" type="number" />
<button>点击</button>
</body>
<script>
$("button").click(function () {
var num = document.getElementById("num").value;
var data = {
shtllog: ""
}
var start = new Date().getTime()
for (var i = 0; i < num; i++) {
$.ajax({
type: "post",
// URL of the local mock endpoint
url: "http://127.0.0.1:4523/m2/748314-0-default/24381703",
data: data,
success: (res) => {
res.dataJSONStr.createDate = Date.now()
var data = {}
data.index = res.index
data.dataJSONStr = JSON.stringify(res.dataJSONStr)
$.ajax({
type: "post",
// URL of our project's endpoint that publishes the log to MQ
url: "http://127.0.0.1:8098/protal-mq-demo/openfeign/add-mq",
contentType: "application/json",
// pass the mocked data as the request body; it reaches ES via the MQ listener
data: JSON.stringify(data),
success: (res) => {
console.log("共耗时:", (new Date().getTime() - start) / 1000, " s")
}
})
}
})
}
});
</script>
</html>