Table of Contents

I. Environment Preparation

1. HBase release archive

2. HBase official documentation

3. Passwordless SSH login on Linux

4. ZooKeeper HA deployment on Linux

5. Building a Hadoop cluster with Docker

II. HBase Installation

1. Download HBase

2. Extract the archive

3. Create a symbolic link

III. HBase Configuration

1. regionservers configuration

2. hbase-site.xml configuration

3. hbase-env.sh configuration

4. backup-masters configuration

IV. Environment Variables

1. HBase environment variables

2. Apply the changes immediately

V. Starting HBase

1. Start ZooKeeper

2. Start Hadoop

3. Start HBase

VI. Verifying the Installation

1. HBase Shell

2. HBase WebUI


I. Environment Preparation

1. HBase release archive

Index of /dist/hbase: http://archive.apache.org/dist/hbase/

2. HBase official documentation

Apache HBase ™ Reference Guide: https://hbase.apache.org/book.html

3. Passwordless SSH login on Linux

Big data basics: passwordless SSH login (qq262593421's blog on CSDN)
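
A minimal sketch of what the linked post covers (host names are the ones used for the cluster below; everything else is an assumption):

# Generate a key pair on each node (empty passphrase)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Push the public key to every node, including the local one
for host in hadoop01 hadoop02 hadoop03; do ssh-copy-id "$host"; done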

4. ZooKeeper HA deployment on Linux

Big data high availability: installing and configuring ZooKeeper 3.4.5 (qq262593421's blog on CSDN)
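
A minimal sketch of the ensemble configuration the linked post arrives at (the client port and data directory match hbase-site.xml below; the rest is assumed):

# conf/zoo.cfg, identical on all three nodes
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
# plus a per-node id file, e.g. on hadoop01: echo 1 > /usr/local/zookeeper/data/myid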

5. Building a Hadoop cluster with Docker

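A hypothetical sketch of the port mappings such a Docker cluster would use; the image name and flags are placeholders, but mapping each container's HMaster web UI (16010) to a distinct host port is what makes the 127.0.0.1:16011 / 127.0.0.1:16012 addresses in section VI work:

# my-hadoop-image is a placeholder image name, not from the original setup
docker run -d --name hadoop01 --hostname hadoop01 -p 16011:16010 my-hadoop-image
docker run -d --name hadoop02 --hostname hadoop02 -p 16012:16010 my-hadoop-image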

II. HBase Installation

1. Download HBase

wget -P /usr/local/hadoop/ http://archive.apache.org/dist/hbase/2.1.0/hbase-2.1.0-bin.tar.gz

2. Extract the archive

cd /usr/local/hadoop
tar zxpf hbase-2.1.0-bin.tar.gz -C /usr/local/hadoop

3. Create a symbolic link

ln -s /usr/local/hadoop/hbase-2.1.0 /usr/local/hadoop/hbase

III. HBase Configuration

1. regionservers configuration

The regionservers file plays the same role as Hadoop's workers file: list exactly the hosts that run Hadoop DataNodes, and those hosts become the RegionServers.

echo 'hadoop01
hadoop02
hadoop03' > /usr/local/hadoop/hbase/conf/regionservers

2. hbase-site.xml configuration

mv /usr/local/hadoop/hbase/conf/hbase-site.xml /usr/local/hadoop/hbase/conf/hbase-site.xml.init
vim /usr/local/hadoop/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<!--
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase/master</value>
</property>
-->
<property>
<name>hbase.master</name>
<value>60000</value>
<!-- In HBase HA mode, only the port needs to be configured here -->
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/home/cluster/hbase/tmp</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://ns1/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop01,hadoop02,hadoop03</value>
<!-- <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value> -->
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper/data</value>
</property>
<property>
<name>dfs.datanode.max.transfer.threads</name>
<value>4096</value>
</property>
<!--
<property>
<name>hbase.master</name>
<value>hadoop01</value>
</property>
-->
<!--
<property>
<name>hbase.masters</name>
<value>hadoop01,hadoop02</value>
<description>List of master rpc end points for the hbase cluster.</description>
</property>
-->
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
<!--
<property>
<name>hbase.lease.recovery.dfs.timeout</name>
<value>23000</value>
<description>How much time we allow elapse between calls to recover lease.
Should be larger than the dfs timeout.</description>
</property>
<property>
<name>dfs.client.socket-timeout</name>
<value>10000</value>
<description>Down the DFS timeout from 60 to 10 seconds.</description>
</property>
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>10000</value>
<description>Down the DFS timeout from 8 * 60 to 10 seconds.</description>
</property>
<property>
<name>ipc.client.connect.timeout</name>
<value>3000</value>
<description>Down from 60 seconds to 3.</description>
</property>
-->

<!--
<property>
<name>ipc.client.connect.max.retries.on.timeouts</name>
<value>2</value>
<description>Down from 45 seconds to 3 (2 == 3 retries).</description>
</property>
<property>
<name>dfs.namenode.avoid.read.stale.datanode</name>
<value>true</value>
<description>Enable stale state in hdfs</description>
</property>
<property>
<name>dfs.namenode.stale.datanode.interval</name>
<value>20000</value>
<description>Down from default 30 seconds</description>
</property>
<property>
<name>dfs.namenode.avoid.write.stale.datanode</name>
<value>true</value>
<description>Enable stale state in hdfs</description>
</property>
-->

<!--
<property>
<name>hbase.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.regionserver.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.rpc.engine</name>
<value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
-->

<!-- HFile v3 Support -->
<property>
<name>hfile.format.version</name>
<value>3</value>
</property>
<!-- HBase Superuser -->
<property>
<name>hbase.superuser</name>
<value>hbase,admin,root,hdfs,zookeeper,hive,hadoop,hue,impala,spark,kylin</value>
</property>

<!-- geomesa-hbase -->
<!--
<property>
<name>hbase.coprocessor.user.region.classes</name>
<value>org.locationtech.geomesa.hbase.coprocessor.GeoMesaCoprocessor</value>
</property>
-->
<property>
<name>hbase.table.sanity.checks</name>
<value>false</value>
</property>
<property>
<name>hbase.coprocessor.abortonerror</name>
<value>false</value>
</property>

<!-- adjust and optimize -->
<property>
<name>hfile.block.cache.size</name>
<value>0.4</value>
<!-- <value>0.2</value>  (an empty <value/> would fail to parse as a float; 0.4 is the HBase default) -->
<description>Fraction of the heap given to the StoreFile read cache (block cache). It directly affects read performance, so larger is generally better: if writes are much rarer than reads, 0.4-0.5 is fine; with balanced reads and writes, around 0.3; if writes outnumber reads, just keep the default.</description>
</property>
<!-- Enable MultiWAL: https://www.jianshu.com/p/b23800d9b227 -->
<property>
<name>hbase.wal.provider</name>
<value>multiwal</value>
<description>MultiWAL: with only one WAL per RegionServer, WAL writes must be sequential (HDFS appends are serialized), which can become a performance bottleneck. MultiWAL lets a RegionServer write several WALs in parallel over separate HDFS pipelines, raising aggregate throughput, though not the throughput of any single region.</description>
</property>

<!-- phoenix config -->
<!--
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
<name>phoenix.schema.isNamespaceMappingEnabled</name>
<value>true</value>
</property>
<property>
<name>phoenix.schema.mapSystemTablesToNamespace</name>
<value>true</value>
</property>
<property>
<name>phoenix.functions.allowUserDefinedFunctions</name>
<value>true</value>
<description>enable UDF functions</description>
</property>
-->

</configuration>
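
Because hbase.rootdir points at the HDFS nameservice ns1 rather than a host:port pair, HBase must be able to resolve that nameservice. In this setup that is handled by putting Hadoop's config directory on HBASE_CLASSPATH in hbase-env.sh below; an alternative sketch is to link the Hadoop client configs into HBase's conf directory:

ln -s /usr/local/hadoop/hadoop/etc/hadoop/core-site.xml /usr/local/hadoop/hbase/conf/core-site.xml
ln -s /usr/local/hadoop/hadoop/etc/hadoop/hdfs-site.xml /usr/local/hadoop/hbase/conf/hdfs-site.xml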

3. hbase-env.sh configuration

mv /usr/local/hadoop/hbase/conf/hbase-env.sh /usr/local/hadoop/hbase/conf/hbase-env.sh.init
vim /usr/local/hadoop/hbase/conf/hbase-env.sh
#!/usr/bin/env bash

# JVM options for the HBase daemons: CMS collector plus GC logging
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/usr/local/hadoop/hbase/logs/jvm-gc-hbase.log"

export JAVA_HOME=/usr/java/jdk1.8
export HBASE_HEAPSIZE=4G

export HADOOP_HOME=/usr/local/hadoop/hadoop
export HBASE_HOME=/usr/local/hadoop/hbase
# Put Hadoop's config directory on the classpath so HBase can resolve the ns1 nameservice
export HBASE_CLASSPATH=/usr/local/hadoop/hadoop/etc/hadoop
# Use the external ZooKeeper ensemble instead of the one bundled with HBase
export HBASE_MANAGES_ZK=false
export HBASE_PID_DIR=/var/hadoop/pids

4. backup-masters configuration

When HBase starts, the nodes listed in backup-masters are brought up as standby HMasters.

echo 'hadoop01
hadoop02' > /usr/local/hadoop/hbase/conf/backup-masters
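
The unpacked directory, the symlink, and these conf files must exist on every node. Assuming the same /usr/local/hadoop layout on all three hosts, one way to sync them (a sketch, not from the original):

for host in hadoop02 hadoop03; do
  rsync -a /usr/local/hadoop/hbase-2.1.0/ "$host":/usr/local/hadoop/hbase-2.1.0/
  ssh "$host" "ln -sfn /usr/local/hadoop/hbase-2.1.0 /usr/local/hadoop/hbase"
done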

IV. Environment Variables

1. HBase environment variables

echo '
## hbase config
export HBASE_HOME=/usr/local/hadoop/hbase
export PATH=$PATH:$HBASE_HOME/bin' >> /etc/profile

2. Apply the changes immediately

source /etc/profile
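
A quick check that the new PATH entry works; this should print the 2.1.0 version banner:

hbase version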

V. Starting HBase

1. Start ZooKeeper

zkServer.sh start
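
This must be run on each of the three ZooKeeper nodes; once all are up, one node should report itself as leader and the others as follower:

zkServer.sh status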

2. Start Hadoop

hdfs --daemon start zkfc
start-all.sh
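
Before starting HBase it is worth confirming that HDFS HA came up. The exact process list depends on which daemons run where, but in a typical HA deployment a master node should show at least NameNode, DFSZKFailoverController and ResourceManager, and the workers DataNode and NodeManager:

jps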

3. Start HBase

start-hbase.sh
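
After this, jps should additionally show HMaster on the master and backup-master nodes and HRegionServer on every host listed in regionservers:

jps | grep -E 'HMaster|HRegionServer'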

VI. Verifying the Installation

1. HBase Shell

hbase shell
create 'tb1','cmf1','cmf2','cmf3'
list
list_namespace
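
A few more shell commands to confirm that reads and writes work against the new table (row key and values are arbitrary examples; cmf1 is one of the column families created above):

put 'tb1', 'row1', 'cmf1:name', 'hbase'
get 'tb1', 'row1'
scan 'tb1'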

2. HBase WebUI

http://hadoop01:16010  http://127.0.0.1:16011

http://hadoop02:16010  http://127.0.0.1:16012

(16010 is the default HMaster web UI port; the 127.0.0.1 addresses are presumably the host-mapped Docker ports from the cluster in section I.)