Hive DML: Loading Data into Hive Tables


I. Hive Data Operations: DML

1. Importing Data

Method 1: Load data into a table (Load)


// Syntax
hive> load data [local] inpath '/opt/module/datas/student.txt' [overwrite] into table student [partition (partcol1=val1, ...)];


  • load data: loads data into the table
  • local: load from the local file system; without it, the data is loaded from HDFS
  • inpath: the path of the data to load
  • overwrite: overwrite the data already in the table; otherwise the load appends
  • into table: the table to load into
  • student: the specific table name
  • partition: load into the specified partition (a sketch follows this list)
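
For instance, the partition option loads a file straight into a single partition. A minimal sketch, assuming a table stu_partition that is partitioned by a month string column (that schema is an assumption, not shown above):


-- load a local file directly into the month='202006' partition
load data local inpath '/opt/module/datas/student.txt'
into table stu_partition partition (month='202006');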

Steps:

(1) Create a table named student5


hive (default)> create table student5(id int, name string)
              > row format delimited fields terminated by '\t';
              
OK
Time taken: 0.114 seconds
hive (default)>


(2) Load a local file into Hive


hive (default)> load data local inpath '/usr/local/hadoop/module/datas/student.txt' into table default.student5;

Loading data to table default.student5
Table default.student5 stats: [numFiles=1, totalSize=39]
OK
Time taken: 0.378 seconds
hive (default)>


(3) Load data, overwriting the table's existing contents


hive (default)> load data local inpath '/usr/local/hadoop/module/datas/student.txt' overwrite   into table default.student5;

Loading data to table default.student5
Table default.student5 stats: [numFiles=1, numRows=0, totalSize=39, rawDataSize=0]
OK
Time taken: 0.461 seconds
hive (default)> 

//Query the table again afterwards; it has data
hive (default)> select * from student5;

OK
student5.id	student5.name
1001	zhangshan
1002	lishi
1003	zhaoliu
Time taken: 0.102 seconds, Fetched: 3 row(s)
hive (default)> 

//Load the local file once more (appending)
hive (default)>  load data local inpath '/usr/local/hadoop/module/datas/student.txt' into table default.student5;

Loading data to table default.student5
Table default.student5 stats: [numFiles=2, numRows=0, totalSize=78, rawDataSize=0]
OK
Time taken: 0.426 seconds
hive (default)> 


//Query the table again: the row count has grown (3 rows before, 6 now)
hive (default)> select * from student5;

OK
student5.id	student5.name
1001	zhangshan
1002	lishi
1003	zhaoliu
1001	zhangshan
1002	lishi
1003	zhaoliu
Time taken: 0.099 seconds, Fetched: 6 row(s)
hive (default)> 


//Loading with overwrite replaces the existing rows (the 6 rows are overwritten)
hive (default)> load data local inpath '/usr/local/hadoop/module/datas/student.txt' overwrite   into table default.student5;

Loading data to table default.student5
Table default.student5 stats: [numFiles=1, numRows=0, totalSize=39, rawDataSize=0]
OK
Time taken: 0.479 seconds
hive (default)> 

hive (default)> select * from student5;
OK
student5.id	student5.name
1001	zhangshan
1002	lishi
1003	zhaoliu
Time taken: 0.102 seconds, Fetched: 3 row(s)
hive (default)>


(4) Upload student.txt to the HDFS root directory:


[root@hadoop101 ~]# cd /usr/local/hadoop/module/datas/
[root@hadoop101 datas]# hadoop fs -put student.txt /
[root@hadoop101 datas]#




Load the file from the HDFS root directory into Hive (no local keyword this time)


hive (default)> load data  inpath '/student.txt' into table default.student5;

Loading data to table default.student5
Table default.student5 stats: [numFiles=2, numRows=0, totalSize=78, rawDataSize=0]
OK
Time taken: 0.432 seconds
hive (default)> 

//Query the table: 3 more rows appeared
hive (default)>  select * from student5;

OK
student5.id	student5.name
1001	zhangshan
1002	lishi
1003	zhaoliu
1001	zhangshan
1002	lishi
1003	zhaoliu
Time taken: 0.091 seconds, Fetched: 6 row(s)
hive (default)>


Originally there were three rows; now there are six. At the same time, check in the HDFS file system whether /student.txt still exists: a load from HDFS moves the file into the table's warehouse directory, so it is no longer at the root.




Method 2: Insert data into a table from a query (Insert)

Steps:


//Check whether any of the current tables is partitioned
hive (default)> show tables;

OK
tab_name
db_hive1
dept
dept_partition2
emp
hive_test
sqoop_test
stu2
stu_partition
student
student1
student3
student4
student5
Time taken: 1.484 seconds, Fetched: 13 row(s)
hive (default)> 

hive (default)> desc stu2;

OK
col_name	data_type	comment
id                  	int                 	                    
name                	string              	                    
month               	string              	                    
day                 	string              	                    
	 	 
# Partition Information	 	 
# col_name            	data_type           	comment             
	 	 
month               	string              	                    
day                 	string              	                    
Time taken: 0.69 seconds, Fetched: 10 row(s)
hive (default)>


As shown above, stu2 is a partitioned table. If no partitioned table exists, create one:


hive (default)> create table student(id int, name string) partitioned by (month string) 
row format delimited fields terminated by '\t';


If you already have a partitioned table, skip that step. Next, insert data into the partitioned table stu2 with a basic insert:


hive (default)> insert into table stu2 partition(month='202006',day='26') values(1,'wangwu');

Query ID = root_20200102135829_bda9ac50-448a-4038-b617-33a3d1f448b4
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1577971593473_0001, Tracking URL = http://hadoop101:8088/proxy/application_1577971593473_0001/
Kill Command = /usr/local/hadoop/module/hadoop-2.7.2/bin/hadoop job  -kill job_1577971593473_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-01-02 13:58:52,919 Stage-1 map = 0%,  reduce = 0%
2020-01-02 13:59:05,561 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.8 sec
MapReduce Total cumulative CPU time: 2 seconds 800 msec
Ended Job = job_1577971593473_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop101:9000/user/hive/warehouse/stu2/month=202006/day=26/.hive-staging_hive_2020-01-02_13-58-29_834_7263697218847751236-1/-ext-10000
Loading data to table default.stu2 partition (month=202006, day=26)
Partition default.stu2{month=202006, day=26} stats: [numFiles=1, numRows=1, totalSize=9, rawDataSize=8]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 2.8 sec   HDFS Read: 3660 HDFS Write: 97 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 800 msec
OK
_col0	_col1
Time taken: 38.147 seconds


Query stu2 again


hive (default)> select * from stu2;

OK
stu2.id	stu2.name	stu2.month	stu2.day
1001	zhangshan	202006	23
1002	lishi	202006	23
1003	zhaoliu	202006	23
1	wangwu	202006	26
Time taken: 0.336 seconds, Fetched: 4 row(s)
hive (default)> 

//Inserting duplicate rows is allowed
hive (default)> insert into table stu2 partition(month='202006',day='26') values(1,'wangwu');

hive (default)> select * from stu2;
OK
stu2.id	stu2.name	stu2.month	stu2.day
1001	zhangshan	202006	23
1002	lishi	202006	23
1003	zhaoliu	202006	23
1	wangwu	202006	26
1	wangwu	202006	26
Time taken: 0.112 seconds, Fetched: 5 row(s)
hive (default)>




Basic insert (from the result of a query on a single table); an overwrite variant is sketched below


hive (default)> insert into table stu2 partition(month=202006,day=29)
              > select * from student;
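
An insert overwrite variant replaces whatever the target partition already holds instead of appending to it. A sketch (the month and day values are illustrative, and it assumes the id and name columns of student line up with stu2):


-- overwrite the target partition rather than appending
insert overwrite table stu2 partition(month='202006', day='30')
select id, name from student;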




Method 3: Create a table from a query and load it with the data (As Select)

Create a table from a query result (the rows returned by the query are written into the new table)


create table if not exists student3
as select id, name from student;
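
Since as select copies the data but not necessarily the source table's storage settings, a delimiter can be set in the same statement. A sketch (the table name student3_tab is illustrative):


-- CTAS with an explicit field delimiter
create table if not exists student3_tab
row format delimited fields terminated by '\t'
as select id, name from student;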


Method 4: Specify the data location with Location at table-creation time

(1) Create a table


hive (default)> create table student2 like student; 
OK
Time taken: 0.247 seconds
hive (default)>




Next, upload the local file /usr/local/hadoop/module/datas/student.txt to /user/hive/warehouse/student2 on HDFS:


hive (default)> dfs -put /usr/local/hadoop/module/datas/student.txt /user/hive/warehouse/student2 ;




Now the student2 table returns data:


hive (default)> select * from student2;

OK
student2.id	student2.name	student2.age
1001	zhangshan	NULL
1002	lishi	NULL
1003	zhaoliu	NULL
Time taken: 0.081 seconds, Fetched: 3 row(s)
hive (default)>


Next, create a MrZhou directory under /user on HDFS:


hive (default)> dfs -mkdir -p /user/MrZhou;
hive (default)>




Then upload the local file /usr/local/hadoop/module/datas/student.txt into the /user/MrZhou directory on HDFS:


hive (default)> dfs -put  /usr/local/hadoop/module/datas/student.txt /user/MrZhou;
hive (default)>


Now create a new table that points at that directory with location; querying it still returns the data:


hive (default)> create table student6 like student
              > location '/user/MrZhou';
              
OK
Time taken: 0.181 seconds
hive (default)> 
hive (default)> select * from student6;

OK
student6.id	student6.name	student6.age
1001	zhangshan	NULL
1002	lishi	NULL
1003	zhaoliu	NULL
Time taken: 0.118 seconds, Fetched: 3 row(s)
hive (default)>


Method 5: Import data into a specified Hive table


hive (default)> import table student from  '/user/MrZhou';
FAILED: SemanticException [Error 10027]: Invalid path
hive (default)> 

//The error means the import path is invalid: import expects a directory produced by export (it looks for a _metadata file there), and /user/MrZhou only contains a plain text file


2. Exporting Data

(1) Insert export (write a query result to a local directory)


hive (default)>  insert overwrite local directory '/usr/local/hadoop/module/datas/stu1'
              > select * from student;
              
Query ID = root_20200102224956_684283c4-a161-44d6-883e-b888b0b1e5d7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1577971593473_0006, Tracking URL = http://hadoop101:8088/proxy/application_1577971593473_0006/
Kill Command = /usr/local/hadoop/module/hadoop-2.7.2/bin/hadoop job  -kill job_1577971593473_0006
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-01-02 22:50:09,422 Stage-1 map = 0%,  reduce = 0%
2020-01-02 22:50:19,536 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.41 sec
MapReduce Total cumulative CPU time: 1 seconds 410 msec
Ended Job = job_1577971593473_0006
Copying data to local directory /usr/local/hadoop/module/datas/stu1
Copying data to local directory /usr/local/hadoop/module/datas/stu1
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.41 sec   HDFS Read: 3019 HDFS Write: 48 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 410 msec
OK
student.id	student.name	student.age
Time taken: 25.608 seconds
hive (default)>


Looking in /usr/local/hadoop/module/datas, there is now a stu1 directory


[root@hadoop101 datas]# ll
total 24
-rw-r--r-- 1 root root  69 Dec 31 01:59 dept.txt
-rw-r--r-- 1 root root 657 Dec 31 02:07 emp.txt
-rw-r--r-- 1 root root  23 Dec 30 02:42 hivef.sql
-rw-r--r-- 1 root root  54 Dec 30 02:49 hive_result.txt
drwxr-xr-x 3 root root 115 Jan  2 22:50 stu1
-rw-r--r-- 1 root root  39 Dec 29 17:36 student.txt
-rw-r--r-- 1 root root 144 Dec 30 16:21 test.txt
[root@hadoop101 datas]# ls stu1;
000000_0
[root@hadoop101 datas]# 

//View the 000000_0 file
[root@hadoop101 datas]# cat stu1/000000_0 
1001zhangshan\N
1002lishi\N
1003zhaoliu\N
[root@hadoop101 datas]#


As shown above, the output has no readable delimiters (the default field separator \001 is non-printing, and NULL is written as \N). Export the query result to the local directory again, this time with formatting:


hive (default)> insert overwrite local directory '/usr/local/hadoop/module/datas/stu1'
              > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' 
              > select * from student;
              
Query ID = root_20200102230048_66acb502-cc9a-4ff3-8a97-c325930eba6b
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1577971593473_0007, Tracking URL = http://hadoop101:8088/proxy/application_1577971593473_0007/
Kill Command = /usr/local/hadoop/module/hadoop-2.7.2/bin/hadoop job  -kill job_1577971593473_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-01-02 23:01:00,277 Stage-1 map = 0%,  reduce = 0%
2020-01-02 23:01:09,973 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.43 sec
MapReduce Total cumulative CPU time: 1 seconds 430 msec
Ended Job = job_1577971593473_0007
Copying data to local directory /usr/local/hadoop/module/datas/stu1
Copying data to local directory /usr/local/hadoop/module/datas/stu1
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.43 sec   HDFS Read: 3102 HDFS Write: 48 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 430 msec
OK
student.id	student.name	student.age
Time taken: 22.805 seconds
hive (default)>


Check /usr/local/hadoop/module/datas again: still a stu1 directory, and the file inside is still named 000000_0


[root@hadoop101 datas]# ls stu1;
000000_0
[root@hadoop101 datas]# cat stu1/000000_0 
1001	zhangshan	\N
1002	lishi	\N
1003	zhaoliu	\N
[root@hadoop101 datas]#


Only the format changed. Next, export the query result to HDFS (no local keyword):


hive (default)> insert overwrite directory '/user/MrZhou/student'
              >  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
              >    select * from student;
              
Query ID = root_20200102231254_d1e857f4-01c6-491e-bf65-32abba3a7092
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1577971593473_0008, Tracking URL = http://hadoop101:8088/proxy/application_1577971593473_0008/
Kill Command = /usr/local/hadoop/module/hadoop-2.7.2/bin/hadoop job  -kill job_1577971593473_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-01-02 23:13:06,802 Stage-1 map = 0%,  reduce = 0%
2020-01-02 23:13:16,473 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.8 sec
MapReduce Total cumulative CPU time: 1 seconds 800 msec
Ended Job = job_1577971593473_0008
Stage-3 is selected by condition resolver.
Stage-2 is filtered out by condition resolver.
Stage-4 is filtered out by condition resolver.
Moving data to: hdfs://hadoop101:9000/user/MrZhou/student/.hive-staging_hive_2020-01-02_23-12-54_968_2823409006805237170-1/-ext-10000
Moving data to: /user/MrZhou/student
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.8 sec   HDFS Read: 3070 HDFS Write: 48 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 800 msec
OK
student.id	student.name	student.age
Time taken: 22.704 seconds
hive (default)>




3. Export to HDFS


hive (default)> export table student
              > to '/user/MrZhou/export';
              
Copying data from file:/tmp/root/12bc47b9-ed69-4c2e-95ea-2b1f311ac691/hive_2020-01-02_23-49-54_403_711547950453122729-1/-local-10000/_metadata
Copying file: file:/tmp/root/12bc47b9-ed69-4c2e-95ea-2b1f311ac691/hive_2020-01-02_23-49-54_403_711547950453122729-1/-local-10000/_metadata
Copying data from hdfs://hadoop101:9000/user/hive/warehouse/student
Copying file: hdfs://hadoop101:9000/user/hive/warehouse/student/student.txt
OK
Time taken: 0.933 seconds
hive (default)>




4. Import Data into a Specified Hive Table


hive (default)> import table student7 from 
              >  '/user/MrZhou/export';
              
Copying data from hdfs://hadoop101:9000/user/MrZhou/export/data
Copying file: hdfs://hadoop101:9000/user/MrZhou/export/data/student.txt
Loading data to table default.student7
OK
Time taken: 0.655 seconds
hive (default)>


Note: student7 is a brand-new table name that had never been created; import creates it.




5. Truncating Table Data (Truncate)

Note: Truncate only works on managed (internal) tables; it cannot delete data in external tables.

Steps:

(1) First check whether the student5 table has data


hive (default)> select * from student5;

OK
student5.id	student5.name
1001	zhangshan
1002	lishi
1003	zhaoliu
1001	zhangshan
1002	lishi
1003	zhaoliu
Time taken: 0.114 seconds, Fetched: 6 row(s)
hive (default)>


(2) It has data; now truncate the table:


hive (default)>  truncate table student5;

OK
Time taken: 0.147 seconds
hive (default)>


(3) Check whether student5 still has data


hive (default)> select * from student5;
OK
student5.id	student5.name
Time taken: 0.107 seconds
hive (default)> 

//Clearly, the data is gone
//Next, list the current tables
hive (default)> show tables;

OK
tab_name
db_hive1
dept
dept_partition2
emp
hive_test
sqoop_test
stu2
stu_partition
student
student1
student2
student3
student4
student5
student6
student7
values__tmp__table__1
values__tmp__table__2
Time taken: 0.049 seconds, Fetched: 18 row(s)
hive (default)>


(4) If we try to truncate the dept table, we get an error, because **dept is an external table**


hive (default)>  truncate table dept;
FAILED: SemanticException [Error 10146]: Cannot truncate non-managed table dept.


External tables (EXTERNAL_TABLE) cannot be truncated.

(5) Check whether a table is managed or external


hive (default)> desc formatted dept;

Table Type:         	EXTERNAL_TABLE



II. Hive Data Operations: Queries (Key Topic)

A simple query test first: truncate the dept table. Truncation requires a managed table; external tables cannot be truncated.

(1) Truncation fails, which tells us it is an external table


hive (default)> truncate table dept;

FAILED: SemanticException [Error 10146]: Cannot truncate non-managed table dept.
hive (default)>


(2) Convert dept from an external table to a managed table:


hive (default)> alter table dept set tblproperties('EXTERNAL'='FALSE');

OK
Time taken: 0.635 seconds
hive (default)>


(3) Now truncating dept succeeds:


hive (default)> truncate table dept;
OK
Time taken: 0.325 seconds
hive (default)>


(4) After truncation, check whether dept still has data


hive (default)> select * from dept;
OK
dept.deptno	dept.dname	dept.loc
Time taken: 0.707 seconds
hive (default)> 

//dept is now empty


(5) Now we can load some data into dept


hive (default)>  load data local inpath '/usr/local/hadoop/module/datas/dept.txt' into table dept;
Loading data to table default.dept
Table default.dept stats: [numFiles=1, numRows=0, totalSize=69, rawDataSize=0]
OK
Time taken: 2.245 seconds
hive (default)>


(6) After the load, query dept again


hive (default)> select * from dept;
OK
dept.deptno	dept.dname	dept.loc
10	ACCOUNTING	1700
20	RESEARCH	1800
30	SALES	1900
40	OPERATIONS	1700
Time taken: 0.142 seconds, Fetched: 4 row(s)
hive (default)> 

//The data is there again


1. Queries

Official docs: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select




Query syntax:


[WITH CommonTableExpression (, CommonTableExpression)*]    (Note: Only available starting with Hive 0.13.0)
SELECT [ALL | DISTINCT] select_expr, select_expr, ...
  FROM table_reference
  [WHERE where_condition]
  [GROUP BY col_list]
  [ORDER BY col_list]
  [CLUSTER BY col_list
    | [DISTRIBUTE BY col_list] [SORT BY col_list]
  ]
 [LIMIT number]


Basic queries (Select…From)

(1) Full-table and specific-column queries


//Full-table query
hive (default)> select * from emp;

//Query specific columns
hive (default)> select empno, ename from emp;


Notes:

  • SQL is case-insensitive.
  • SQL can be written on one line or across several lines.
  • Keywords cannot be abbreviated or split across lines.
  • Each clause should generally go on its own line.
  • Use indentation to improve readability.

(2) Column aliases

  • Rename a column
  • Convenient for calculations
  • The alias directly follows the column name; the keyword AS may optionally be placed between them

Hands-on example


//Query name and department
hive (default)> select ename AS name, deptno dn from emp;


(3) Arithmetic operators


[Figure: arithmetic operator table]

  • Display every employee's salary plus 1 (sal_1 is the column alias)
hive (default)> select  sal +1 sal_1 from emp;

OK
sal_1
801.0
1601.0
1251.0
2976.0
1251.0
2851.0
2451.0
3001.0
5001.0
1501.0
1101.0
951.0
3001.0
1301.0
Time taken: 0.284 seconds, Fetched: 14 row(s)
hive (default)>


(4) Common functions

  • Total row count (count)
hive (default)>  select count(*) from emp;
.....

OK
_c0
14
Time taken: 45.924 seconds, Fetched: 1 row(s)
hive (default)>


Worth knowing (a common interview question): what is the difference between count(1), count(*), and count(column)? A quick illustration follows.
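
A quick way to see the difference on emp, whose comm column contains NULLs: count(*) and count(1) count every row, while count(column) skips NULLs. A sketch (the expected values follow from the emp data shown later in this article):


-- count(*) and count(1) count rows; count(comm) counts non-NULL comm values
select count(*), count(1), count(comm) from emp;
-- expected: 14, 14, 4 (only four rows have a non-NULL comm)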

  • Maximum salary (max)
hive (default)> select max(sal)  from emp;
....

OK
_c0
5000.0
Time taken: 38.153 seconds, Fetched: 1 row(s)
hive (default)>


  • Minimum salary (min)
hive (default)> select min(sal)  from emp;
....

OK
_c0
800.0
Time taken: 33.707 seconds, Fetched: 1 row(s)
hive (default)>


  • Average salary (avg)
hive (default)> select avg(sal)  from emp;
....

OK
_c0
2073.214285714286
Time taken: 32.588 seconds, Fetched: 1 row(s)
hive (default)>


  • Total salary (sum)
hive (default)> select sum(sal)  from emp;
......

OK
_c0
29025.0
Time taken: 37.091 seconds, Fetched: 1 row(s)
hive (default)>


(5) The Limit clause

A typical query returns many rows; the LIMIT clause restricts the number of rows returned.


hive (default)> select * from emp limit 5;

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
Time taken: 0.137 seconds, Fetched: 5 row(s)
hive (default)>


2. The Where Clause

  • Use the WHERE clause to filter out rows that don't satisfy the condition
  • The WHERE clause immediately follows the FROM clause

Hands-on example

(1) Find all employees with a salary greater than 2000


hive (default)> select * from emp where sal >2000;

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
Time taken: 0.315 seconds, Fetched: 6 row(s)
hive (default)>


3. Comparison Operators (Between / In / Is Null)

The tables below describe the predicate operators; these can also be used in JOIN…ON and HAVING clauses.


[Figure: comparison operator tables]

Hands-on examples

(1) Find all employees whose salary equals 5000


hive (default)> select * from emp where sal =5000;
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
Time taken: 0.151 seconds, Fetched: 1 row(s)
hive (default)>


(2) Find employees with a salary between 500 and 1000


hive (default)> select * from emp where sal between 500 and 1000;
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
Time taken: 0.128 seconds, Fetched: 2 row(s)
hive (default)>


(3) Find all employees whose comm is null


hive (default)> select * from emp where comm is null;

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
Time taken: 0.124 seconds, Fetched: 10 row(s)
hive (default)>


(4) Find employees whose salary is 1500 or 5000


hive (default)> select * from emp where sal IN (1500, 5000);
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
Time taken: 0.107 seconds, Fetched: 2 row(s)
hive (default)>


(5) Find salaries that start with 2


hive (default)> select * from emp where sal like '2%';

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
Time taken: 0.11 seconds, Fetched: 3 row(s)
hive (default)>


(6) Find salaries whose second digit is 2


hive (default)> select * from emp where sal like '_2%';

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
Time taken: 0.101 seconds, Fetched: 2 row(s)
hive (default)>


(7) Find salaries containing the digit 2, using a regular expression


hive (default)> select sal from emp where sal rlike '[2]';

OK
sal
1250.0
2975.0
1250.0
2850.0
2450.0
Time taken: 0.143 seconds, Fetched: 5 row(s)
hive (default)>


4. Logical Operators (And / Or / Not)


[Figure: logical operators]

Hands-on examples

(1) Salary greater than 1000 and department 30


hive (default)> select * from emp where sal>1000 and deptno=30;

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
Time taken: 0.091 seconds, Fetched: 5 row(s)
hive (default)>


(2) Salary greater than 1000, or department 30


hive (default)> select * from emp where sal>1000 or deptno=30;

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
Time taken: 0.127 seconds, Fetched: 13 row(s)
hive (default)>


(3) Employees outside departments 20 and 30


hive (default)>  select * from emp where deptno not IN(30, 20);

OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
Time taken: 0.101 seconds, Fetched: 3 row(s)
hive (default)>


5. Grouping

The Group By clause

GROUP BY is usually paired with aggregate functions: rows are grouped by one or more columns, and the aggregation is then applied to each group.

Hands-on examples

(1) Compute the average salary of each department in emp


hive (default)> select avg(sal)avg_sal from emp
              > group by deptno;
.......

OK
avg_sal
2916.6666666666665
2175.0
1566.6666666666667
Time taken: 34.524 seconds, Fetched: 3 row(s)
hive (default)>


(2) Compute the average salary for each job within each department in emp (to get the highest salary instead, replace avg(sal) with max(sal))


hive (default)> select deptno,job,avg(sal)avg_sal from emp
              > group by deptno,job;
......
OK
deptno	job	avg_sal
10	CLERK	1300.0
10	MANAGER	2450.0
10	PRESIDENT	5000.0
20	ANALYST	3000.0
20	CLERK	950.0
20	MANAGER	2975.0
30	CLERK	950.0
30	MANAGER	2850.0
30	SALESMAN	1400.0
Time taken: 31.764 seconds, Fetched: 9 row(s)
hive (default)>


(3) Find the departments whose average salary is greater than 2000


hive (default)> select deptno,avg(sal)avg_sal from emp
              > group by deptno
              > having avg_sal > 2000;
......
OK
deptno	avg_sal
10	2916.6666666666665
20	2175.0
Time taken: 37.298 seconds, Fetched: 2 row(s)
hive (default)>


Differences between having and where

(1) where acts on the table's columns to select rows; having acts on columns of the query result to filter groups (a combined sketch follows this list).
(2) Aggregate functions cannot appear after where, but they can appear after having.
(3) having is only used with group by statements.
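
A combined sketch, as referenced above: where filters individual rows before grouping, and having filters the aggregated groups afterwards.


select deptno, avg(sal) avg_sal
from emp
where sal > 1000        -- row filter, applied before grouping
group by deptno
having avg_sal > 2000;  -- group filter, applied to the aggregated value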

6. Join Statements

Equi-joins

Hive supports the usual SQL JOIN statements, but only equi-joins; non-equi join conditions are not supported.

Hands-on example

(1) Join the employee and department tables on equal department numbers, returning employee number, employee name, and department name:


hive (default)> select e.empno, e.ename,d.dname
              > from emp e join dept d
              > on e.deptno=d.deptno;
.....
OK
e.empno	e.ename	d.dname
7369	SMITH	RESEARCH
7499	ALLEN	SALES
7521	WARD	SALES
7566	JONES	RESEARCH
7654	MARTIN	SALES
7698	BLAKE	SALES
7782	CLARK	ACCOUNTING
7788	SCOTT	RESEARCH
7839	KING	ACCOUNTING
7844	TURNER	SALES
7876	ADAMS	RESEARCH
7900	JAMES	SALES
7902	FORD	RESEARCH
7934	MILLER	ACCOUNTING
Time taken: 36.229 seconds, Fetched: 14 row(s)
hive (default)>


7. Table Aliases

Benefits:

  • Aliases simplify queries.
  • Prefixing columns with the table alias improves execution efficiency.

(1) Join the employee and department tables


hive (default)>  select e.empno, e.ename, d.deptno from emp
              >  e join dept d on e.deptno
              >  = d.deptno;

....
OK
e.empno	e.ename	d.deptno
7369	SMITH	20
7499	ALLEN	30
7521	WARD	30
7566	JONES	20
7654	MARTIN	30
7698	BLAKE	30
7782	CLARK	10
7788	SCOTT	20
7839	KING	10
7844	TURNER	30
7876	ADAMS	20
7900	JAMES	30
7902	FORD	20
7934	MILLER	10
Time taken: 35.142 seconds, Fetched: 14 row(s)
hive (default)>


(2) Inner join

Inner join: only rows that have a match in both joined tables under the join condition are kept.


hive (default)> select e.empno, e.ename, d.deptno from emp e join dept d on e.deptno
              >  = d.deptno;


(3) Left outer join

Left outer join: all records from the table on the left of the JOIN operator that satisfy the WHERE clause are returned.


hive (default)>  select e.empno, e.ename, d.deptno from emp e left join dept d on e.deptno
              >  = d.deptno;
.....

OK
e.empno	e.ename	d.deptno
7369	SMITH	20
7499	ALLEN	30
7521	WARD	30
7566	JONES	20
7654	MARTIN	30
7698	BLAKE	30
7782	CLARK	10
7788	SCOTT	20
7839	KING	10
7844	TURNER	30
7876	ADAMS	20
7900	JAMES	30
7902	FORD	20
7934	MILLER	10
Time taken: 35.313 seconds, Fetched: 14 row(s)
hive (default)>


(4) Right outer join

Right outer join: all records from the table on the right of the JOIN operator that satisfy the WHERE clause are returned.


hive (default)> select e.empno, e.ename, d.deptno from emp e right join dept d on e.deptno
              >  = d.deptno;

....
OK
e.empno	e.ename	d.deptno
7782	CLARK	10
7839	KING	10
7934	MILLER	10
7369	SMITH	20
7566	JONES	20
7788	SCOTT	20
7876	ADAMS	20
7902	FORD	20
7499	ALLEN	30
7521	WARD	30
7654	MARTIN	30
7698	BLAKE	30
7844	TURNER	30
7900	JAMES	30
NULL	NULL	40
Time taken: 32.33 seconds, Fetched: 15 row(s)
hive (default)>


(5) Full outer join

Full outer join: returns all records from both tables that satisfy the WHERE condition; wherever a table has no matching value for a field, NULL is substituted.


hive (default)> select e.empno, e.ename, d.dname from emp e
              > full join dept d  
              > on e.deptno=d.deptno;
....
OK
e.empno	e.ename	d.dname
7934	MILLER	ACCOUNTING
7839	KING	ACCOUNTING
7782	CLARK	ACCOUNTING
7876	ADAMS	RESEARCH
7788	SCOTT	RESEARCH
7369	SMITH	RESEARCH
7566	JONES	RESEARCH
7902	FORD	RESEARCH
7844	TURNER	SALES
7499	ALLEN	SALES
7698	BLAKE	SALES
7654	MARTIN	SALES
7521	WARD	SALES
7900	JAMES	SALES
NULL	NULL	OPERATIONS
Time taken: 43.97 seconds, Fetched: 15 row(s)
hive (default)>


8. Multi-Table Joins

Note: joining n tables requires at least n-1 join conditions. For example, joining three tables requires at least two join conditions.

Steps:

(1) Prepare the data: create a location.txt file in the /usr/local/hadoop/module/datas directory


[root@hadoop101 datas]# pwd
/usr/local/hadoop/module/datas
[root@hadoop101 datas]# vim location.txt

1700	Beijing
1800	London
1900	Tokyo


(2) Create the location table


hive (default)> create table if not exists default.location(
              > loc int,
              > loc_name string
              > )
              > row format delimited fields terminated by '\t';
              
OK
Time taken: 0.648 seconds
hive (default)>


(3) Load the data


hive (default)> load data local inpath '/usr/local/hadoop/module/datas/location.txt' 
              > into table location;
              
Loading data to table default.location
Table default.location stats: [numFiles=1, totalSize=36]
OK
Time taken: 0.377 seconds
hive (default)>


(4) Check the data in the location table


hive (default)> select * from location;

OK
location.loc	location.loc_name
1700	Beijing
1800	London
1900	Tokyo
Time taken: 0.096 seconds, Fetched: 3 row(s)
hive (default)>


(5) Multi-table join query


hive (default)> SELECT e.ename, d.deptno, l.loc_name
              > FROM   emp e 
              > JOIN   dept d
              > ON     d.deptno = e.deptno 
              > JOIN   location l
              > ON     d.loc = l.loc;


......
OK
e.ename	d.deptno	l.loc_name
SMITH	20	London
ALLEN	30	Tokyo
WARD	30	Tokyo
JONES	20	London
MARTIN	30	Tokyo
BLAKE	30	Tokyo
CLARK	10	Beijing
SCOTT	20	London
KING	10	Beijing
TURNER	30	Tokyo
ADAMS	20	London
JAMES	30	Tokyo
FORD	20	London
MILLER	10	Beijing
Time taken: 38.904 seconds, Fetched: 14 row(s)
hive (default)>


In most cases, Hive launches one MapReduce job for each pair of JOIN operands. Here it first starts a MapReduce job to join table e with table d, then starts a second MapReduce job to join the output of the first job with table l.

Note: why aren't tables d and l joined first? Because Hive always executes joins in left-to-right order.
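
By default the last table in each join stage is streamed through the reducers while the earlier ones are buffered; Hive's STREAMTABLE hint lets you choose a different table to stream. A sketch (streaming emp here is purely illustrative):


SELECT /*+ STREAMTABLE(e) */ e.ename, d.deptno, l.loc_name
FROM   emp e
JOIN   dept d ON d.deptno = e.deptno
JOIN   location l ON d.loc = l.loc;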

9. Cartesian Product

A Cartesian product is produced when:

  • The join condition is omitted
  • The join condition is invalid
  • Every row of every table is joined with every row of the others

Hands-on example


hive (default)> select empno, dname from emp, dept;
.....

OK
empno	dname
7369	ACCOUNTING
7369	RESEARCH
7369	SALES
7369	OPERATIONS
7499	ACCOUNTING
7499	RESEARCH
7499	SALES
7499	OPERATIONS
7521	ACCOUNTING
7521	RESEARCH
7521	SALES
7521	OPERATIONS
7566	ACCOUNTING
7566	RESEARCH
7566	SALES
7566	OPERATIONS
7654	ACCOUNTING
7654	RESEARCH
7654	SALES
7654	OPERATIONS
7698	ACCOUNTING
7698	RESEARCH
7698	SALES
7698	OPERATIONS
7782	ACCOUNTING
7782	RESEARCH
7782	SALES
7782	OPERATIONS
7788	ACCOUNTING
7788	RESEARCH
7788	SALES
7788	OPERATIONS
7839	ACCOUNTING
7839	RESEARCH
7839	SALES
7839	OPERATIONS
7844	ACCOUNTING
7844	RESEARCH
7844	SALES
7844	OPERATIONS
7876	ACCOUNTING
7876	RESEARCH
7876	SALES
7876	OPERATIONS
7900	ACCOUNTING
7900	RESEARCH
7900	SALES
7900	OPERATIONS
7902	ACCOUNTING
7902	RESEARCH
7902	SALES
7902	OPERATIONS
7934	ACCOUNTING
7934	RESEARCH
7934	SALES
7934	OPERATIONS
Time taken: 35.072 seconds, Fetched: 56 row(s)
hive (default)> 



OR is not supported in a join predicate


hive (default)> select e.empno, e.ename, d.deptno from emp e join dept d on e.deptno
= d.deptno or e.ename=d.ename;   -- invalid
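
On versions that reject this, one workaround is to split the disjunction into separate equi-joins and combine the results. A sketch (the second join condition is purely illustrative, since dept has no ename column; union all keeps rows matched by both branches, so deduplicate if that matters):


select * from (
  select e.empno, e.ename, d.deptno
  from emp e join dept d on e.deptno = d.deptno
  union all
  select e.empno, e.ename, d.deptno
  from emp e join dept d on e.empno = d.deptno
) t;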


10. Sorting

Order By: global ordering, with a single Reducer

Sorting with the ORDER BY clause

  • ASC (ascend): ascending order (the default)
  • DESC (descend): descending order
  • The ORDER BY clause goes at the end of the SELECT statement

Hands-on examples

(1) List employees ordered by salary, ascending


hive (default)> select * from emp order by sal;


...
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
Time taken: 35.444 seconds, Fetched: 14 row(s)
hive (default)>


(2) List employees ordered by salary, descending


hive (default)> select * from emp order by sal desc;

...
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
Time taken: 29.34 seconds, Fetched: 14 row(s)
hive (default)>


Ordering by an alias

Order by twice each employee's salary


hive (default)> select ename, sal*2 twosal
              > from emp order by twosal;

...
OK
ename	twosal
SMITH	1600.0
JAMES	1900.0
ADAMS	2200.0
WARD	2500.0
MARTIN	2500.0
MILLER	2600.0
TURNER	3000.0
ALLEN	3200.0
CLARK	4900.0
BLAKE	5700.0
JONES	5950.0
SCOTT	6000.0
FORD	6000.0
KING	10000.0
Time taken: 38.358 seconds, Fetched: 14 row(s)
hive (default)> 

// Each twosal value is the salary doubled (e.g. SMITH: 800 x 2 = 1600), and the rows are ordered by that alias


Ordering by multiple columns

Order by department and salary, ascending


hive (default)> select ename, deptno, sal 
              > from emp order
              > by deptno, sal;
...
OK
ename	deptno	sal
MILLER	10	1300.0
CLARK	10	2450.0
KING	10	5000.0
SMITH	20	800.0
ADAMS	20	1100.0
JONES	20	2975.0
SCOTT	20	3000.0
FORD	20	3000.0
JAMES	30	950.0
MARTIN	30	1250.0
WARD	30	1250.0
TURNER	30	1500.0
ALLEN	30	1600.0
BLAKE	30	2850.0
Time taken: 39.124 seconds, Fetched: 14 row(s)
hive (default)>


Per-Reducer sorting (Sort By)

Sort By: sorts within each Reducer; the global result set is not sorted.

(1) Check the configured number of reducers (-1 means Hive decides the number automatically)


hive (default)> set mapreduce.job.reduces;

mapreduce.job.reduces=-1
hive (default)>


(2) Set the number of reducers


hive (default)> set mapreduce.job.reduces=3;
hive (default)>


(3) View employees sorted by employee number (empno), descending, within each reducer


hive (default)> select * from emp sort by empno desc;

.......
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
Time taken: 53.566 seconds, Fetched: 14 row(s)
hive (default)>


(4) Write the query result to files (sorted by salary within each reducer)


hive (default)> insert overwrite local directory '/usr/local/hadoop/module/datas/sortby-result'
              >  select * from emp sort by sal;
.......
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
Time taken: 57.465 seconds
hive (default)>


Once the command finishes, the /usr/local/hadoop/module/datas directory contains a new sortby-result directory


[root@hadoop101 datas]# pwd
/usr/local/hadoop/module/datas
[root@hadoop101 datas]# ll
total 28
-rw-r--r-- 1 root root  69 Dec 31 01:59 dept.txt
-rw-r--r-- 1 root root 657 Dec 31 02:07 emp.txt
-rw-r--r-- 1 root root  23 Dec 30 02:42 hivef.sql
-rw-r--r-- 1 root root  54 Dec 30 02:49 hive_result.txt
-rw-r--r-- 1 root root  36 Jan  3 16:25 location.txt
drwxr-xr-x 3 root root 189 Jan  3 17:48 sortby-result
drwxr-xr-x 3 root root 115 Jan  2 23:01 stu1
-rw-r--r-- 1 root root  39 Dec 29 17:36 student.txt
-rw-r--r-- 1 root root 144 Dec 30 16:21 test.txt
[root@hadoop101 datas]# 

//View the contents of the sortby-result directory
[root@hadoop101 datas]# cd sortby-result/
[root@hadoop101 sortby-result]# ll
total 12
-rw-r--r-- 1 root root 288 Jan  3 17:48 000000_0
-rw-r--r-- 1 root root 282 Jan  3 17:48 000001_0
-rw-r--r-- 1 root root  91 Jan  3 17:48 000002_0
[root@hadoop101 sortby-result]# 

//View the contents of 000000_0, 000001_0, and 000002_0
[root@hadoop101 sortby-result]# cat 000000_0 

7654MARTINSALESMAN76981981-9-281250.01400.030
7844TURNERSALESMAN76981981-9-81500.00.030
7782CLARKMANAGER78391981-6-92450.0\N10
7698BLAKEMANAGER78391981-5-12850.0\N30
7788SCOTTANALYST75661987-4-193000.0\N20
7839KINGPRESIDENT\N1981-11-175000.0\N10


[root@hadoop101 sortby-result]# cat 000001_0 

7900JAMESCLERK76981981-12-3950.0\N30
7876ADAMSCLERK77881987-5-231100.0\N20
7521WARDSALESMAN76981981-2-221250.0500.030
7934MILLERCLERK77821982-1-231300.0\N10
7499ALLENSALESMAN76981981-2-201600.0300.030
7566JONESMANAGER78391981-4-22975.0\N20


[root@hadoop101 sortby-result]# cat 000002_0 

7369SMITHCLERK79021980-12-17800.0\N20
7902FORDANALYST75661981-12-33000.0\N20
[root@hadoop101 sortby-result]#


Next, run the same export but with order by; the /usr/local/hadoop/module/datas directory then gains an order-result directory


hive (default)> insert overwrite local directory '/usr/local/hadoop/module/datas/order-result'
              >  select * from emp order by sal;

....
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
Time taken: 36.627 seconds
hive (default)> 
[root@hadoop101 datas]# ll
total 28
-rw-r--r-- 1 root root  69 Dec 31 01:59 dept.txt
-rw-r--r-- 1 root root 657 Dec 31 02:07 emp.txt
-rw-r--r-- 1 root root  23 Dec 30 02:42 hivef.sql
-rw-r--r-- 1 root root  54 Dec 30 02:49 hive_result.txt
-rw-r--r-- 1 root root  36 Jan  3 16:25 location.txt
drwxr-xr-x 3 root root 115 Jan  3 17:57 order-result
drwxr-xr-x 3 root root 189 Jan  3 17:48 sortby-result
drwxr-xr-x 3 root root 115 Jan  2 23:01 stu1
-rw-r--r-- 1 root root  39 Dec 29 17:36 student.txt
-rw-r--r-- 1 root root 144 Dec 30 16:21 test.txt
[root@hadoop101 datas]#


Job execution:

[Figure: MapReduce job progress]


View the data under the order-result directory


[root@hadoop101 order-result]# ll
total 4
-rw-r--r-- 1 root root 661 Jan  3 17:57 000000_0
[root@hadoop101 order-result]# 
[root@hadoop101 order-result]# cat 000000_0 

7369SMITHCLERK79021980-12-17800.0\N20
7900JAMESCLERK76981981-12-3950.0\N30
7876ADAMSCLERK77881987-5-231100.0\N20
7521WARDSALESMAN76981981-2-221250.0500.030
7654MARTINSALESMAN76981981-9-281250.01400.030
7934MILLERCLERK77821982-1-231300.0\N10
7844TURNERSALESMAN76981981-9-81500.00.030
7499ALLENSALESMAN76981981-2-201600.0300.030
7782CLARKMANAGER78391981-6-92450.0\N10
7698BLAKEMANAGER78391981-5-12850.0\N30
7566JONESMANAGER78391981-4-22975.0\N20
7788SCOTTANALYST75661987-4-193000.0\N20
7902FORDANALYST75661981-12-33000.0\N20
7839KINGPRESIDENT\N1981-11-175000.0\N10
[root@hadoop101 order-result]#


11. Partitioned Sorting (Distribute By)

Distribute By: analogous to a partitioner in MapReduce; it partitions the data and is used together with sort by.

Note that Hive requires the DISTRIBUTE BY clause to appear before SORT BY.
To test distribute by, you must allocate multiple reducers; otherwise its effect cannot be observed.

Hands-on example:

(1) Partition by department number, then sort by salary within each partition.


hive (default)> insert overwrite local directory '/usr/local/hadoop/module/datas/distribute-result'
              > select * from emp distribute by deptno sort  by sal;

....
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
Time taken: 55.914 seconds
hive (default)> 

//The /usr/local/hadoop/module/datas directory now contains a distribute-result directory
[root@hadoop101 datas]# ll
total 28
-rw-r--r-- 1 root root  69 Dec 31 01:59 dept.txt
drwxr-xr-x 3 root root 189 Jan  3 18:18 distribute-result
-rw-r--r-- 1 root root 657 Dec 31 02:07 emp.txt
-rw-r--r-- 1 root root  23 Dec 30 02:42 hivef.sql
-rw-r--r-- 1 root root  54 Dec 30 02:49 hive_result.txt
-rw-r--r-- 1 root root  36 Jan  3 16:25 location.txt
drwxr-xr-x 3 root root 115 Jan  3 17:57 order-result
drwxr-xr-x 3 root root 189 Jan  3 17:48 sortby-result
drwxr-xr-x 3 root root 115 Jan  2 23:01 stu1
-rw-r--r-- 1 root root  39 Dec 29 17:36 student.txt
-rw-r--r-- 1 root root 144 Dec 30 16:21 test.txt
[root@hadoop101 datas]# 

//View the contents of the distribute-result directory
[root@hadoop101 datas]# cd distribute-result/
[root@hadoop101 distribute-result]# ll
total 12
-rw-r--r-- 1 root root 293 Jan  3 18:18 000000_0
-rw-r--r-- 1 root root 139 Jan  3 18:18 000001_0
-rw-r--r-- 1 root root 229 Jan  3 18:18 000002_0
[root@hadoop101 distribute-result]# 

[root@hadoop101 distribute-result]# cat 000000_0 

7900JAMESCLERK76981981-12-3950.0\N30
7521WARDSALESMAN76981981-2-221250.0500.030
7654MARTINSALESMAN76981981-9-281250.01400.030
7844TURNERSALESMAN76981981-9-81500.00.030
7499ALLENSALESMAN76981981-2-201600.0300.030
7698BLAKEMANAGER78391981-5-12850.0\N30
[root@hadoop101 distribute-result]# 

[root@hadoop101 distribute-result]# cat 000001_0 

7934MILLERCLERK77821982-1-231300.0\N10
7782CLARKMANAGER78391981-6-92450.0\N10
7839KINGPRESIDENT\N1981-11-175000.0\N10
[root@hadoop101 distribute-result]# 

[root@hadoop101 distribute-result]# cat 000002_0 

7369SMITHCLERK79021980-12-17800.0\N20
7876ADAMSCLERK77881987-5-231100.0\N20
7566JONESMANAGER78391981-4-22975.0\N20
7788SCOTTANALYST75661987-4-193000.0\N20
7902FORDANALYST75661981-12-33000.0\N20
[root@hadoop101 distribute-result]#


For the partitioning to show up, multiple reducers must be set first:


set mapreduce.job.reduces=3;


12. Cluster By

When the distribute by and sort by columns are the same, cluster by can be used instead.
cluster by combines the functionality of distribute by with that of sort by, but the ordering is ascending only; ASC or DESC cannot be specified.

(1) The following two forms are equivalent


hive (default)> select * from emp cluster by deptno;
hive (default)> select * from emp distribute by deptno sort by deptno;


Note: partitioning by department number does not pin each value to its own partition; departments 20 and 30 may end up in the same partition.

Run the partitioned query, producing a cluster-result directory, again with three files


hive (default)> insert overwrite local directory '/usr/local/hadoop/module/datas/cluster-result'
              > select * from emp cluster by deptno;
....
OK
emp.empno	emp.ename	emp.job	emp.mgr	emp.hiredate	emp.sal	emp.comm	emp.deptno
Time taken: 56.335 seconds
hive (default)>




[root@hadoop101 datas]# cd cluster-result/
[root@hadoop101 cluster-result]# ll
total 12
-rw-r--r-- 1 root root 293 Jan  3 18:48 000000_0
-rw-r--r-- 1 root root 139 Jan  3 18:48 000001_0
-rw-r--r-- 1 root root 229 Jan  3 18:48 000002_0
[root@hadoop101 cluster-result]#


The data inside is grouped and sorted by department


[root@hadoop101 cluster-result]# cat 000000_0 

7654MARTINSALESMAN76981981-9-281250.01400.030
7900JAMESCLERK76981981-12-3950.0\N30
7698BLAKEMANAGER78391981-5-12850.0\N30
7521WARDSALESMAN76981981-2-221250.0500.030
7844TURNERSALESMAN76981981-9-81500.00.030
7499ALLENSALESMAN76981981-2-201600.0300.030


[root@hadoop101 cluster-result]# cat 000001_0 

7934MILLERCLERK77821982-1-231300.0\N10
7839KINGPRESIDENT\N1981-11-175000.0\N10
7782CLARKMANAGER78391981-6-92450.0\N10


[root@hadoop101 cluster-result]# cat 000002_0 

7788SCOTTANALYST75661987-4-193000.0\N20
7566JONESMANAGER78391981-4-22975.0\N20
7876ADAMSCLERK77881987-5-231100.0\N20
7902FORDANALYST75661981-12-33000.0\N20
7369SMITHCLERK79021980-12-17800.0\N20
[root@hadoop101 cluster-result]#


13. Bucketing and Sampling Queries

Bucketed table data storage

Partitioning works on the data's storage path; bucketing works on the data files themselves.
Partitioning provides a convenient way to isolate data and optimize queries, but not every dataset can be partitioned sensibly, especially given the earlier concern about choosing a reasonable partition granularity.
Bucketing is another technique for breaking a dataset into more manageable pieces.

Steps: first create a bucketed table and load a data file into it directly

(1) Create the bucketed table


hive (default)> create table stu_buck(id int, name string)
              > clustered by(id)
              > into 4 buckets
              > row format delimited fields terminated by '\t';
              
OK
Time taken: 1.883 seconds
hive (default)>


Check HDFS: the table directory has been created


[Figure: bucketed table directory in HDFS]

The newly created table is empty; next we need to load data into it

(2) Prepare the data: create a stu_buck.txt file in the /usr/local/hadoop/module/datas directory:


[root@hadoop101 hadoop-2.7.2]# cd /usr/local/hadoop/module/datas/
[root@hadoop101 datas]# vim  stu_buck.txt

1001	ss1
1002	ss2
1003	ss3
1004	ss4
1005	ss5
1006	ss6
1007	ss7
1008	ss8
1009	ss9
1010	ss10
1011	ss11
1012	ss12
1013	ss13
1014	ss14
1015	ss15
1016	ss16


(3) Load the data into the bucketed table


hive (default)> load data local inpath '/usr/local/hadoop/module/datas/stu_buck.txt' 
              > into table stu_buck;
              
Loading data to table default.stu_buck
Table default.stu_buck stats: [numFiles=1, totalSize=151]
OK
Time taken: 1.727 seconds
hive (default)> 

//Verify: query stu_buck
hive (default)> select * from stu_buck;

OK
stu_buck.id	stu_buck.name
1001	ss1
1002	ss2
1003	ss3
1004	ss4
1005	ss5
1006	ss6
1007	ss7
1008	ss8
1009	ss9
1010	ss10
1011	ss11
1012	ss12
1013	ss13
1014	ss14
1015	ss15
1016	ss16
Time taken: 0.621 seconds, Fetched: 16 row(s)
hive (default)>


As shown above, stu_buck has data; the load succeeded

Checking the HDFS file system, the data is there:




We can also use another approach:

First, truncate the stu_buck table


hive (default)> truncate table stu_buck;
OK
Time taken: 0.396 seconds
hive (default)>

//Check whether stu_buck still has data
hive (default)> select * from stu_buck;
OK
stu_buck.id	stu_buck.name
Time taken: 0.135 seconds
hive (default)>




The stu_buck table has been emptied.

Populating the bucketed table through a subquery:

(1) First create an ordinary stu table


hive (default)> create table stu(id int, name string)
              > row format delimited fields terminated by '\t';
              
OK
Time taken: 0.187 seconds
hive (default)>


(2) Load data into the ordinary stu table


hive (default)> load data local inpath '/usr/local/hadoop/module/datas/stu_buck.txt' 
              > into table stu;
              
Loading data to table default.stu
Table default.stu stats: [numFiles=1, totalSize=151]
OK
Time taken: 0.502 seconds
hive (default)> 

//Verify: query stu
hive (default)> select * from stu;

OK
stu.id	stu.name
1001	ss1
1002	ss2
1003	ss3
1004	ss4
1005	ss5
1006	ss6
1007	ss7
1008	ss8
1009	ss9
1010	ss10
1011	ss11
1012	ss12
1013	ss13
1014	ss14
1015	ss15
1016	ss16
Time taken: 0.129 seconds, Fetched: 16 row(s)
hive (default)>


(3) Insert the data into the bucketed table


hive (default)> insert into table stu_buck
              > select * from stu;
.........
.........
OK
stu.id	stu.name
Time taken: 39.359 seconds
hive (default)>


But there is still only one bucket file




(4) A property needs to be set


hive (default)>  set hive.enforce.bucketing;

hive.enforce.bucketing=false
hive (default)>


As shown, hive.enforce.bucketing is false; change it to true. (On Hive 2.x this setting was removed and bucketing is always enforced; the step below applies to Hive 1.x.)


hive (default)> set hive.enforce.bucketing=true;
hive (default)>  set mapreduce.job.reduces=-1;
hive (default)>


(5) Next we insert into the table again. Note: before inserting, the table's existing data needs to be cleared. Truncate stu_buck:


hive (default)> truncate table stu_buck;

OK
Time taken: 0.213 seconds
hive (default)>


(6) Insert data into stu_buck:


hive (default)> insert into table stu_buck
              > select * from stu;

........
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 4
//4 reducers, producing 4 bucket files
........
OK
stu.id	stu.name
Time taken: 79.077 seconds
hive (default)> 

//Verify: query stu_buck
hive (default)> select * from stu_buck;
OK
stu_buck.id	stu_buck.name
1016	ss16
1012	ss12
1008	ss8
1004	ss4
1009	ss9
1005	ss5
1001	ss1
1013	ss13
1010	ss10
1002	ss2
1006	ss6
1014	ss14
1003	ss3
1011	ss11
1007	ss7
1015	ss15
Time taken: 0.201 seconds, Fetched: 16 row(s)
hive (default)>


As shown, the ordering of the query result has changed: on HDFS the data now sits in four bucket files, and the query reads those files top to bottom, one after another. The bucket each row lands in is explained in the sketch below.
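
The assignment follows from how Hive buckets rows: each row goes to bucket hash(id) mod 4, and for an int column the hash is the value itself (an assumption that matches the output above: 1016 % 4 = 0, so 1016, 1012, 1008, and 1004 all land in the first file). A quick sanity check:


-- show which bucket each id falls into under hash(id) % 4
select id, id % 4 as bucket from stu_buck;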




14. Bucket Sampling Queries

For very large datasets, users sometimes need a representative sample of the results rather than the full result. Hive supports this by sampling the table.

Query sampled data from stu_buck


hive (default)> select * from stu_buck tablesample(bucket 1 out of 4 on id);

OK
stu_buck.id	stu_buck.name
1016	ss16
1012	ss12
1008	ss8
1004	ss4
Time taken: 0.305 seconds, Fetched: 4 row(s)
hive (default)>


Note: tablesample is the sampling clause.


Syntax: TABLESAMPLE(BUCKET x OUT OF y).


y must be a multiple or a factor of the table's total bucket count; Hive derives the sampling fraction from y. For example, with 4 buckets in total, y=2 draws (4/2 =) 2 buckets' worth of data, and y=8 draws (4/8 =) 1/2 of one bucket.
x indicates which bucket to start drawing from; when several buckets are taken, each subsequent bucket number is the previous one plus y.
For example, with 4 buckets in total, tablesample(bucket 1 out of 2) draws (4/2 =) 2 buckets of data: the 1st (x) and the 3rd (x+y). A concrete sketch follows.
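
Applied to stu_buck, that second example looks like the sketch below; with 4 buckets and y=2 it should return buckets 1 and 3, roughly 8 of the 16 rows:


select * from stu_buck tablesample(bucket 1 out of 2 on id);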

Note: the value of x must be less than or equal to y; otherwise:


FAILED: SemanticException [Error 10061]: Numerator should not be bigger than denominator in sample clause for table stu_buck