This article explains Hive's complex data structures and how the related functions are used.
The complex data types currently supported by Hive are the following:
map(key1, value1, key2, value2, ...): creates a map with the given key/value pairs
struct(val1, val2, val3, ...): creates a struct with the given field values; struct field names will be col1, col2, ...
named_struct(name1, val1, name2, val2, ...): creates a struct with the given field names and values (as of Hive 0.8.0)
array(val1, val2, ...): creates an array with the given elements
create_union(tag, val1, val2, ...): creates a union type with the value that is being pointed to by the tag parameter
I. Usage of map, struct and array
1. Using array
Create a table that uses array as a column type:
create table person(name string, work_locations array<string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ',';
Data to load (/home/hadoop/person.txt):
biansutao beijing,shanghai,tianjin,hangzhou
linan changchu,chengdu,wuhan
LOAD DATA LOCAL INPATH '/home/hadoop/person.txt' OVERWRITE INTO TABLE person;
hive> select * from person;
biansutao   ["beijing","shanghai","tianjin","hangzhou"]
linan   ["changchu","chengdu","wuhan"]
Time taken: 0.355 seconds
hive> select name from person;
linan
biansutao
Time taken: 12.397 seconds
hive> select work_locations[0] from person;
changchu
beijing
Time taken: 13.214 seconds
hive> select work_locations from person;
["changchu","chengdu","wuhan"]
["beijing","shanghai","tianjin","hangzhou"]
Time taken: 13.755 seconds
hive> select work_locations[3] from person;
hangzhou
Time taken: 12.722 seconds
hive> select work_locations[4] from person;
Time taken: 15.958 seconds
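As a side note (not from the original walkthrough): an out-of-range subscript such as work_locations[4] simply returns NULL rather than raising an error. If that matters, Hive's built-in size() function reports the element count; a minimal assumed sketch against the same person table:
-- number of elements per row
select name, size(work_locations) from person;
-- only return rows that actually have a fifth element
select name, work_locations[4] from person where size(work_locations) > 4;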
2. Using map
Create the table:
create table score(name string, score map<string,int>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY ':';
Data to load (/home/hadoop/score.txt):
biansutao   数学:80,语文:89,英语:95
jobs    语文:60,数学:80,英语:99
LOAD DATA LOCAL INPATH '/home/hadoop/score.txt' OVERWRITE INTO TABLE score;
hive> select * from score;
biansutao   {"数学":80,"语文":89,"英语":95}
jobs    {"语文":60,"数学":80,"英语":99}
Time taken: 0.665 seconds
hive> select name from score;
biansutao
Time taken: 19.778 seconds
hive> select t.score from score t;
{"语文":60,"数学":80,"英语":99}
{"数学":80,"语文":89,"英语":95}
Time taken: 19.353 seconds
hive> select t.score['语文'] from score t;
Time taken: 13.054 seconds
hive> select t.score['英语'] from score t;
Time taken: 13.769 seconds
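Besides looking up a single key with score['...'], Hive's built-in collection functions also work on map columns. A minimal assumed sketch against the score table above (not output from the original article):
select name, size(score) from score;        -- number of key/value pairs per row
select name, map_keys(score) from score;    -- the keys as an array, e.g. ["数学","语文","英语"]
select name, map_values(score) from score;  -- the values as an array, e.g. [80,89,95]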
3. Using struct
Create the table:
CREATE TABLE test(id int, course struct<course:string,score:int>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ',';
Data to load (/home/hadoop/test.txt):
1 english,80
2 math,89
3 chinese,95
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;
hive> select * from test;
1   {"course":"english","score":80}
2   {"course":"math","score":89}
3   {"course":"chinese","score":95}
Time taken: 0.275 seconds
hive> select course from test;
{"course":"english","score":80}
{"course":"math","score":89}
{"course":"chinese","score":95}
Time taken: 44.968 seconds
hive> select t.course.course from test t;
english
chinese
Time taken: 15.827 seconds
hive> select t.course.score from test t;
Time taken: 13.235 seconds
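Struct fields can be referenced with the same dotted syntax in a WHERE clause as well; a small assumed example against the test table above:
select id, t.course.course from test t where t.course.score >= 89;   -- matches the math and chinese rows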
4. Combining complex types (nesting one complex type inside another)
create table test1(id int, a MAP<STRING,ARRAY<STRING>>)
row format delimited fields terminated by '\t'
collection items terminated by ','
MAP KEYS TERMINATED BY ':';
Data to load (/home/hadoop/test1.txt):
1 english:80,90,70
2 math:89,78,86
3 chinese:99,100,82
LOAD DATA LOCAL INPATH '/home/hadoop/test1.txt' OVERWRITE INTO TABLE test1;
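A hedged sketch of querying the nested column (these queries are assumptions, not from the original article). The map value is declared as ARRAY<STRING>, so it can be indexed a second time; how the inner array is actually split depends on Hive's next-level collection delimiter, so verify the parsed result on your own data:
select id, a['english'] from test1 where id = 1;      -- look up one map key; the value is an array
select id, a['english'][0] from test1 where id = 1;   -- index into that array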
II. Some less common Hive functions and their usage
We will not spend time on the common functions, which work much like standard SQL; the ones discussed below are mostly specific to HQL.
Hive functions roughly fall into the following kinds: Built-in, Misc., UDF, UDTF, UDAF.
Here we pick a few that standard SQL does not have but that are frequently used for statistical analysis in Hive SQL.
1. array_contains (Collection Functions)
This is a built-in function that operates on collections. Example usage:
create EXTERNAL table IF NOT EXISTS userInfo (id int, sex string, age int, name string, email string, sd string, ed string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' location '/hive/dw';
select * from userinfo where sex='male' and (id!=1 and id!=2 and id!=3 and id!=4 and id!=5) and age > 30;
select * from (select * from userinfo where sex='male' and !array_contains(split('1,2,3,4,5', ','), cast(id as string))) tb1 where tb1.age > 30;
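array_contains(array, value) simply returns a boolean indicating whether the value occurs in the array, which is why it can replace the long chain of id != ... predicates above. A standalone assumed sketch with literal values:
select array_contains(array(1,2,3,4,5), 3);             -- true
select array_contains(split('1,2,3,4,5', ','), '6');    -- false; split() produces an array<string>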
The test data used to populate the table can be generated automatically with the script at the following link:
http://my.oschina.net/leejun2005/blog/76631
2. get_json_object (Misc. Functions)
Test data:
first   {"store":{"fruit":[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}   third
first   {"store":{"fruit":[{"weight":9,"type":"apple"},{"weight":91,"type":"pear"}],"bicycle":{"price":19.952,"color":"red2"}},"email":"amy@only_for_json_udf_test.net","owner":"amy2"}   third
first   {"store":{"fruit":[{"weight":10,"type":"apple"},{"weight":911,"type":"pear"}],"bicycle":{"price":19.953,"color":"red3"}},"email":"amy@only_for_json_udf_test.net","owner":"amy3"}   third
create external table if not exists t_json(f1 string, f2 string, f3 string) row format delimited fields TERMINATED BY '\t' location '/test/json';
select get_json_object(t_json.f2, '$.owner') from t_json;
SELECT * from t_json where get_json_object(t_json.f2, '$.store.fruit[0].weight') = 9;
SELECT get_json_object(t_json.f2, '$.non_exist_key') FROM t_json;
Pay particular attention here to how UDTFs behave; the official documentation explains it:
json_tuple
A new json_tuple() UDTF is introduced in hive 0.7. It takes a set of names (keys) and a JSON string, and returns a tuple of values using one function. This is much more efficient than calling GET_JSON_OBJECT to retrieve more than one key from a single JSON string. In any case where a single JSON string would be parsed more than once, your query will be more efficient if you parse it once, which is what JSON_TUPLE is for. As JSON_TUPLE is a UDTF, you will need to use the LATERAL VIEW syntax in order to achieve the same goal.
For example,
select a.timestamp, get_json_object(a.appevents, '$.eventid'), get_json_object(a.appevents, '$.eventname') from log a;
should be changed to
select a.timestamp, b.*
from log a lateral view json_tuple(a.appevents, 'eventid', 'eventname') b as f1, f2;
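Applied to the t_json table created above, that rewrite would look roughly as follows (an assumed example; the JSON string is parsed only once per row):
SELECT t.f1, b.owner, b.email
FROM t_json t
LATERAL VIEW json_tuple(t.f2, 'owner', 'email') b AS owner, email;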
UDTFs (User-Defined Table-Generating Functions) address the need to produce multiple output rows from a single input row (one-to-many mapping).
LATERAL VIEW makes it convenient to combine the row set produced by a UDTF with the rest of the query, because using a UDTF directly in SELECT is restricted: the SELECT clause can contain only that single expression. Not only can multiple UDTFs not be combined, even a single UDTF plus any other column is rejected; Hive reports that only a single expression is supported with UDTFs. For example:
hive> select my_test("abcef:aa") as qq, 'abcd' from sunwg01;
FAILED: Error in semantic analysis: Only a single expression in the SELECT clause is supported with UDTF's
Using LATERAL VIEW achieves what we wanted above. The LATERAL VIEW syntax is:
lateralView: LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*
fromClause: FROM baseTable (lateralView)*
hive> create table sunwg (a array<int>, b array<string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ',';
OK
Time taken: 1.145 seconds
hive> load data local inpath '/home/hjl/sunwg/sunwg.txt' overwrite into table sunwg;
Copying data from file:/home/hjl/sunwg/sunwg.txt
Loading data to table sunwg
OK
Time taken: 0.162 seconds
hive> select * from sunwg;
OK
[10,11]   ["tom","mary"]
[20,21]   ["kate","tim"]
Time taken: 0.069 seconds
hive> SELECT a, name
FROM sunwg LATERAL VIEW explode(b) r1 AS name;
OK
[10,11] tom
[10,11] mary
[20,21] kate
[20,21] tim
Time taken: 8.497 seconds
hive> SELECT id, name
FROM sunwg LATERAL VIEW explode(a) r1 AS id
LATERAL VIEW explode(b) r2 AS name;
OK
10 tom
10 mary
11 tom
11 mary
20 kate
20 tim
21 kate
21 tim
Time taken: 9.687 seconds
3. parse_url_tuple
Test data:
url1 http://facebook.com/path2/p.php?k1=v1&k2=v2#Ref1
url2 https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-getjsonobject
url3 https://www.google.com.hk/#hl=zh-CN&newwindow=1&safe=strict&q=hive+translate+example&oq=hive+translate+example&gs_l=serp.3...10174.11861.6.12051.8.8.0.0.0.0.132.883.0j7.7.0...0.0...1c.1j4.8.serp.0B9C1T_n0Hs&bav=on.2,or.&bvm=bv.44770516,d.aGc&fp=e13e41a6b9dab3f6&biw=1241&bih=589
create external table if not exists t_url(f1 string, f2 string) row format delimited fields TERMINATED BY '\t' location '/test/url';
SELECT f1, b.* FROM t_url LATERAL VIEW parse_url_tuple(f2, 'HOST', 'PATH', 'QUERY', 'QUERY:k1') b as host, path, query, query_id;
Result:
url1 facebook.com /path2/p.php k1=v1&k2=v2 v1
url2 cwiki.apache.org /confluence/display/Hive/LanguageManual+UDF NULL NULL
url3 www.google.com.hk / NULL NULL
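For comparison, the scalar built-in parse_url() extracts a single part per call, and parse_url_tuple is its UDTF counterpart that pulls out several parts at once. A small assumed example using the first test URL:
SELECT parse_url('http://facebook.com/path2/p.php?k1=v1&k2=v2#Ref1', 'HOST');          -- facebook.com
SELECT parse_url('http://facebook.com/path2/p.php?k1=v1&k2=v2#Ref1', 'QUERY', 'k1');   -- v1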
4. explode
explode is one of Hive's built-in table-generating functions (UDTFs). It mainly solves the 1-to-N problem: it splits one input row into multiple rows, for example turning every element of an array into its own row, which is emitted as a virtual table. It comes with the following restrictions:
Using the syntax SELECT udtf(col) AS colAlias... has a few limitations:
No other expressions are allowed in SELECT
SELECT pageid, explode(adid_list) AS myCol... is not supported
UDTFs can't be nested
SELECT explode(explode(adid_list)) AS myCol... is not supported
GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY is not supported
SELECT explode(adid_list) AS myCol ... GROUP BY myCol is not supported
From the principle and syntax above we can see that:
a UDTF cannot be mixed with other non-UDTF columns in the SELECT list,
UDTFs cannot be nested,
GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY are not supported,
and a UDTF used in SELECT must be given a column alias, otherwise an error is raised:
SELECT explode(myCol) AS myNewCol FROM myTable;
SELECT explode(myMap) AS (myMapKey, myMapValue) FROM myMapTable;
SELECT posexplode(myCol) AS pos, myNewCol FROM myTable;
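When aggregation over the exploded values is needed despite the GROUP BY restriction, the usual workaround is to explode inside a subquery (or via LATERAL VIEW, covered next) and aggregate in the outer query. A hedged sketch reusing the hypothetical myTable/myCol names from the examples above:
SELECT myNewCol, count(1) AS cnt
FROM (SELECT explode(myCol) AS myNewCol FROM myTable) t
GROUP BY myNewCol;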
5. lateral view
LATERAL VIEW is the construct Hive provides for use in conjunction with UDTFs; it solves the problem that no additional SELECT columns are allowed next to a UDTF. It is what you reach for when you want to split a column of a Hive table and turn one row into N rows. Hive will not let us add any other SELECT expression alongside the UDTF call.
For example, suppose the ids of all users who logged into a game are packed into a single field user_ids, and we want to apply a UDTF to each row so that it outputs one row per user:
select game_id, explode(split(user_ids, '\\[\\[\\[')) as user_id from login_game_log where dt='2014-05-15';
FAILED: Error in semantic analysis: UDTF's are not supported outside the SELECT clause, nor nested in expressions.
The semantic analysis error tells us that a UDTF cannot appear together with other SELECT expressions. What if we need exactly that? This is where LATERAL VIEW comes in.
LATERAL VIEW exists precisely to be used with UDTFs such as explode. It places the rows generated by the UDTF into a virtual table, and that virtual table (1 to N) is then joined back to the input row (each game_id here), which is how the columns outside the UDTF get attached: the source table and the exploded virtual table are joined row by row, 1 join N. This is also why LATERAL VIEW udtf(expression) must be followed by a table alias and column aliases.
Lateral View Syntax
lateralView: LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*
fromClause: FROM baseTable (lateralView)*
As the grammar shows, LATERAL VIEW appears in two places:
in front of the UDTF call,
and after the base table in the FROM clause.
For example:
pageid adid_list
front_page [1, 2, 3]
contact_page [3, 4, 5]
SELECT pageid, adid
FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid;
pageid adid
front_page 1
front_page 2
front_page 3
contact_page 3
contact_page 4
contact_page 5
A FROM clause can have multiple LATERAL VIEW clauses. Subsequent LATERAL VIEWS can reference columns from any of the tables appearing to the left of the LATERAL VIEW.
Given the data:
Array<int> col1    Array<string> col2
[1, 2]             [a, b, c]
[3, 4]             [d, e, f]
Desired output:
We want to explode the first column and the second column at the same time, similar to taking a Cartesian product.
We can write it like this:
SELECT myCol1, myCol2 FROM baseTable
LATERAL VIEW explode(col1) myTable1 AS myCol1
LATERAL VIEW explode(col2) myTable2 AS myCol2;
There is one more case: what if the array the UDTF receives is empty?
Hive 0.12 supports the OUTER keyword for this. By default, when the UDTF produces no rows, the input row is simply dropped from the output.
With the OUTER keyword, the behavior resembles a LEFT OUTER JOIN: the selected columns are still emitted, and the UDTF output columns are NULL.
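An assumed illustration against the pageAds example above: if a row had an empty adid_list, a plain LATERAL VIEW would drop that row entirely, while LATERAL VIEW OUTER keeps it and fills adid with NULL:
SELECT pageid, adid
FROM pageAds LATERAL VIEW OUTER explode(adid_list) adTable AS adid;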
To summarize: LATERAL VIEW usually appears together with a UDTF, to work around the rule that a UDTF cannot be combined with other SELECT columns.
Multiple LATERAL VIEWs can produce something like a Cartesian product.
The OUTER keyword turns otherwise-suppressed empty UDTF results into NULL output, so no rows are lost.
The above covers Hive's complex data structures and how the related functions are used.