Multi-dimensional statistics generally fall into two categories. Let's look at how to handle each of them in Hive:
1. Multi-dimensional combinations over the same attributes
(1) Problem:
Given the following data, whose fields are: url, catePath0, catePath1, catePath2, unitparams
https://cwiki.apache.org/confluence 0 1 8 {"store":{"fruit":[{"weight":1,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://my.oschina.net/leejun2005/blog/83058 0 1 23 {"store":{"fruit":[{"weight":1,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://www.hao123.com/indexnt.html?sto 0 1 25 {"store":{"fruit":[{"weight":1,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
https://cwiki.apache.org/confluence 0 5 18 {"store":{"fruit":[{"weight":5,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://my.oschina.net/leejun2005/blog/83058 0 5 118 {"store":{"fruit":[{"weight":5,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://www.hao123.com/indexnt.html?sto 0 3 98 {"store":{"fruit":[{"weight":3,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://www.hao123.com/indexnt.html?sto 0 3 8 {"store":{"fruit":[{"weight":3,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://my.oschina.net/leejun2005/blog/83058 0 5 81 {"store":{"fruit":[{"weight":5,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
http://www.hao123.com/indexnt.html?sto 0 9 8 {"store":{"fruit":[{"weight":9,"type":"apple"},{"weight":9,"type":"pear"}],"bicycle":{"price":19.951,"color":"red1"}},"email":"amy@only_for_json_udf_test.net","owner":"amy1"}
(2) Requirement:
For each combination of the three dimensions catePath0, catePath1, catePath2, compute the pv and uv of the corresponding urls, e.g.:
0 1 23 1 1
0 1 25 1 1
0 1 8 1 1
0 1 ALL 3 3
0 3 8 1 1
0 3 98 1 1
0 3 ALL 2 1
0 5 118 1 1
0 5 18 1 1
0 5 81 1 1
0 5 ALL 3 2
0 ALL ALL 8 3
ALL ALL ALL 8 3
(3) Approach:
In Hive, same-attribute multi-dimensional statistics are usually solved by using union all to generate every dimension combination and then aggregating with group by:
create EXTERNAL table IF NOT EXISTS t_log (
    url string, c0 string, c1 string, c2 string, unitparams string
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' location '/tmp/decli/1';

select c0, c1, c2, count(host) PV, count(distinct host) UV from (
    select host, c0, c1, c2 from t_log t0
        LATERAL VIEW parse_url_tuple(url, 'HOST') t1 as host
        where get_json_object(t0.unitparams, '$.store.fruit[0].weight') != 9
    union all
    select host, c0, c1, 'ALL' c2 from t_log t0
        LATERAL VIEW parse_url_tuple(url, 'HOST') t1 as host
        where get_json_object(t0.unitparams, '$.store.fruit[0].weight') != 9
    union all
    select host, c0, 'ALL' c1, 'ALL' c2 from t_log t0
        LATERAL VIEW parse_url_tuple(url, 'HOST') t1 as host
        where get_json_object(t0.unitparams, '$.store.fruit[0].weight') != 9
    union all
    select host, 'ALL' c0, 'ALL' c1, 'ALL' c2 from t_log t0
        LATERAL VIEW parse_url_tuple(url, 'HOST') t1 as host
        where get_json_object(t0.unitparams, '$.store.fruit[0].weight') != 9
) test group by c0, c1, c2;
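Incidentally, on newer Hive releases (0.10 and later) GROUPING SETS can produce the same cascade of subtotals in a single pass, without repeating the select for every level. A rough sketch of the idea, not the solution above verbatim; note that non-grouped columns come back as NULL rather than 'ALL', and that distinct aggregates combined with grouping sets may not be supported on every version:

select c0, c1, c2, count(host) PV, count(distinct host) UV from (
    select host, c0, c1, c2 from t_log t0
        LATERAL VIEW parse_url_tuple(url, 'HOST') t1 as host
        where get_json_object(t0.unitparams, '$.store.fruit[0].weight') != 9
) t
group by c0, c1, c2
grouping sets ((c0, c1, c2), (c0, c1), (c0), ());
-- wrap the dimensions in coalesce(c0, 'ALL') etc. if the 'ALL' labels are needed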
2. Multi-dimensional combinations over different attributes
For this scenario we generally use Multi Table/File Inserts. The following is taken from Programming Hive, p. 124:
Making Multiple Passes over the Same Data
Hive has a special syntax for producing multiple aggregations from a single pass
through a source of data, rather than rescanning it for each aggregation. This change
can save considerable processing time for large input data sets. We discussed the details
previously in Chapter 5.
For example, each of the following two queries creates a table from the same source
table, history:
hive> INSERT OVERWRITE TABLE sales
> SELECT * FROM history WHERE action='purchased';
hive> INSERT OVERWRITE TABLE credits
> SELECT * FROM history WHERE action='returned';
This syntax is correct, but inefficient. The following rewrite achieves the same thing,
but using a single pass through the source history table:
hive> FROM history
> INSERT OVERWRITE TABLE sales SELECT * WHERE action='purchased'
> INSERT OVERWRITE TABLE credits SELECT * WHERE action='returned';
The Hive wiki tutorial shows the same pattern with per-insert aggregations:
FROM pv_users
INSERT OVERWRITE TABLE pv_gender_sum
    SELECT pv_users.gender, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.gender
INSERT OVERWRITE DIRECTORY '/user/data/tmp/pv_age_sum'
    SELECT pv_users.age, count(DISTINCT pv_users.userid)
    GROUP BY pv_users.age;
https://cwiki.apache.org/confluence/display/Hive/Tutorial
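Tying this back to the different-attribute case: one scan of the source can feed several unrelated aggregations, each landing in its own partition. A hedged sketch reusing the t_log table from above (the t_stat target table and its dim partition column are invented for illustration):

FROM t_log
INSERT OVERWRITE TABLE t_stat PARTITION (dim='c0')
    SELECT c0, count(1) GROUP BY c0
INSERT OVERWRITE TABLE t_stat PARTITION (dim='c1')
    SELECT c1, count(1) GROUP BY c1;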
Caveats and a few tips:
1. Hive union all usage: it is not supported at the top level of a query (wrap it in a subquery), and the column names and types of every select branch must match exactly. For example:
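A minimal sketch reusing t_log from above; both branches alias their column to the same name so the signatures line up:

select count(1) from (
    select url as val from t_log
    union all
    select unitparams as val from t_log
) tmp;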
2. Result ordering: you can add marker characters of your own to control the sort order, for instance:
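One way to read this (a sketch, interpreting the tip as a literal sort-key column tagged onto each union all branch):

select c0, c1, c2, flag from (
    select c0, c1, c2, 0 flag from t_log
    union all
    select c0, 'ALL' c1, 'ALL' c2, 1 flag from t_log
) t
order by flag, c0, c1, c2;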
3. Multi-insert scans the source only once, just like union all, but because it has to insert into multiple partitions it does a lot of extra work and ends up taking far longer; it spawns multiple jobs, whereas union all itself runs as a single job.
On getting the multiple jobs produced by insert overwrite to run in parallel:
set hive.exec.parallel=true; // enable parallel job execution
set hive.exec.parallel.thread.number=16; // maximum parallelism for a single SQL statement; default is 8
http://superlxw1234.iteye.com/blog/1703713
4. Hive currently does not support a subquery inside not in; HQL like the following is not supported:
To find rows whose key appears in table a but not in table b:
select a.key from a where key not in (select key from b) -- not supported in Hive
It can be rewritten with a left outer join (assuming table b has another column key1 that is never null; filtering on b.key is null works just as well):
select a.key from a left outer join b on a.key = b.key where b.key1 is null
5. left outer join: don't chain three or more of them in a row; group them two at a time and wrap each pair in a subquery, e.g.:
select p.ssi, p.pv, p.uv, p.nuv, p.visits, '2012-06-19 17:00:00' from (
    select p1.ssi, p1.pv, p1.visits, p2.uv, p3.nuv from (
        select ssi, count(1) pv, sum(visits) visits from FactClickAnalysis
        where logTime <= '2012-06-19 18:00:00' and logTime >= '2012-06-19 17:00:00' group by ssi
    ) p1
    left outer join (
        select ssi, count(1) uv from ( select ssi, cookieid from FactClickAnalysis
        where logTime <= '2012-06-19 18:00:00' and logTime >= '2012-06-19 17:00:00' group by ssi, cookieid ) t1 group by ssi
    ) p2 on p1.ssi = p2.ssi
    left outer join (
        select ssi, count(1) nuv from FactClickAnalysis
        where logTime = insertTime and logTime <= '2012-06-19 18:00:00' and logTime >= '2012-06-19 17:00:00' group by ssi
    ) p3 on p1.ssi = p3.ssi
) p;
6. Running Hive's MR jobs in local mode:
http://superlxw1234.iteye.com/blog/1703546
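The knobs involved, as far as I know (available since roughly Hive 0.7; the values below are illustrative):

set hive.exec.mode.local.auto=true; // let Hive run sufficiently small jobs locally
set hive.exec.mode.local.auto.inputbytes.max=134217728; // input-size ceiling for local mode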
7. An error encountered when Hive dynamic partitioning creates too many partitions:
http://superlxw1234.iteye.com/blog/1677938
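The usual fix is to raise the caps (standard Hive settings; the values are illustrative):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions=100000; // cap for the whole job
set hive.exec.max.dynamic.partitions.pernode=10000; // cap per mapper/reducer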
8. Making clever use of greedy regex matching in Hive:
http://superlxw1234.iteye.com/blog/1751216
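The trick in one line: a greedy .* swallows as much as it can, so the capture group ends up holding whatever follows the last delimiter. A small sketch:

select regexp_extract('http://my.oschina.net/leejun2005/blog/83058', '^.*/(.*)$', 1);
-- returns 83058: the greedy .* consumes everything up to the final '/'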
9. Matching fields that are entirely Chinese in Hive:
Just use the Java regex for Chinese characters:
name rlike '^[\\u4e00-\\u9fa5]+$'
To test whether a field is all digits:
select mobile from woa_login_log_his where pt = '2012-01-10' and mobile rlike '^\\d+$' limit 50;
10. Using the SQL window functions LAG/LEAD/FIRST/LAST in Hive:
http://superlxw1234.iteye.com/blog/1600323
http://www.shaoqun.com/a/18839.aspx
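The links above date from when these had to be hand-rolled; from Hive 0.11 on they are built in. A minimal sketch against a hypothetical t_visit(userid, logtime) table:

select userid, logtime,
    lag(logtime) over (partition by userid order by logtime) prev_time,
    lead(logtime) over (partition by userid order by logtime) next_time,
    first_value(logtime) over (partition by userid order by logtime) first_time
from t_visit;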
11. Hive tuning: controlling the number of map and reduce tasks in a Hive job:
http://superlxw1234.iteye.com/blog/1582880
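The usual levers of that era (a sketch; values are illustrative and the parameter names are the pre-YARN ones):

set mapred.max.split.size=256000000; // influences how many map tasks are launched
set hive.exec.reducers.bytes.per.reducer=1000000000; // data volume each reducer handles
set mapred.reduce.tasks=15; // or pin the reduce count directly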
12. Escaping $ and other special characters in Hive:
http://superlxw1234.iteye.com/blog/1568739
13. Date handling:
Get the date N days ago:
select from_unixtime(unix_timestamp('20111102','yyyyMMdd') - N*86400,'yyyyMMdd') from t_lxw_test1 limit 1;
Get the number of days/seconds/minutes, etc. between two dates:
select ( unix_timestamp('2011-11-02','yyyy-MM-dd')-unix_timestamp('2011-11-01','yyyy-MM-dd') ) / 86400 from t_lxw_test limit 1;
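Hive also has a built-in datediff(enddate, startdate) that returns the whole-day difference directly (check availability on your version):

select datediff('2011-11-02', '2011-11-01') from t_lxw_test limit 1; -- returns 1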
14. Cleaning up Hive's temporary files under hive.exec.scratchdir:
http://hi.baidu.com/youziguo/item/1dd7e6315dcc0f28b2c0c576
REF:
http://superlxw1234.iteye.com/blog/1536440
http://liubingwwww.blog.163.com/blog/static/3048510720125201749323/
http://blog.csdn.net/azhao_dn/article/details/6921429
http://superlxw1234.iteye.com/category/228899