hive ppd

Implement predicate push down for hive queries
https://issues.apache.org/jira/browse/HIVE-279
FilterOperator is applied twice with ppd on.
https://issues.apache.org/jira/browse/HIVE-1538

Predicate pushdown (ppd) was implemented in HIVE-279, and HIVE-1538 removed the redundant filters it left behind. With ppd, qualifying filter predicates are pushed down to sit directly above the TableScanOperator, so useless rows are discarded as early as possible.
org.apache.hadoop.hive.conf.HiveConf:
HIVEOPTPPD("hive.optimize.ppd", true), // predicate pushdown; enabled by default
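
For reference, both knobs can also be set permanently instead of per session. A minimal hive-site.xml sketch; the property names come from HiveConf (hive.ppd.remove.duplicatefilters is the HIVE-1538 switch used in the transcripts below), and the values shown are assumed to be the trunk defaults:

<property>
  <name>hive.optimize.ppd</name>
  <value>true</value>
</property>
<property>
  <name>hive.ppd.remove.duplicatefilters</name>
  <value>true</value>
</property>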

The Javadoc of org.apache.hadoop.hive.ql.ppd.PredicatePushDown explains the idea:
/**
* Implements predicate pushdown. Predicate pushdown is a term borrowed from relational
* databases even though for Hive it is predicate pushup.
* The basic idea is to process expressions as early in the plan as possible. The default plan
* generation adds filters where they are seen but in some instances some of the filter expressions
* can be pushed nearer to the operator that sees this particular data for the first time.
* e.g.
*  select a.*, b.*
*  from a join b on (a.col1 = b.col1)
*  where a.col1 > 20 and b.col2 > 40

* For the above query, the predicates (a.col1 > 20) and (b.col2 > 40), without predicate pushdown,
* would be evaluated after the join processing has been done. If the two predicates filter out
* most of the rows from a and b, the join unnecessarily processes these rows.
* With predicate pushdown, these two predicates will be processed before the join.
*
* Predicate pushdown is enabled by setting hive.optimize.ppd to true. As the HiveConf entry above shows, it is enabled by default.
*
* The high-level algorithm is described here:
* - An operator is processed after all its children have been processed
* - An operator processes its own predicates and then merges them (conjunction) with the processed
*     predicates of its children. In case of multiple children, they are combined using
*     disjunction (OR).
* - A predicate expression is processed for an operator using the following steps
*    - If the expr is a constant then it is a candidate for predicate pushdown
*    - If the expr is a col reference then it is a candidate and its alias is noted
*    - If the expr is an index expression, then both the array and the index expr are treated as children
*    - If all the child exprs are candidates for pushdown and the whole expression references
*        only one alias from the operator's RowResolver, then the current expression is also a
*        candidate
*   One key thing to note is that some operators (Select, ReduceSink, GroupBy, Join etc) change
*   the columns as data flows through them. In such cases the column references are replaced by
*   the corresponding expression in the input data.
*/
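
The join example in the Javadoc is easy to try for yourself. A minimal sketch, assuming two tables a and b with columns col1 and col2 already exist:

-- assumes tables a and b exist; hive.optimize.ppd=true
EXPLAIN
SELECT a.*, b.*
FROM a JOIN b ON (a.col1 = b.col1)
WHERE a.col1 > 20 AND b.col2 > 40;
-- with ppd on, (col1 > 20) and (col2 > 40) should each appear in a Filter
-- Operator directly above its own TableScan, rather than after the join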

Below you can see the effect of turning ppd on and off, and of the HIVE-1538 patch:
SQL: from hive-trunk/ql/src/test/queries/clientpositive/ppd_gby.q

EXPLAIN
SELECT src1.c1
FROM
(SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400')



hive> set hive.optimize.ppd = false;
hive> EXPLAIN
    > SELECT src1.c1
    > FROM
    > (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
    > WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400');
OK
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        src1:src
          TableScan
            alias: src
            Filter Operator
              predicate:
                  expr: (value > 'val_10')
                  type: boolean
              Select Operator
                expressions:
                      expr: key
                      type: string
                      expr: value
                      type: string
                outputColumnNames: key, value
                Group By Operator
                  aggregations:
                        expr: count(key)
                  bucketGroup: false
                  keys:
                        expr: value
                        type: string
                  mode: hash
                  outputColumnNames: _col0, _col1
                  Reduce Output Operator
                    key expressions:
                          expr: _col0
                          type: string
                    sort order: +
                    Map-reduce partition columns:
                          expr: _col0
                          type: string
                    tag: -1
                    value expressions:
                          expr: _col1
                          type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col1
                  type: bigint
            outputColumnNames: _col0, _col1
            Filter Operator
              predicate:
                  expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
                  type: boolean
              Select Operator
                expressions:
                      expr: _col0
                      type: string
                outputColumnNames: _col0
                File Output Operator
                  compressed: false
                  GlobalTableId: 0
                  table:
                      input format: org.apache.hadoop.mapred.TextInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-0
    Fetch Operator
      limit: -1
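
With ppd off, the only map-side filter is the subquery's own WHERE clause (value > 'val_10'), placed exactly where it appears in the query. The outer predicates on c1 and c2 are evaluated only in the reduce-side Filter Operator, after the group-by has already processed every row that passed the inner filter.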



hive> set hive.optimize.ppd = true;
hive> set hive.ppd.remove.duplicatefilters=false;
hive> EXPLAIN
    > SELECT src1.c1
    > FROM
    > (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
    > WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400')
    > ;
OK
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        src1:src
          TableScan
            alias: src
            Filter Operator
              predicate:
                  expr: ((value > 'val_10') and (value > 'val_200'))
                  type: boolean
              Filter Operator
                predicate:
                    expr: (value > 'val_10')
                    type: boolean
                Select Operator
                  expressions:
                        expr: key
                        type: string
                        expr: value
                        type: string
                  outputColumnNames: key, value
                  Group By Operator
                    aggregations:
                          expr: count(key)
                    bucketGroup: false
                    keys:
                          expr: value
                          type: string
                    mode: hash
                    outputColumnNames: _col0, _col1
                    Reduce Output Operator
                      key expressions:
                            expr: _col0
                            type: string
                      sort order: +
                      Map-reduce partition columns:
                            expr: _col0
                            type: string
                      tag: -1
                      value expressions:
                            expr: _col1
                            type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col1
                  type: bigint
            outputColumnNames: _col0, _col1
            Filter Operator
              predicate:
                  expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
                  type: boolean
              Select Operator
                expressions:
                      expr: _col0
                      type: string
                outputColumnNames: _col0
                File Output Operator
                  compressed: false
                  GlobalTableId: 0
                  table:
                      input format: org.apache.hadoop.mapred.TextInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-0
    Fetch Operator
      limit: -1


Time taken: 0.208 seconds
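
With ppd on but hive.ppd.remove.duplicatefilters=false, the outer predicate on c1 is pushed down and merged with the subquery's filter into ((value > 'val_10') and (value > 'val_200')) directly above the TableScan, while the original filter (value > 'val_10') is left in place underneath it as a second, now-redundant Filter Operator. This is exactly the duplicated FilterOperator reported in HIVE-1538.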





hive> set hive.ppd.remove.duplicatefilters=true;                                                               
hive> EXPLAIN
    > SELECT src1.c1
    > FROM
    > (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
    > WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400');
OK
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        src1:src
          TableScan
            alias: src
            Filter Operator
              predicate:
                  expr: ((value > 'val_10') and (value > 'val_200'))
                  type: boolean
              Select Operator
                expressions:
                      expr: key
                      type: string
                      expr: value
                      type: string
                outputColumnNames: key, value
                Group By Operator
                  aggregations:
                        expr: count(key)
                  bucketGroup: false
                  keys:
                        expr: value
                        type: string
                  mode: hash
                  outputColumnNames: _col0, _col1
                  Reduce Output Operator
                    key expressions:
                          expr: _col0
                          type: string
                    sort order: +
                    Map-reduce partition columns:
                          expr: _col0
                          type: string
                    tag: -1
                    value expressions:
                          expr: _col1
                          type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Filter Operator
            predicate:
                expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
                type: boolean
            Select Operator
              expressions:
                    expr: _col0
                    type: string
                    expr: _col1
                    type: bigint
              outputColumnNames: _col0, _col1
              Select Operator
                expressions:
                      expr: _col0
                      type: string
                outputColumnNames: _col0
                File Output Operator
                  compressed: false
                  GlobalTableId: 0
                  table:
                      input format: org.apache.hadoop.mapred.TextInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-0
    Fetch Operator
      limit: -1
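
With hive.ppd.remove.duplicatefilters=true, the redundant map-side filter is removed and only the merged filter remains above the TableScan. The reduce-side Filter Operator still survives, because (_col1 > 30 or _col0 < 'val_400') refers to the count produced by the group-by and cannot be evaluated any earlier.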