HIVE-279 "Implement predicate push down for hive queries": https://issues.apache.org/jira/browse/HIVE-279
HIVE-1538 "FilterOperator is applied twice with ppd on": https://issues.apache.org/jira/browse/HIVE-1538
Predicate pushdown (PPD) was implemented in HIVE-279, and HIVE-1538 later removed the redundant filter it left behind. PPD pushes filter conditions down so that they sit right next to the TableScanOperator, discarding useless rows as early as possible.
The default in org.apache.hadoop.hive.conf.HiveConf:
HIVEOPTPPD("hive.optimize.ppd", true), // predicate pushdown is enabled by default
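For reference, the two properties exercised throughout this post can be set per session, from the CLI or a script; a minimal sketch:
-- enable predicate pushdown (HIVE-279)
set hive.optimize.ppd=true;
-- remove the duplicate FilterOperator that pushdown would otherwise leave behind (HIVE-1538)
set hive.ppd.remove.duplicatefilters=true;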
The Javadoc of org.apache.hadoop.hive.ql.ppd.PredicatePushDown explains the idea (note that its claim that pushdown "is disable by default" is stale; as the HiveConf entry above shows, the default is true):
/**
* Implements predicate pushdown. Predicate pushdown is a term borrowed from relational
* databases even though for Hive it is predicate pushup.
* The basic idea is to process expressions as early in the plan as possible. The default plan
* generation adds filters where they are seen but in some instances some of the filter expressions
* can be pushed nearer to the operator that sees this particular data for the first time.
* e.g.
* select a.*, b.*
* from a join b on (a.col1 = b.col1)
* where a.col1 > 20 and b.col2 > 40
*
* For the above query, the predicates (a.col1 > 20) and (b.col2 > 40), without predicate pushdown,
* would be evaluated after the join processing has been done. Suppose the two predicates filter out
* most of the rows from a and b, the join is unnecessarily processing these rows.
* With predicate pushdown, these two predicates will be processed before the join.
*
* Predicate pushdown is enabled by setting hive.optimize.ppd to true. It is disable by default.
*
* The high-level algorithm is describe here
* - An operator is processed after all its children have been processed
* - An operator processes its own predicates and then merges (conjunction) with the processed
* predicates of its children. In case of multiple children, there are combined using
* disjunction (OR).
* - A predicate expression is processed for an operator using the following steps
* - If the expr is a constant then it is a candidate for predicate pushdown
* - If the expr is a col reference then it is a candidate and its alias is noted
* - If the expr is an index and both the array and index expr are treated as children
* - If the all child expr are candidates for pushdown and all of the expression reference
* only one alias from the operator's RowResolver then the current expression is also a
* candidate
* One key thing to note is that some operators (Select, ReduceSink, GroupBy, Join etc) change
* the columns as data flows through them. In such cases the column references are replaced by
* the corresponding expression in the input data.
*/
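To make the Javadoc example concrete: with pushdown, the two predicates are evaluated before the join, as if each input had been pre-filtered. A conceptual sketch of the equivalent query (a and b are the hypothetical tables from the Javadoc, not real ones):
SELECT a.*, b.*
FROM (SELECT * FROM a WHERE col1 > 20) a
JOIN (SELECT * FROM b WHERE col2 > 40) b
  ON (a.col1 = b.col1);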
Below you can see the effect of turning PPD off and on, and of the HIVE-1538 fix. The SQL comes from hive-trunk/ql/src/test/queries/clientpositive/ppd_gby.q:
EXPLAIN
SELECT src1.c1
FROM
(SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400')
First, with PPD disabled, the subquery's WHERE (value > 'val_10') stays where it was written on the map side, and the outer WHERE is only evaluated on the reduce side, after the group by:
hive> set hive.optimize.ppd = false;
hive> EXPLAIN
> SELECT src1.c1
> FROM
> (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
> WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400');
OK
ABSTRACT SYNTAX TREE:
(TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-0 is a root stage
STAGE PLANS:
Stage: Stage-1
Map Reduce
Alias -> Map Operator Tree:
src1:src
TableScan
alias: src
Filter Operator
predicate:
expr: (value > 'val_10')
type: boolean
Select Operator
expressions:
expr: key
type: string
expr: value
type: string
outputColumnNames: key, value
Group By Operator
aggregations:
expr: count(key)
bucketGroup: false
keys:
expr: value
type: string
mode: hash
outputColumnNames: _col0, _col1
Reduce Output Operator
key expressions:
expr: _col0
type: string
sort order: +
Map-reduce partition columns:
expr: _col0
type: string
tag: -1
value expressions:
expr: _col1
type: bigint
Reduce Operator Tree:
Group By Operator
aggregations:
expr: count(VALUE._col0)
bucketGroup: false
keys:
expr: KEY._col0
type: string
mode: mergepartial
outputColumnNames: _col0, _col1
Select Operator
expressions:
expr: _col0
type: string
expr: _col1
type: bigint
outputColumnNames: _col0, _col1
Filter Operator
predicate:
expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
type: boolean
Select Operator
expressions:
expr: _col0
type: string
outputColumnNames: _col0
File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Stage: Stage-0
Fetch Operator
limit: -1
Next, with PPD enabled again (hive.optimize.ppd=true) but hive.ppd.remove.duplicatefilters=false, i.e. the behavior before HIVE-1538: only the c1 > 'val_200' part of the outer WHERE can be pushed down (c2 is an aggregate computed by the group by), so the TableScan is followed by the combined filter ((value > 'val_10') and (value > 'val_200')), but the original (value > 'val_10') filter from the subquery is also left in place below it, and the FilterOperator is applied twice:
hive> set hive.ppd.remove.duplicatefilters=false;
hive> EXPLAIN
> SELECT src1.c1
> FROM
> (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
> WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400')
> ;
OK
ABSTRACT SYNTAX TREE:
(TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-0 is a root stage
STAGE PLANS:
Stage: Stage-1
Map Reduce
Alias -> Map Operator Tree:
src1:src
TableScan
alias: src
Filter Operator
predicate:
expr: ((value > 'val_10') and (value > 'val_200'))
type: boolean
Filter Operator
predicate:
expr: (value > 'val_10')
type: boolean
Select Operator
expressions:
expr: key
type: string
expr: value
type: string
outputColumnNames: key, value
Group By Operator
aggregations:
expr: count(key)
bucketGroup: false
keys:
expr: value
type: string
mode: hash
outputColumnNames: _col0, _col1
Reduce Output Operator
key expressions:
expr: _col0
type: string
sort order: +
Map-reduce partition columns:
expr: _col0
type: string
tag: -1
value expressions:
expr: _col1
type: bigint
Reduce Operator Tree:
Group By Operator
aggregations:
expr: count(VALUE._col0)
bucketGroup: false
keys:
expr: KEY._col0
type: string
mode: mergepartial
outputColumnNames: _col0, _col1
Select Operator
expressions:
expr: _col0
type: string
expr: _col1
type: bigint
outputColumnNames: _col0, _col1
Filter Operator
predicate:
expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
type: boolean
Select Operator
expressions:
expr: _col0
type: string
outputColumnNames: _col0
File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Stage: Stage-0
Fetch Operator
limit: -1
Time taken: 0.208 seconds
Finally, with hive.ppd.remove.duplicatefilters=true (the default introduced by HIVE-1538), only the combined filter ((value > 'val_10') and (value > 'val_200')) remains directly under the TableScan; the duplicate (value > 'val_10') filter from the previous plan is gone:
hive> set hive.ppd.remove.duplicatefilters=true;
hive> EXPLAIN
> SELECT src1.c1
> FROM
> (SELECT src.value as c1, count(src.key) as c2 from src where src.value > 'val_10' group by src.value) src1
> WHERE src1.c1 > 'val_200' and (src1.c2 > 30 or src1.c1 < 'val_400');
OK
ABSTRACT SYNTAX TREE:
(TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) value) c1) (TOK_SELEXPR (TOK_FUNCTION count (. (TOK_TABLE_OR_COL src) key)) c2)) (TOK_WHERE (> (. (TOK_TABLE_OR_COL src) value) 'val_10')) (TOK_GROUPBY (. (TOK_TABLE_OR_COL src) value)))) src1)) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src1) c1))) (TOK_WHERE (and (> (. (TOK_TABLE_OR_COL src1) c1) 'val_200') (or (> (. (TOK_TABLE_OR_COL src1) c2) 30) (< (. (TOK_TABLE_OR_COL src1) c1) 'val_400'))))))
STAGE DEPENDENCIES:
Stage-1 is a root stage
Stage-0 is a root stage
STAGE PLANS:
Stage: Stage-1
Map Reduce
Alias -> Map Operator Tree:
src1:src
TableScan
alias: src
Filter Operator
predicate:
expr: ((value > 'val_10') and (value > 'val_200'))
type: boolean
Select Operator
expressions:
expr: key
type: string
expr: value
type: string
outputColumnNames: key, value
Group By Operator
aggregations:
expr: count(key)
bucketGroup: false
keys:
expr: value
type: string
mode: hash
outputColumnNames: _col0, _col1
Reduce Output Operator
key expressions:
expr: _col0
type: string
sort order: +
Map-reduce partition columns:
expr: _col0
type: string
tag: -1
value expressions:
expr: _col1
type: bigint
Reduce Operator Tree:
Group By Operator
aggregations:
expr: count(VALUE._col0)
bucketGroup: false
keys:
expr: KEY._col0
type: string
mode: mergepartial
outputColumnNames: _col0, _col1
Filter Operator
predicate:
expr: ((_col0 > 'val_200') and ((_col1 > 30) or (_col0 < 'val_400')))
type: boolean
Select Operator
expressions:
expr: _col0
type: string
expr: _col1
type: bigint
outputColumnNames: _col0, _col1
Select Operator
expressions:
expr: _col0
type: string
outputColumnNames: _col0
File Output Operator
compressed: false
GlobalTableId: 0
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Stage: Stage-0
Fetch Operator
limit: -1
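The query file itself can also be run through Hive's q-test harness from a source checkout; a sketch, assuming the ant-based build that hive-trunk used at the time:
ant test -Dtestcase=TestCliDriver -Dqfile=ppd_gby.q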