Hive Indexes

 

I. How Hive Implements Indexes

Indexing is a standard database technique, and Hive supports indexes starting with version 0.7. Rather than adopting a 'one size fits all' index implementation, Hive exposes a pluggable index-handler interface and ships one concrete implementation as a reference. The Hive index interface is as follows:

/**
 * HiveIndexHandler defines a pluggable interface for adding new index handlers
 * to Hive.
 */
public interface HiveIndexHandler extends Configurable {
  /**
   * Determines whether this handler implements indexes by creating an index
   * table.
   *
   * @return true if index creation implies creation of an index table in Hive;
   *         false if the index representation is not stored in a Hive table
   */
  boolean usesIndexTable();

  /**
   * Requests that the handler validate an index definition and fill in
   * additional information about its stored representation.
   *
   * @param baseTable
   *          the definition of the table being indexed
   *
   * @param index
   *          the definition of the index being created
   *
   * @param indexTable
   *          a partial definition of the index table to be used for storing the
   *          index representation, or null if usesIndexTable() returns false;
   *          the handler can augment the index's storage descriptor (e.g. with
   *          information about input/output format) and/or the index table's
   *          definition (typically with additional columns containing the index
   *          representation, e.g. pointers into HDFS).
   *
   * @throws HiveException if the index definition is invalid with respect to
   *         either the base table or the supplied index table definition
   */
  void analyzeIndexDefinition(
      org.apache.hadoop.hive.metastore.api.Table baseTable,
      org.apache.hadoop.hive.metastore.api.Index index,
      org.apache.hadoop.hive.metastore.api.Table indexTable)
      throws HiveException;

  /**
   * Requests that the handler generate a plan for building the index; the plan
   * should read the base table and write out the index representation.
   *
   * @param baseTbl
   *          the definition of the table being indexed
   *
   * @param index
   *          the definition of the index
   *
   * @param baseTblPartitions
   *          list of base table partitions, each element mirroring the
   *          corresponding one in indexTblPartitions
   *
   * @param indexTbl
   *          the definition of the index table, or null if usesIndexTable()
   *          returns false
   *
   * @param inputs
   *          inputs for hooks, supplemental inputs going
   *          along with the return value
   *
   * @param outputs
   *          outputs for hooks, supplemental outputs going
   *          along with the return value
   *
   * @return list of tasks to be executed in parallel for building the index
   *
   * @throws HiveException if plan generation fails
   */
  List<Task<?>> generateIndexBuildTaskList(
      org.apache.hadoop.hive.ql.metadata.Table baseTbl,
      org.apache.hadoop.hive.metastore.api.Index index,
      List<Partition> indexTblPartitions, List<Partition> baseTblPartitions,
      org.apache.hadoop.hive.ql.metadata.Table indexTbl,
      Set<ReadEntity> inputs, Set<WriteEntity> outputs)
      throws HiveException;

  /**
   * Generate the list of tasks required to run an index optimized sub-query for the
   * given predicate, using the given indexes. If multiple indexes are
   * provided, it is up to the handler whether to use none, one, some or all of
   * them. The supplied predicate may reference any of the columns from any of
   * the indexes. If the handler decides to use more than one index, it is
   * responsible for generating tasks to combine their search results
   * (e.g. performing a JOIN on the result).
   * @param indexes
   * @param predicate
   * @param pctx
   * @param queryContext contains results, such as query tasks and input configuration
   */
  void generateIndexQuery(List<Index> indexes, ExprNodeDesc predicate,
    ParseContext pctx, HiveIndexQueryContext queryContext);

  /**
   * Check the size of an input query to make sure it fits within the bounds
   *
   * @param inputSize size (in bytes) of the query in question
   * @param conf
   * @return true if query is within the bounds
   */
  boolean checkQuerySize(long inputSize, HiveConf conf);
}

When an index is created, Hive first calls the handler's usesIndexTable method to determine whether the index is stored as a Hive table (the default implementation stores it in Hive). It then calls analyzeIndexDefinition to check that the index definition is valid; if so, the index is registered in the metastore table IDXS, otherwise an exception is thrown. If the CREATE INDEX statement uses WITH DEFERRED REBUILD, then when ALTER INDEX xxx_index ON xxx REBUILD is executed, generateIndexBuildTaskList is called to obtain the MapReduce tasks that populate the index, and those tasks are run.
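As a hedged illustration of how a handler plugs in, here is a minimal sketch: the class name MyIndexHandler and its no-op bodies are hypothetical, and only the method signatures are taken from the interface above (a real handler such as org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler implements these methods with substantial logic).

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.api.Index;
import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.hooks.ReadEntity;
import org.apache.hadoop.hive.ql.hooks.WriteEntity;
import org.apache.hadoop.hive.ql.index.HiveIndexHandler;
import org.apache.hadoop.hive.ql.index.HiveIndexQueryContext;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.parse.ParseContext;
import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;

// Hypothetical skeleton; the trivial bodies are placeholders only.
public class MyIndexHandler implements HiveIndexHandler {

  private Configuration conf;

  @Override
  public void setConf(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public Configuration getConf() {
    return conf;
  }

  @Override
  public boolean usesIndexTable() {
    // Store the index representation in an ordinary Hive table.
    return true;
  }

  @Override
  public void analyzeIndexDefinition(
      org.apache.hadoop.hive.metastore.api.Table baseTable, Index index,
      org.apache.hadoop.hive.metastore.api.Table indexTable)
      throws HiveException {
    // Validate the definition and augment the index table here, e.g. append
    // _bucketname/_offsets columns the way a compact-style index would.
  }

  @Override
  public List<Task<?>> generateIndexBuildTaskList(
      org.apache.hadoop.hive.ql.metadata.Table baseTbl, Index index,
      List<Partition> indexTblPartitions, List<Partition> baseTblPartitions,
      org.apache.hadoop.hive.ql.metadata.Table indexTbl,
      Set<ReadEntity> inputs, Set<WriteEntity> outputs) throws HiveException {
    // A real handler plans MapReduce tasks that read the base table and
    // write the index table; an empty list keeps this sketch inert.
    return new ArrayList<Task<?>>();
  }

  @Override
  public void generateIndexQuery(List<Index> indexes, ExprNodeDesc predicate,
      ParseContext pctx, HiveIndexQueryContext queryContext) {
    // Rewrite the query to consult the index; doing nothing means the
    // optimizer falls back to a plain scan.
  }

  @Override
  public boolean checkQuerySize(long inputSize, HiveConf conf) {
    // Accept any input size in this sketch.
    return true;
  }
}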

 

An index test example:

1. Generate test data

#!/bin/bash
# Generates ~350MB of raw data: one tab-separated "id<TAB>sentence" line per id.
# Usage (path matches the load statement below): ./gen_data.sh > /data/tmp/huzhirong/dual.txt
i=0
while [ $i -ne 1000000 ]
do
        echo -e "$i\tA decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America."
        i=$(($i+1))
done
 

 

2. Create test table 1

 

create table table01 (id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
 
load data local inpath '/data/tmp/huzhirong/dual.txt' overwrite into table table01;
 

 

3. Create test table 2 and select data into it from table 1

 

create table table02 as select id,name as text from table01;
table02's data in HDFS:
hive> dfs -ls /user/hive/warehouse/table02; 
Found 5 items
-rw-r--r--   3 hadoop supergroup   88453176 2013-04-26 20:56 /user/hive/warehouse/table02/000000_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000001_0
-rw-r--r--   3 hadoop supergroup   67109134 2013-04-26 20:56 /user/hive/warehouse/table02/000002_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000003_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000004_0
Test query:
select * from table02 where id=500000;
Result and elapsed time:
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 35.022 seconds

 

4. Create the index

 

create index table02_index on table table02(id) as 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' with deferred rebuild;
alter index table02_index on table02 rebuild;
Loading data to table default.default__table02_table02_index__
Moved to trash: hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/default__table02_table02_index__
Table default.default__table02_table02_index__ stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 87733114, raw_data_size: 0]
MapReduce Jobs Launched: 
Job 0: Map: 3  Reduce: 1   Cumulative CPU: 51.28 sec   HDFS Read: 357021261 HDFS Write: 87733114 SUCCESS
Total MapReduce CPU Time Spent: 51 seconds 280 msec
OK
Time taken: 65.6 seconds
hive> dfs -ls /user/hive/warehouse/default__table02_table02_index__;
Found 1 items
-rw-r--r--   3 hadoop supergroup   87733114 2013-04-26 21:04 /user/hive/warehouse/default__table02_table02_index__/000000_0
 
The data stored in the index table can be inspected:
hive> select * from default__table02_table02_index__ limit 3;
OK
0       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [0]
1       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [352]
2       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [704]
Each row appears to be {indexed value, HDFS file location, array of offsets (there may be several)}.
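This matches the layout of a compact index table: the indexed column, a _bucketname column holding the HDFS file path, and an _offsets array of offsets into that file. A quick way to confirm the schema on this setup (output sketched from the known compact index layout, not captured from a live session):

hive> describe default__table02_table02_index__;
OK
id      int
_bucketname     string
_offsets        array<bigint>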
Exporting the index data to a custom index file (this directory is handed to hive.index.compact.file below):
insert overwrite directory "/tmp/table02_index_data" select `_bucketname`, `_offsets` from   default__table02_table02_index__ where id =500000;
 
Querying table02 data.
Direct query:
hive> select * from table02 where id =500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301301559_29049, Tracking URL = http://namenode.hadoop.game.yy.com:50030/jobdetails.jsp?jobid=job_201301301559_29049
Kill Command = /home/hadoop/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=namenode.hadoop.game.yy.com:8021 -kill job_201301301559_29049
Hadoop job information for Stage-1: number of mappers: 6; number of reducers: 0
2013-04-26 22:34:20,755 Stage-1 map = 0%,  reduce = 0%
2013-04-26 22:34:26,797 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:27,812 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:28,859 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:29,871 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:30,874 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:31,877 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:32,879 Stage-1 map = 83%,  reduce = 0%, Cumulative CPU 11.58 sec
2013-04-26 22:34:33,882 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:34,884 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:35,887 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:36,890 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:37,893 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:38,895 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:39,898 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 12.99 sec
MapReduce Total cumulative CPU time: 12 seconds 990 msec
Ended Job = job_201301301559_29049
MapReduce Jobs Launched: 
Job 0: Map: 6   Cumulative CPU: 12.99 sec   HDFS Read: 357021325 HDFS Write: 357 SUCCESS
Total MapReduce CPU Time Spent: 12 seconds 990 msec
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 33.189 seconds
Query with the index file specified:
hive> set hive.index.compact.file=/tmp/table02_index_data;                                        
hive> set hive.optimize.index.filter=false; 
hive> set hive.input.format=org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat;
hive> select * from table02 where id =500000; 
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301301559_29051, Tracking URL = http://namenode.hadoop.game.yy.com:50030/jobdetails.jsp?jobid=job_201301301559_29051
Kill Command = /home/hadoop/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=namenode.hadoop.game.yy.com:8021 -kill job_201301301559_29051
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-04-26 22:40:06,793 Stage-1 map = 0%,  reduce = 0%
2013-04-26 22:40:12,803 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:13,806 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:14,808 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:15,811 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:16,813 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:17,815 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:18,818 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.69 sec
MapReduce Total cumulative CPU time: 1 seconds 690 msec
Ended Job = job_201301301559_29051
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 1.69 sec   HDFS Read: 33554658 HDFS Write: 357 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 690 msec
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 26.776 seconds
Note the mapper count: it has dropped to 1. Bear in mind, though, that the index data for id=500000 was exported into the index file manually here.
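For reference: instead of exporting index data and pointing hive.index.compact.file at it by hand, later Hive releases can consult compact indexes automatically during query compilation. A hedged sketch of the relevant session settings (the property names are real Hive settings; forcing the minsize to 0 merely makes this small test table eligible):

set hive.optimize.index.filter=true;
set hive.optimize.index.filter.compact.minsize=0;
select * from table02 where id=500000;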
Summary: an index table essentially contains these columns: 1. the indexed column(s) from the source table; 2. _bucketname, the path of the data file in HDFS; 3. the offsets of the indexed rows within that HDFS file. The principle is that by recording the HDFS offsets of each indexed value, Hive can fetch matching data precisely and avoid a full table scan.
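To clean up after the test, the index (and the index table generated for it) can be dropped with standard syntax:

drop index table02_index on table02;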