Apache ORC is the smallest, fastest columnar storage for Hadoop workloads.
The Apache ORC file format is a columnar storage format for the Hadoop ecosystem. It dates back to early 2013 and originated in Apache Hive, where it was created to reduce Hadoop storage footprint and speed up Hive queries.
ACID Support
Includes support for ACID transactions and snapshot isolation.
Built-in Indexes
Jump to the right row with indexes including minimum, maximum, and bloom filters for each column.
Complex Types
Supports all of Hive's types including the compound types: structs, lists, maps, and unions.
ORC (Optimized RC File) storage grew out of the RC (Record Columnar File) format. RC is a columnar storage engine with poor support for schema evolution (changing the schema requires regenerating the data); ORC improves on RC mainly in compression encoding and query performance, but its schema-evolution support remains weak. RC/ORC were first used in Hive, gained momentum, and eventually became an independent project. Hive 1.x's support for transactions and update operations is implemented on top of ORC (the other storage formats do not support it yet). ORC has since acquired some very advanced features, such as support for update operations, ACID, and complex types like struct and array. You can use the complex types to build a Parquet-like nested data structure, but when there are many levels of nesting this becomes cumbersome and complex to write, whereas Parquet's schema representation expresses deeply nested data types more easily.
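The compound types above can be written in ORC's compact type-string syntax (the form accepted by the Java library's `TypeDescription.fromString`); a sketch of a nested schema, with field names chosen only for illustration:

```
struct<
  name:string,
  scores:array<int>,
  address:struct<city:string,zip:string>,
  attrs:map<string,string>,
  payload:uniontype<int,string>
>
```

As the surrounding text notes, each additional level of nesting adds another `struct<...>` wrapper, which is where deeply nested schemas become verbose.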
ACID support
Historically, the only way to atomically add data to a table in Hive was to add a new partition. Updating or deleting data in a partition required removing the old partition and adding it back with the new data, and this could not be done atomically.
However, users' data is continually changing, and as Hive matured, users required reliability guarantees despite the churning data lake. Thus, we needed to implement ACID transactions that guarantee atomicity, consistency, isolation, and durability. Although we support ACID transactions, they are not designed to support OLTP requirements. Hive can support millions of rows updated per transaction, but it cannot support millions of transactions an hour.
Additionally, we wanted to support streaming ingest into Hive tables, where streaming applications like Flume or Storm could write data into Hive, transactions would commit once a minute, and queries would see either all of a transaction or none of it.
HDFS is a write-once file system and ORC is a write-once file format, so edits are implemented using base files and delta files in which insert, update, and delete operations are recorded.
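The base-plus-delta idea can be illustrated with a toy sketch (plain Python, not the actual Hive ACID implementation): a reader produces the current view of a table by replaying delta operations, keyed by row id, over an immutable base snapshot.

```python
def apply_deltas(base, deltas):
    """Merge a base snapshot with ordered delta operations.

    base:   dict mapping row_id -> row
    deltas: list of (op, row_id, row) tuples, op in {"insert", "update", "delete"}
    Returns the merged view without mutating the base (the files are write-once).
    """
    merged = dict(base)
    for op, row_id, row in deltas:
        if op == "delete":
            merged.pop(row_id, None)
        else:  # insert and update both record the latest version of the row
            merged[row_id] = row
    return merged

base = {1: {"name": "ann", "age": 34}, 2: {"name": "bob", "age": 51}}
deltas = [
    ("update", 2, {"name": "bob", "age": 52}),
    ("insert", 3, {"name": "eve", "age": 29}),
    ("delete", 1, None),
]
print(apply_deltas(base, deltas))
# {2: {'name': 'bob', 'age': 52}, 3: {'name': 'eve', 'age': 29}}
```

The real format also versions the deltas per transaction so that a query sees all of a committed transaction or none of it; that bookkeeping is omitted here.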
Indexes
ORC provides three levels of indexes within each file:
file level - statistics about the values in each column across the entire file
stripe level - statistics about the values in each column for each stripe
row level - statistics about the values in each column for each set of 10,000 rows within a stripe
The file and stripe level column statistics are in the file footer so that they are easy to access to determine if the rest of the file needs to be read at all. Row level indexes include both the column statistics for each row group and the position for seeking to the start of the row group.
Column statistics always contain the count of values and whether null values are present. Most other primitive types include the minimum and maximum values, and numeric types additionally include the sum. As of Hive 1.2, the indexes can include bloom filters, which provide a much more selective filter.
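Why a bloom filter is more selective than min/max statistics can be shown with a toy sketch (ORC's real filters use Murmur3 hashing and tuned sizing; this version just salts Python's built-in `hash()`): a value that falls inside a column's min/max range cannot be ruled out by the range alone, but a bloom filter usually can rule it out.

```python
class ToyBloomFilter:
    """Toy bloom filter: k salted hash positions set bits in an integer bitset."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # plain int used as a bitset

    def _positions(self, value):
        # Derive k positions by salting the hash with a seed.
        return [hash((seed, value)) % self.num_bits for seed in range(self.num_hashes)]

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        # False means "definitely absent"; True means "possibly present".
        return all((self.bits >> pos) & 1 for pos in self._positions(value))

ages = ToyBloomFilter()
for age in (29, 34, 51):
    ages.add(age)

print(ages.might_contain(51))  # True: added values are always found
# 40 lies between min 29 and max 51, so min/max statistics alone could not
# skip this row group for "age = 40"; the bloom filter usually can (False).
print(ages.might_contain(40))
```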
The indexes at all levels are used by the reader using Search ARGuments or SARGs, which are simplified expressions that restrict the rows that are of interest. For example, if a query was looking for people older than 100 years old, the SARG would be “age > 100” and only files, stripes, or row groups that had people over 100 years old would be read.
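The "age > 100" example can be sketched as a reader consulting per-row-group min/max statistics before deciding what to read; this is a toy model of the pruning idea, not ORC's actual SARG evaluator.

```python
# Toy model of row-group pruning with column statistics.
# Each row group stores min/max for the "age" column, mirroring ORC's row-level index.
row_groups = [
    {"id": 0, "age_min": 18, "age_max": 42},
    {"id": 1, "age_min": 35, "age_max": 99},
    {"id": 2, "age_min": 60, "age_max": 117},
]

def prune(groups, lower_bound):
    """Return ids of row groups that might satisfy 'age > lower_bound'.

    A group whose maximum is <= the bound can be skipped entirely,
    since no row inside it can match the predicate."""
    return [g["id"] for g in groups if g["age_max"] > lower_bound]

print(prune(row_groups, 100))  # [2] -- only one row group needs to be read
```

The same check applies at file and stripe level using the footer statistics, so whole files or stripes can be skipped before any row-level index is consulted.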