Apache Lucene - Index File Formats
Index File Formats
This document defines the index file formats used in Lucene version 3.0. If you are using a different version of Lucene, please consult the copy of docs/fileformats.html that was distributed with the version you are using.
Apache Lucene is written in Java, but several efforts are underway to write versions of Lucene in other programming languages. If these versions are to remain compatible with Apache Lucene, then a language-independent definition of the Lucene index format is required. This document thus attempts to provide a complete and independent definition of the Apache Lucene 3.0 file formats.
As Lucene evolves, this document should evolve. Versions of Lucene in different programming languages should endeavor to agree on file formats, and generate new versions of this document.
Compatibility notes are provided in this document, describing how file formats have changed from prior versions.
In version 2.1, the file format was changed to allow lock-less commits (i.e., no more commit lock). The change is fully backwards compatible: you can open a pre-2.1 index for searching or adding/deleting of docs. When the new segments file is saved (committed), it will be written in the new file format (meaning no specific "upgrade" process is needed). But note that once a commit has occurred, pre-2.1 Lucene will not be able to read the index.
In version 2.3, the file format was changed to allow segments to share a single set of doc store (vectors & stored fields) files. This allows for faster indexing in certain cases. The change is fully backwards compatible (in the same way as the lock-less commits change in 2.1).
Definitions
The fundamental concepts in Lucene are index, document, field and term.
- An index contains a sequence of documents.
- A document is a sequence of fields.
- A field is a named sequence of terms.
- A term is a string.
The same string in two different fields is considered a different term. Thus terms are represented as a pair of strings, the first naming the field, and the second naming text within the field.
Inverted Indexing
The index stores statistics about terms in order to make term-based search more efficient. Lucene's index falls into the family of indexes known as an inverted index. This is because it can list, for a term, the documents that contain it. This is the inverse of the natural relationship, in which documents list terms.
Types of Fields
In Lucene, fields may be stored, in which case their text is stored in the index literally, in a non-inverted manner. Fields that are inverted are called indexed. A field may be both stored and indexed.
The text of a field may be tokenized into terms to be indexed, or the text of a field may be used literally as a term to be indexed. Most fields are tokenized, but sometimes it is useful for certain identifier fields to be indexed literally.
See the Field Java docs for more information on Fields.
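For instance, in the Lucene 3.0 Java API these combinations are expressed through the Field constructor. A minimal sketch (the field names and values here are invented for illustration):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class FieldTypes {
    public static Document example() {
        Document doc = new Document();
        // Stored and indexed; the analyzer tokenizes the text into terms.
        doc.add(new Field("title", "Apache Lucene in Action",
                          Field.Store.YES, Field.Index.ANALYZED));
        // Indexed literally: the whole value becomes a single term (useful for identifiers).
        doc.add(new Field("isbn", "978-1933988177",
                          Field.Store.YES, Field.Index.NOT_ANALYZED));
        // Indexed but not stored: searchable, yet not returned with hits.
        doc.add(new Field("body", "full text goes here ...",
                          Field.Store.NO, Field.Index.ANALYZED));
        return doc;
    }
}
```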
Segments
Lucene indexes may be composed of multiple sub-indexes, or segments. Each segment is a fully independent index, which could be searched separately. Indexes evolve by:
- Creating new segments for newly added documents.
- Merging existing segments.
Searches may involve multiple segments and/or multiple indexes, each index potentially composed of a set of segments.
Document Numbers
Internally, Lucene refers to documents by an integer document number. The first document added to an index is numbered zero, and each subsequent document added gets a number one greater than the previous.
Note that a document's number may change, so caution should be taken when storing these numbers outside of Lucene. In particular, numbers may change in the following situations:
- The numbers stored in each segment are unique only within the segment, and must be converted before they can be used in a larger context. The standard technique is to allocate each segment a range of values, based on the range of numbers used in that segment. To convert a document number from a segment to an external value, the segment's base document number is added. To convert an external value back to a segment-specific value, the segment is identified by the range that the external value is in, and the segment's base value is subtracted. For example, two five-document segments might be combined, so that the first segment has a base value of zero, and the second of five. Document three from the second segment would have an external value of eight. (A sketch of this conversion follows this list.)
- When documents are deleted, gaps are created in the numbering. These are eventually removed as the index evolves through merging. Deleted documents are dropped when segments are merged. A freshly-merged segment thus has no gaps in its numbering.
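Below is a minimal sketch of the base-offset conversion just described. The class and method names are ours, purely for illustration; Lucene performs this mapping internally and does not expose it this way:

```java
// Hypothetical illustration of the base-offset technique; not Lucene API.
public class DocNumberMapping {
    private final int[] segmentBases; // segmentBases[i] = sum of sizes of segments 0..i-1

    public DocNumberMapping(int[] segmentSizes) {
        segmentBases = new int[segmentSizes.length];
        int base = 0;
        for (int i = 0; i < segmentSizes.length; i++) {
            segmentBases[i] = base;
            base += segmentSizes[i];
        }
    }

    /** Segment-local doc number -> external (index-wide) doc number. */
    public int toExternal(int segment, int localDoc) {
        return segmentBases[segment] + localDoc;
    }

    /** External doc number -> the segment whose range contains it. */
    public int segmentFor(int externalDoc) {
        int seg = 0;
        while (seg + 1 < segmentBases.length && segmentBases[seg + 1] <= externalDoc) {
            seg++;
        }
        return seg;
    }

    /** External doc number -> segment-local doc number. */
    public int toLocal(int externalDoc) {
        return externalDoc - segmentBases[segmentFor(externalDoc)];
    }
}
```

For the example above: with segment sizes {5, 5}, toExternal(1, 3) returns 8, and toLocal(8) returns document 3 of segment 1.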
Overview
Each segment index maintains the following:
- Field names. This contains the set of field names used in the index.
- Stored Field values. This contains, for each document, a list of attribute-value pairs, where the attributes are field names. These are used to store auxiliary information about the document, such as its title, url, or an identifier to access a database. The set of stored fields are what is returned for each hit when searching. This is keyed by document number.
- Term dictionary. A dictionary containing all of the terms used in all of the indexed fields of all of the documents. The dictionary also contains the number of documents which contain the term, and pointers to the term's frequency and proximity data.
- Term Frequency data. For each term in the dictionary, the numbers of all the documents that contain that term, and the frequency of the term in that document if omitTf is false.
- Term Proximity data. For each term in the dictionary, the positions that the term occurs in each document. Note that this will not exist if all fields in all documents set omitTf to true.
- Normalization factors. For each field in each document, a value is stored that is multiplied into the score for hits on that field.
- Term Vectors. For each field in each document, the term vector (sometimes called document vector) may be stored. A term vector consists of term text and term frequency. To add Term Vectors to your index, see the Field constructors.
- Deleted documents. An optional file indicating which documents are deleted.
Details on each of these are provided in subsequent sections.
File Naming
All files belonging to a segment have the same name with varying extensions. The extensions correspond to the different file formats described below. When using the Compound File format (default in 1.4 and greater) these files are collapsed into a single .cfs file (see below for details).
Typically, all segments in an index are stored in a single directory, although this is not required.
As of version 2.1 (lock-less commits), file names are never re-used (there is one exception, "segments.gen", see below). That is, when any file is saved to the Directory it is given a never before used filename. This is achieved using a simple generations approach. For example, the first segments file is segments_1, then segments_2, etc. The generation is a sequential long integer represented in alpha-numeric (base 36) form.
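The base-36 rendering matches what Java's Long.toString produces with radix 36. A small sketch of the naming sequence (illustrative only, not Lucene's internal code):

```java
public class SegmentNames {
    public static void main(String[] args) {
        // Generations are sequential longs rendered in base 36:
        // segments_1 ... segments_9, segments_a ... segments_z, segments_10, ...
        for (long gen = 1; gen <= 40; gen++) {
            System.out.println("segments_" + Long.toString(gen, Character.MAX_RADIX));
        }
    }
}
```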
Summary of File Extensions
The following table summarizes the names and extensions of the files in Lucene:
| Name | Extension | Brief Description |
| --- | --- | --- |
| Segments File | segments.gen, segments_N | Stores information about segments |
| Lock File | write.lock | The write lock prevents multiple IndexWriters from writing to the same index |
| Compound File | .cfs | An optional "virtual" file consisting of all the other index files for systems that frequently run out of file handles |
| Fields | .fnm | Stores information about the fields |
| Field Index | .fdx | Contains pointers to field data |
| Field Data | .fdt | The stored fields for documents |
| Term Infos | .tis | Part of the term dictionary, stores term info |
| Term Info Index | .tii | The index into the Term Infos file |
| Frequencies | .frq | Contains the list of docs which contain each term along with frequency |
| Positions | .prx | Stores position information about where a term occurs in the index |
| Norms | .nrm | Encodes length and boost factors for docs and fields |
| Term Vector Index | .tvx | Stores offset into the document data file |
| Term Vector Documents | .tvd | Contains information about each document that has term vectors |
| Term Vector Fields | .tvf | The field level info about term vectors |
| Deleted Documents | .del | Info about which documents are deleted |
Primitive Types
Byte
The most primitive type is an eight-bit byte. Files are accessed as sequences of bytes. All other data types are defined as sequences of bytes, so file formats are byte-order independent.
UInt32
32-bit unsigned integers are written as four bytes, high-order bytes first.
UInt32 --> <Byte>^4
UInt64
64-bit unsigned integers are written as eight bytes, high-order bytes first.
UInt64 --> <Byte>^8
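In Java terms, DataOutputStream already writes integers high-order byte first, so the two fixed-width types can be sketched as follows (illustrative, not Lucene's own I/O classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FixedWidthInts {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(1);   // UInt32: four bytes, high-order first -> 00 00 00 01
        out.writeLong(1L); // UInt64: eight bytes, high-order first
        out.flush();
        for (byte b : bytes.toByteArray()) {
            System.out.printf("%02x ", b);
        }
        // prints: 00 00 00 01 00 00 00 00 00 00 00 01
    }
}
```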
VInt
A variable-length format for positive integers is defined where the high-order bit of each byte indicates whether more bytes remain to be read. The low-order seven bits are appended as increasingly more significant bits in the resulting integer value. Thus values from zero to 127 may be stored in a single byte, values from 128 to 16,383 may be stored in two bytes, and so on.
VInt Encoding Example
| Value | First byte | Second byte | Third byte |
| --- | --- | --- | --- |
| 0 | 00000000 | | |
| 1 | 00000001 | | |
| 2 | 00000010 | | |
| ... | | | |
| 127 | 01111111 | | |
| 128 | 10000000 | 00000001 | |
| 129 | 10000001 | 00000001 | |
| 130 | 10000010 | 00000001 | |
| ... | | | |
| 16,383 | 11111111 | 01111111 | |
| 16,384 | 10000000 | 10000000 | 00000001 |
| 16,385 | 10000001 | 10000000 | 00000001 |
| ... | | | |
This provides compression while still being efficient to decode.
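A standalone Java sketch of this scheme, mirroring the rule above (the class and method names are ours, not Lucene's API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class VInt {
    /** Write value seven bits at a time, low-order group first; a set high bit means "more bytes follow". */
    public static void write(OutputStream out, int value) throws IOException {
        while ((value & ~0x7F) != 0) {         // more than 7 significant bits remain
            out.write((value & 0x7F) | 0x80);  // low 7 bits with continuation bit set
            value >>>= 7;
        }
        out.write(value);                      // final byte, continuation bit clear
    }

    /** Read bytes until one with a clear high bit, accumulating 7 bits per byte. */
    public static int read(InputStream in) throws IOException {
        int b = in.read();
        int value = b & 0x7F;
        for (int shift = 7; (b & 0x80) != 0; shift += 7) {
            b = in.read();
            value |= (b & 0x7F) << shift;
        }
        return value;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        write(buf, 16384);                     // encodes as 10000000 10000000 00000001
        int decoded = read(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(buf.size() + " bytes, value " + decoded); // 3 bytes, value 16384
    }
}
```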
Chars
Lucene writes Unicode character sequences as UTF-8 encoded bytes.
String
Lucene writes strings as UTF-8 encoded bytes. First the length, in bytes, is written as a VInt, followed by the bytes.
String --> VInt, Chars
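Building on the VInt sketch above, writing a string under this layout might look like the following (again our own helper, not Lucene's API):

```java
import java.io.IOException;
import java.io.OutputStream;

public class StringWrite {
    /** Length in bytes (not chars) as a VInt, then the UTF-8 bytes themselves. */
    public static void writeString(OutputStream out, String s) throws IOException {
        byte[] utf8 = s.getBytes("UTF-8"); // UnsupportedEncodingException is an IOException
        VInt.write(out, utf8.length);
        out.write(utf8);
    }
}
```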
Compound Types
Map<String,String>
In a couple of places Lucene stores a Map<String,String>.
Map<String,String> --> Count, <String,String>^Count
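A sketch of serializing such a map, assuming Count is a plain Int32 written high-order byte first, followed by Count key/value string pairs (the helper name is ours, and the Count width is our assumption):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;

public class MapWrite {
    /** Count (assumed Int32, high-order byte first), then Count <key, value> String pairs. */
    public static void write(OutputStream out, Map<String, String> map) throws IOException {
        int count = map.size();
        out.write(count >>> 24);
        out.write(count >>> 16);
        out.write(count >>> 8);
        out.write(count);
        for (Map.Entry<String, String> e : map.entrySet()) {
            StringWrite.writeString(out, e.getKey());
            StringWrite.writeString(out, e.getValue());
        }
    }
}
```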
Per-Index Files
The files in this section exist one-per-index.
Segments File
The active segments in the index are stored in the segment info file, segments_N. There may be one or more segments_N files in the index; however, the one with the largest generation is the active one (when older segments_N files are present it's because they temporarily cannot be deleted, or, a writer is in the process of committing, or a custom IndexDeletionPolicy is in use). This file lists each segment by name, has details about the separate norms and deletion files, and also contains the size of each segment.
As of 2.1, there is also a file segments.gen. This file contains the current generation (the _N in segments_N) of the index. This is used only as a fallback in case the current generation cannot be accurately determined by directory listing alone (as is the case for some NFS clients with time-based directory cache expiration). This file simply contains an Int32 version header (SegmentInfos.FORMAT_LOCKLESS = -2), followed by the generation recorded as Int64, written twice.
Segments (format as of 2.9) --> Format, Version, NameCounter, SegCount, <SegName, SegSize, DelGen, DocStoreOffset, [DocStoreSegment, DocStoreIsCompoundFile], HasSingleNormFile, NumField, NormGen^NumField, IsCompoundFile, DeletionCount, HasProx, Diagnostics>^SegCount, CommitUserData, Checksum
Format, NameCounter, SegCount, SegSize, NumField, DocStoreOffset, DeletionCount --> Int32
Version, DelGen, NormGen, Checksum --> Int64
SegName, DocStoreSegment --> String
Diagnostics --> Map<String,String>
IsCompoundFile, HasSingleNormFile, DocStoreIsCompoundFile, HasProx --> Int8
CommitUserData --> Map<String,String>
Format is -9 (SegmentInfos.FORMAT_DIAGNOSTICS).
Version counts how often the index has been changed by adding or deleting documents.
NameCounter is used to generate names for new segment files.
SegName is the name of the segment, and is used as the file name prefix for all of the files that compose the segment's index.
SegSize is the number of documents contained in the segment index.
DelGen is the generation count of the separate deletes file. If this is -1, there are no separate deletes. If it is 0, this is a pre-2.1 segment and you must check the filesystem for the existence of _X.del. Anything above zero means there are separate deletes (_X_N.del).
NumField is the size of the array for NormGen, or -1 if there are no NormGens stored.
NormGen records the generation of the separate norms files. If NumField is -1, no NormGens are stored; they are all assumed to be 0 if the segments file was written pre-2.1, and -1 if the segments file is 2.1 or above. The generation then has the same meaning as DelGen (above).
IsCompoundFile records whether the segment is written as a compound file. If this is -1, the segment is not a compound file. If it is 1, the segment is a compound file. If it is 0, the filesystem must be checked to see whether _X.cfs exists.
If HasSingleNormFile is 1, then the field norms are written as a single joined file (with extension .nrm); if it is 0 then each field's norms are stored as separate .fN files. See "Normalization Factors" below for details.
DocStoreOffset, DocStoreSegment, DocStoreIsCompoundFile: If DocStoreOffset is -1, this segment has its own doc store (stored fields values and term vectors) files and DocStoreSegment and DocStoreIsCompoundFile are not stored. In this case all files for stored field values (*.fdt and *.fdx) and term vectors (*.tvf, *.tvd and *.tvx) will be stored with this segment. Otherwise, DocStoreSegment is the name of the segment that has the shared doc store files; DocStoreIsCompoundFile is 1 if that segment is stored in compound file format (as a .cfx file); and DocStoreOffset is the starting document in the shared doc store files where this segment's documents begin. In this case, this segment does not store its own doc store files but instead shares a single set of these files with other segments.
Checksum contains the CRC32 checksum of all bytes in the segments_N file up until the checksum. This is used to verify integrity of the file on opening the index.
DeletionCount records the number of deleted documents in this segment.
HasProx is 1 if any fields in this segment have omitTf set to false; else, it's 0.
CommitUserData stores an optional user-supplied opaque Map<String,String> that was passed to IndexWriter's commit or prepareCommit, or IndexReader's flush methods.
The Diagnostics Map is privately written by IndexWriter, as a debugging aid, for each segment it creates. It includes metadata like the current Lucene version, OS, Java version, why the segment was created (merge, flush, addIndexes), etc.
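Since the Checksum is a standard CRC32 over every byte preceding it, a segments_N file can be verified with java.util.zip.CRC32. A sketch assuming the checksum is the trailing Int64 of the file, as the grammar above implies (our own helper, not Lucene's code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.CRC32;

public class VerifySegments {
    /** Recompute CRC32 over everything before the trailing Int64 checksum and compare. */
    public static boolean verify(String path) throws IOException {
        RandomAccessFile file = new RandomAccessFile(path, "r");
        try {
            if (file.length() < 8) throw new IOException("file too short to hold a checksum");
            long bodyLength = file.length() - 8;   // checksum itself is the final Int64
            CRC32 crc = new CRC32();
            byte[] buf = new byte[8192];
            long remaining = bodyLength;
            while (remaining > 0) {
                int n = file.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n < 0) throw new IOException("truncated file");
                crc.update(buf, 0, n);
                remaining -= n;
            }
            long stored = file.readLong();         // Int64, high-order bytes first
            return stored == crc.getValue();
        } finally {
            file.close();
        }
    }
}
```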
Lock File
The write lock, which is stored in the index directory by default, is named "write.lock". If the lock directory is different from the index directory then the write lock will be named "XXXX-write.lock" where XXXX is a unique prefix derived from the full path to the index directory. When this file is present, a writer is currently modifying the index (adding or removing documents). This lock file ensures that only one writer is modifying the index at a time.
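In practice this lock surfaces through IndexWriter: a second writer opened on the same directory fails with LockObtainFailedException. A minimal sketch against the Lucene 3.0 API (the index path is invented):

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.util.Version;

public class LockDemo {
    public static void main(String[] args) throws Exception {
        FSDirectory dir = FSDirectory.open(new File("/tmp/lucene-index"));
        IndexWriter first = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED); // acquires write.lock
        try {
            // A second writer on the same directory cannot obtain write.lock:
            new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
                    IndexWriter.MaxFieldLength.UNLIMITED);
        } catch (LockObtainFailedException expected) {
            System.out.println("index is already locked: " + expected.getMessage());
        } finally {
            first.close(); // releases write.lock
        }
    }
}
```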