```java
package org.apache.lucene.document;

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.index.IndexWriter; // for javadoc
import org.apache.lucene.util.StringHelper;

import java.io.Reader;
import java.io.Serializable;

/**
 * A field is a section of a Document.  Each field has two parts, a name and a
 * value.  Values may be free text, provided as a String or as a Reader, or they
 * may be atomic keywords, which are not further processed.  Such keywords may
 * be used to represent dates, urls, etc.  Fields are optionally stored in the
 * index, so that they may be returned with hits on the document.
 */
public final class Field extends AbstractField implements Fieldable, Serializable {

  /** Specifies whether and how a field should be stored. */
  public static enum Store {

    /** Store the original field value in the index. This is useful for short texts
     * like a document's title which should be displayed with the results. The
     * value is stored in its original form, i.e. no analyzer is used before it is
     * stored.
     */
    YES {
      @Override
      public boolean isStored() { return true; }
    },

    /** Do not store the field value in the index. */
    NO {
      @Override
      public boolean isStored() { return false; }
    };

    public abstract boolean isStored();
  }

  /** Specifies whether and how a field should be indexed. */
  public static enum Index {

    /** Do not index the field value. This field can thus not be searched,
     * but one can still access its contents provided it is
     * {@link Field.Store stored}. */
    NO {
      @Override
      public boolean isIndexed()  { return false; }
      @Override
      public boolean isAnalyzed() { return false; }
      @Override
      public boolean omitNorms()  { return true;  }
    },

    /** Index the tokens produced by running the field's
     * value through an Analyzer.  This is useful for
     * common text. */
    ANALYZED {
      @Override
      public boolean isIndexed()  { return true;  }
      @Override
      public boolean isAnalyzed() { return true;  }
      @Override
      public boolean omitNorms()  { return false; }
    },

    /** Index the field's value without using an Analyzer, so it can be searched.
     * As no analyzer is used the value will be stored as a single term. This is
     * useful for unique Ids like product numbers.
     */
    NOT_ANALYZED {
      @Override
      public boolean isIndexed()  { return true;  }
      @Override
      public boolean isAnalyzed() { return false; }
      @Override
      public boolean omitNorms()  { return false; }
    },

    /** Expert: Index the field's value without an Analyzer,
     * and also disable the storing of norms.  Note that you
     * can also separately enable/disable norms by calling
     * {@link Field#setOmitNorms}.  No norms means that
     * index-time field and document boosting and field
     * length normalization are disabled.  The benefit is
     * less memory usage as norms take up one byte of RAM
     * per indexed field for every document in the index,
     * during searching.  Note that once you index a given
     * field <i>with</i> norms enabled, disabling norms will
     * have no effect.  In other words, for this to have the
     * above described effect on a field, all instances of
     * that field must be indexed with NOT_ANALYZED_NO_NORMS
     * from the beginning. */
    NOT_ANALYZED_NO_NORMS {
      @Override
      public boolean isIndexed()  { return true;  }
      @Override
      public boolean isAnalyzed() { return false; }
      @Override
      public boolean omitNorms()  { return true;  }
    },

    /** Expert: Index the tokens produced by running the
     * field's value through an Analyzer, and also
     * separately disable the storing of norms.  See
     * {@link #NOT_ANALYZED_NO_NORMS} for what norms are
     * and why you may want to disable them. */
    ANALYZED_NO_NORMS {
      @Override
      public boolean isIndexed()  { return true;  }
      @Override
      public boolean isAnalyzed() { return true;  }
      @Override
      public boolean omitNorms()  { return true;  }
    };

    /** Get the best representation of the index given the flags. */
    public static Index toIndex(boolean indexed, boolean analyzed) {
      return toIndex(indexed, analyzed, false);
    }

    /** Expert: Get the best representation of the index given the flags. */
    public static Index toIndex(boolean indexed, boolean analyzed, boolean omitNorms) {

      // If it is not indexed nothing else matters
      if (!indexed) {
        return Index.NO;
      }

      // typical, non-expert
      if (!omitNorms) {
        if (analyzed) {
          return Index.ANALYZED;
        }
        return Index.NOT_ANALYZED;
      }

      // Expert: Norms omitted
      if (analyzed) {
        return Index.ANALYZED_NO_NORMS;
      }
      return Index.NOT_ANALYZED_NO_NORMS;
    }

    public abstract boolean isIndexed();
    public abstract boolean isAnalyzed();
    public abstract boolean omitNorms();
  }

  /** Specifies whether and how a field should have term vectors. */
  public static enum TermVector {

    /** Do not store term vectors. */
    NO {
      @Override
      public boolean isStored()      { return false; }
      @Override
      public boolean withPositions() { return false; }
      @Override
      public boolean withOffsets()   { return false; }
    },

    /** Store the term vectors of each document. A term vector is a list
     * of the document's terms and their number of occurrences in that document. */
    YES {
      @Override
      public boolean isStored()      { return true;  }
      @Override
      public boolean withPositions() { return false; }
      @Override
      public boolean withOffsets()   { return false; }
    },

    /**
     * Store the term vector + token position information
     *
     * @see #YES
     */
    WITH_POSITIONS {
      @Override
      public boolean isStored()      { return true;  }
      @Override
      public boolean withPositions() { return true;  }
      @Override
      public boolean withOffsets()   { return false; }
    },

    /**
     * Store the term vector + Token offset information
     *
     * @see #YES
     */
    WITH_OFFSETS {
      @Override
      public boolean isStored()      { return true;  }
      @Override
      public boolean withPositions() { return false; }
      @Override
      public boolean withOffsets()   { return true;  }
    },

    /**
     * Store the term vector + Token position and offset information
     *
     * @see #YES
     * @see #WITH_POSITIONS
     * @see #WITH_OFFSETS
     */
    WITH_POSITIONS_OFFSETS {
      @Override
      public boolean isStored()      { return true;  }
      @Override
      public boolean withPositions() { return true;  }
      @Override
      public boolean withOffsets()   { return true;  }
    };

    /** Get the best representation of a TermVector given the flags. */
    public static TermVector toTermVector(boolean stored, boolean withOffsets, boolean withPositions) {

      // If it is not stored, nothing else matters.
      if (!stored) {
        return TermVector.NO;
      }

      if (withOffsets) {
        if (withPositions) {
          return Field.TermVector.WITH_POSITIONS_OFFSETS;
        }
        return Field.TermVector.WITH_OFFSETS;
      }

      if (withPositions) {
        return Field.TermVector.WITH_POSITIONS;
      }
      return Field.TermVector.YES;
    }

    public abstract boolean isStored();
    public abstract boolean withPositions();
    public abstract boolean withOffsets();
  }

  /** The value of the field as a String, or null.  If null, the Reader value or
   * binary value is used.  Exactly one of stringValue(),
   * readerValue(), and getBinaryValue() must be set. */
  public String stringValue()   { return fieldsData instanceof String ? (String) fieldsData : null; }

  /** The value of the field as a Reader, or null.  If null, the String value or
   * binary value is used.  Exactly one of stringValue(),
   * readerValue(), and getBinaryValue() must be set. */
  public Reader readerValue()   { return fieldsData instanceof Reader ? (Reader) fieldsData : null; }

  /** The TokesStream for this field to be used when indexing, or null.  If null, the Reader value
   * or String value is analyzed to produce the indexed tokens. */
  public TokenStream tokenStreamValue()   { return tokenStream; }

  /** <p>Expert: change the value of this field.  This can
   * be used during indexing to re-use a single Field
   * instance to improve indexing speed by avoiding GC cost
   * of new'ing and reclaiming Field instances.  Typically
   * a single {@link Document} instance is re-used as
   * well.  This helps most on small documents.</p>
   *
   * <p>Each Field instance should only be used once
   * within a single {@link Document} instance.  See <a
   * href="http://wiki.apache.org/lucene-java/ImproveIndexingSpeed">ImproveIndexingSpeed</a>
   * for details.</p> */
  public void setValue(String value) {
    if (isBinary) {
      throw new IllegalArgumentException("cannot set a String value on a binary field");
    }
    fieldsData = value;
  }

  /** Expert: change the value of this field.  See <a href="#setValue(java.lang.String)">setValue(String)</a>. */
  public void setValue(Reader value) {
    if (isBinary) {
      throw new IllegalArgumentException("cannot set a Reader value on a binary field");
    }
    if (isStored) {
      throw new IllegalArgumentException("cannot set a Reader value on a stored field");
    }
    fieldsData = value;
  }

  /** Expert: change the value of this field.  See <a href="#setValue(java.lang.String)">setValue(String)</a>. */
  public void setValue(byte[] value) {
    if (!isBinary) {
      throw new IllegalArgumentException("cannot set a byte[] value on a non-binary field");
    }
    fieldsData = value;
    binaryLength = value.length;
    binaryOffset = 0;
  }

  /** Expert: change the value of this field.  See <a href="#setValue(java.lang.String)">setValue(String)</a>. */
  public void setValue(byte[] value, int offset, int length) {
    if (!isBinary) {
      throw new IllegalArgumentException("cannot set a byte[] value on a non-binary field");
    }
    fieldsData = value;
    binaryLength = length;
    binaryOffset = offset;
  }

  /** Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.
   * May be combined with stored values from stringValue() or getBinaryValue() */
  public void setTokenStream(TokenStream tokenStream) {
    this.isIndexed = true;
    this.isTokenized = true;
    this.tokenStream = tokenStream;
  }

  /**
   * Create a field by specifying its name, value and how it will
   * be saved in the index. Term vectors will not be stored in the index.
   *
   * @param name The name of the field
   * @param value The string to process
   * @param store Whether <code>value</code> should be stored in the index
   * @param index Whether the field should be indexed, and if so, if it should
   *  be tokenized before indexing
   * @throws NullPointerException if name or value is <code>null</code>
   * @throws IllegalArgumentException if the field is neither stored nor indexed
   */
  public Field(String name, String value, Store store, Index index) {
    this(name, value, store, index, TermVector.NO);
  }

  /**
   * Create a field by specifying its name, value and how it will
   * be saved in the index.
   *
   * @param name The name of the field
   * @param value The string to process
   * @param store Whether <code>value</code> should be stored in the index
   * @param index Whether the field should be indexed, and if so, if it should
   *  be tokenized before indexing
   * @param termVector Whether term vector should be stored
   * @throws NullPointerException if name or value is <code>null</code>
   * @throws IllegalArgumentException in any of the following situations:
   * <ul>
   *  <li>the field is neither stored nor indexed</li>
   *  <li>the field is not indexed but termVector is <code>TermVector.YES</code></li>
   * </ul>
   */
  public Field(String name, String value, Store store, Index index, TermVector termVector) {
    this(name, true, value, store, index, termVector);
  }

  /**
   * Create a field by specifying its name, value and how it will
   * be saved in the index.
   *
   * @param name The name of the field
   * @param internName Whether to .intern() name or not
   * @param value The string to process
   * @param store Whether <code>value</code> should be stored in the index
   * @param index Whether the field should be indexed, and if so, if it should
   *  be tokenized before indexing
   * @param termVector Whether term vector should be stored
   * @throws NullPointerException if name or value is <code>null</code>
   * @throws IllegalArgumentException in any of the following situations:
   * <ul>
   *  <li>the field is neither stored nor indexed</li>
   *  <li>the field is not indexed but termVector is <code>TermVector.YES</code></li>
   * </ul>
   */
  public Field(String name, boolean internName, String value, Store store, Index index, TermVector termVector) {
    if (name == null)
      throw new NullPointerException("name cannot be null");
    if (value == null)
      throw new NullPointerException("value cannot be null");
    if (name.length() == 0 && value.length() == 0)
      throw new IllegalArgumentException("name and value cannot both be empty");
    if (index == Index.NO && store == Store.NO)
      throw new IllegalArgumentException("it doesn't make sense to have a field that "
          + "is neither indexed nor stored");
    if (index == Index.NO && termVector != TermVector.NO)
      throw new IllegalArgumentException("cannot store term vector information "
          + "for a field that is not indexed");

    if (internName) // field names are optionally interned
      name = StringHelper.intern(name);

    this.name = name;

    this.fieldsData = value;

    this.isStored = store.isStored();

    this.isIndexed = index.isIndexed();
    this.isTokenized = index.isAnalyzed();
    this.omitNorms = index.omitNorms();
    if (index == Index.NO) {
      this.omitTermFreqAndPositions = false;
    }

    this.isBinary = false;

    setStoreTermVector(termVector);
  }

  /**
   * Create a tokenized and indexed field that is not stored. Term vectors will
   * not be stored.  The Reader is read only when the Document is added to the index,
   * i.e. you may not close the Reader until {@link IndexWriter#addDocument(Document)}
   * has been called.
   *
   * @param name The name of the field
   * @param reader The reader with the content
   * @throws NullPointerException if name or reader is <code>null</code>
   */
  public Field(String name, Reader reader) {
    this(name, reader, TermVector.NO);
  }

  /**
   * Create a tokenized and indexed field that is not stored, optionally with
   * storing term vectors.  The Reader is read only when the Document is added to the index,
   * i.e. you may not close the Reader until {@link IndexWriter#addDocument(Document)}
   * has been called.
   *
   * @param name The name of the field
   * @param reader The reader with the content
   * @param termVector Whether term vector should be stored
   * @throws NullPointerException if name or reader is <code>null</code>
   */
  public Field(String name, Reader reader, TermVector termVector) {
    if (name == null)
      throw new NullPointerException("name cannot be null");
    if (reader == null)
      throw new NullPointerException("reader cannot be null");

    this.name = StringHelper.intern(name);        // field names are interned
    this.fieldsData = reader;

    this.isStored = false;

    this.isIndexed = true;
    this.isTokenized = true;

    this.isBinary = false;

    setStoreTermVector(termVector);
  }

  /**
   * Create a tokenized and indexed field that is not stored. Term vectors will
   * not be stored. This is useful for pre-analyzed fields.
   * The TokenStream is read only when the Document is added to the index,
   * i.e. you may not close the TokenStream until {@link IndexWriter#addDocument(Document)}
   * has been called.
   *
   * @param name The name of the field
   * @param tokenStream The TokenStream with the content
   * @throws NullPointerException if name or tokenStream is <code>null</code>
   */
  public Field(String name, TokenStream tokenStream) {
    this(name, tokenStream, TermVector.NO);
  }

  /**
   * Create a tokenized and indexed field that is not stored, optionally with
   * storing term vectors.  This is useful for pre-analyzed fields.
   * The TokenStream is read only when the Document is added to the index,
   * i.e. you may not close the TokenStream until {@link IndexWriter#addDocument(Document)}
   * has been called.
   *
   * @param name The name of the field
   * @param tokenStream The TokenStream with the content
   * @param termVector Whether term vector should be stored
   * @throws NullPointerException if name or tokenStream is <code>null</code>
   */
  public Field(String name, TokenStream tokenStream, TermVector termVector) {
    if (name == null)
      throw new NullPointerException("name cannot be null");
    if (tokenStream == null)
      throw new NullPointerException("tokenStream cannot be null");

    this.name = StringHelper.intern(name);        // field names are interned
    this.fieldsData = null;
    this.tokenStream = tokenStream;

    this.isStored = false;

    this.isIndexed = true;
    this.isTokenized = true;

    this.isBinary = false;

    setStoreTermVector(termVector);
  }

  /**
   * Create a stored field with binary value. Optionally the value may be compressed.
   *
   * @param name The name of the field
   * @param value The binary value
   * @param store How <code>value</code> should be stored (compressed or not)
   * @throws IllegalArgumentException if store is <code>Store.NO</code>
   */
  public Field(String name, byte[] value, Store store) {
    this(name, value, 0, value.length, store);
  }

  /**
   * Create a stored field with binary value. Optionally the value may be compressed.
   *
   * @param name The name of the field
   * @param value The binary value
   * @param offset Starting offset in value where this Field's bytes are
   * @param length Number of bytes to use for this Field, starting at offset
   * @param store How <code>value</code> should be stored (compressed or not)
   * @throws IllegalArgumentException if store is <code>Store.NO</code>
   */
  public Field(String name, byte[] value, int offset, int length, Store store) {

    if (name == null)
      throw new IllegalArgumentException("name cannot be null");
    if (value == null)
      throw new IllegalArgumentException("value cannot be null");

    this.name = StringHelper.intern(name);        // field names are interned
    fieldsData = value;

    if (store == Store.NO)
      throw new IllegalArgumentException("binary values can't be unstored");

    isStored = store.isStored();
    isIndexed = false;
    isTokenized = false;
    omitTermFreqAndPositions = false;
    omitNorms = true;

    isBinary = true;
    binaryLength = length;
    binaryOffset = offset;

    setStoreTermVector(TermVector.NO);
  }
}
```
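The `setValue()` javadoc above recommends re-using `Field` (and `Document`) instances to avoid allocation and GC cost when indexing many small documents. The sketch below shows that pattern; the `Row` record and field names are hypothetical.

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

/** Sketch: reuse one Document and its Field instances across many small documents. */
public class ReusableFieldSketch {

  /** Hypothetical record type supplying the values to index. */
  public static final class Row {
    public String id;
    public String title;
  }

  public static void index(IndexWriter writer, Iterable<Row> rows) throws Exception {
    Document doc = new Document();
    Field id = new Field("id", "", Field.Store.YES, Field.Index.NOT_ANALYZED);
    Field title = new Field("title", "", Field.Store.YES, Field.Index.ANALYZED);
    doc.add(id);
    doc.add(title);

    for (Row row : rows) {
      // setValue() swaps the payload without allocating new Field objects.
      id.setValue(row.id);
      title.setValue(row.title);
      writer.addDocument(doc);
    }
  }
}
```

Each `Field` instance is added to the `Document` once and only its value changes between `addDocument()` calls, which is exactly the usage the javadoc's ImproveIndexingSpeed link describes.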
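The Reader-based constructors create unstored, tokenized fields whose content is consumed only when `IndexWriter.addDocument()` runs, so the Reader must stay open until that call returns. A small sketch under that assumption, with a hypothetical file-indexing helper:

```java
import java.io.File;
import java.io.FileReader;
import java.io.Reader;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ReaderFieldSketch {
  public static void indexFile(IndexWriter writer, File file) throws Exception {
    Reader reader = new FileReader(file);
    try {
      Document doc = new Document();
      // Unstored, tokenized field; the Reader is only read inside addDocument().
      doc.add(new Field("contents", reader, Field.TermVector.NO));
      // Stored path so the hit can be mapped back to the file.
      doc.add(new Field("path", file.getPath(), Field.Store.YES, Field.Index.NOT_ANALYZED));
      writer.addDocument(doc);
    } finally {
      reader.close();   // safe only after addDocument() has returned
    }
  }
}
```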