```java
package org.apache.lucene.document;

/**
 * Copyright 2004 The Apache Software Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.index.FieldInvertState; // for javadocs
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.spans.SpanQuery;

import java.io.Reader;
import java.io.Serializable;

/**
 * Synonymous with {@link Field}.
 *
 * <p><bold>WARNING</bold>: This interface may change within minor versions,
 * despite Lucene's backward compatibility requirements. This means new methods
 * may be added from version to version. This change only affects the Fieldable
 * API; other backwards compatibility promises remain intact. For example,
 * Lucene can still read and write indices created within the same major version.
 * </p>
 **/
public interface Fieldable extends Serializable {

  /** Sets the boost factor hits on this field. This value will be
   * multiplied into the score of all hits on this this field of this
   * document.
   *
   * <p>The boost is multiplied by {@link org.apache.lucene.document.Document#getBoost()} of the document
   * containing this field. If a document has multiple fields with the same
   * name, all such values are multiplied together. This product is then
   * used to compute the norm factor for the field. By
   * default, in the {@link
   * org.apache.lucene.search.Similarity#computeNorm(String,
   * FieldInvertState)} method, the boost value is multiplied
   * by the {@link
   * org.apache.lucene.search.Similarity#lengthNorm(String,
   * int)} and then rounded by {@link org.apache.lucene.search.Similarity#encodeNorm(float)}
   * before it is stored in the index. One should attempt to ensure that this
   * product does not overflow the range of that encoding.
   *
   * @see org.apache.lucene.document.Document#setBoost(float)
   * @see org.apache.lucene.search.Similarity#computeNorm(String, FieldInvertState)
   * @see org.apache.lucene.search.Similarity#encodeNorm(float)
   */
  void setBoost(float boost);

  /** Returns the boost factor for hits for this field.
   *
   * <p>The default value is 1.0.
   *
   * <p>Note: this value is not stored directly with the document in the index.
   * Documents returned from {@link org.apache.lucene.index.IndexReader#document(int)} and
   * {@link org.apache.lucene.search.Searcher#doc(int)} may thus not have the same value present as when
   * this field was indexed.
   *
   * @see #setBoost(float)
   */
  float getBoost();

  /** Returns the name of the field as an interned string.
   * For example "date", "title", "body", ...
   */
  String name();

  /** The value of the field as a String, or null.
   * <p>
   * For indexing, if isStored()==true, the stringValue() will be used as the stored field value
   * unless isBinary()==true, in which case getBinaryValue() will be used.
   *
   * If isIndexed()==true and isTokenized()==false, this String value will be indexed as a single token.
   * If isIndexed()==true and isTokenized()==true, then tokenStreamValue() will be used to generate indexed tokens if not null,
   * else readerValue() will be used to generate indexed tokens if not null, else stringValue() will be used to generate tokens.
   */
  public String stringValue();

  /** The value of the field as a Reader, which can be used at index time to generate indexed tokens.
   * @see #stringValue()
   */
  public Reader readerValue();

  /** The TokenStream for this field to be used when indexing, or null.
   * @see #stringValue()
   */
  public TokenStream tokenStreamValue();

  /** True if the value of the field is to be stored in the index for return
      with search hits. */
  boolean isStored();

  /** True if the value of the field is to be indexed, so that it may be
      searched on. */
  boolean isIndexed();

  /** True if the value of the field should be tokenized as text prior to
      indexing. Un-tokenized fields are indexed as a single word
      and may not be Reader-valued. */
  boolean isTokenized();

  /** True if the term or terms used to index this field are stored as a term
   *  vector, available from {@link org.apache.lucene.index.IndexReader#getTermFreqVector(int,String)}.
   *  These methods do not provide access to the original content of the field,
   *  only to terms used to index it. If the original content must be
   *  preserved, use the <code>stored</code> attribute instead.
   *
   * @see org.apache.lucene.index.IndexReader#getTermFreqVector(int, String)
   */
  boolean isTermVectorStored();

  /**
   * True if terms are stored as term vector together with their offsets
   * (start and end positon in source text).
   */
  boolean isStoreOffsetWithTermVector();

  /**
   * True if terms are stored as term vector together with their token positions.
   */
  boolean isStorePositionWithTermVector();

  /** True if the value of the field is stored as binary */
  boolean isBinary();

  /** True if norms are omitted for this indexed field */
  boolean getOmitNorms();

  /** Expert:
   *
   * If set, omit normalization factors associated with this indexed field.
   * This effectively disables indexing boosts and length normalization for this field.
   */
  void setOmitNorms(boolean omitNorms);

  /**
   * Indicates whether a Field is Lazy or not. The semantics of Lazy loading are such that if a Field is lazily loaded, retrieving
   * it's values via {@link #stringValue()} or {@link #getBinaryValue()} is only valid as long as the {@link org.apache.lucene.index.IndexReader} that
   * retrieved the {@link Document} is still open.
   *
   * @return true if this field can be loaded lazily
   */
  boolean isLazy();

  /**
   * Returns offset into byte[] segment that is used as value, if Field is not binary
   * returned value is undefined
   * @return index of the first character in byte[] segment that represents this Field value
   */
  abstract int getBinaryOffset();

  /**
   * Returns length of byte[] segment that is used as value, if Field is not binary
   * returned value is undefined
   * @return length of byte[] segment that represents this Field value
   */
  abstract int getBinaryLength();

  /**
   * Return the raw byte[] for the binary field. Note that
   * you must also call {@link #getBinaryLength} and {@link
   * #getBinaryOffset} to know which range of bytes in this
   * returned array belong to the field.
   * @return reference to the Field value as byte[].
   */
  abstract byte[] getBinaryValue();

  /**
   * Return the raw byte[] for the binary field. Note that
   * you must also call {@link #getBinaryLength} and {@link
   * #getBinaryOffset} to know which range of bytes in this
   * returned array belong to the field.<p>
   * About reuse: if you pass in the result byte[] and it is
   * used, likely the underlying implementation will hold
   * onto this byte[] and return it in future calls to
   * {@link #getBinaryValue()}.
   * So if you subsequently re-use the same byte[] elsewhere
   * it will alter this Fieldable's value.
   * @param result  User defined buffer that will be used if
   *  possible. If this is null or not large enough, a new
   *  buffer is allocated
   * @return reference to the Field value as byte[].
   */
  abstract byte[] getBinaryValue(byte[] result);

  /** @see #setOmitTermFreqAndPositions */
  boolean getOmitTermFreqAndPositions();

  /** Expert:
   *
   * If set, omit term freq, positions and payloads from
   * postings for this field.
   *
   * <p><b>NOTE</b>: While this option reduces storage space
   * required in the index, it also means any query
   * requiring positional information, such as {@link
   * PhraseQuery} or {@link SpanQuery} subclasses will
   * silently fail to find results.
   */
  void setOmitTermFreqAndPositions(boolean omitTermFreqAndPositions);
}
```
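For context, `Fieldable` is the contract implemented by `AbstractField` and the concrete `Field` class, and the boolean accessors above (`isStored()`, `isIndexed()`, `isTokenized()`, ...) simply report how a field was constructed. The following is a minimal sketch, not part of the original source, assuming `lucene-core-3.0.x` is on the classpath; the class name, field names, and values are made up for illustration.

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.Fieldable;

public class FieldableDemo {
  public static void main(String[] args) {
    Document doc = new Document();

    // Stored, analyzed text field: isStored()==true, isIndexed()==true, isTokenized()==true.
    Field body = new Field("body", "Lucene in Action", Field.Store.YES, Field.Index.ANALYZED);
    body.setBoost(1.5f); // folded into the field's norm at index time (see the setBoost javadoc above)
    doc.add(body);

    // Stored, un-analyzed keyword field: indexed as a single token (isTokenized()==false).
    Field id = new Field("id", "DOC-42", Field.Store.YES, Field.Index.NOT_ANALYZED);
    id.setOmitTermFreqAndPositions(true); // smaller postings, but phrase/span queries on "id" find nothing
    doc.add(id);

    // Document.getFields() exposes the added fields through the Fieldable interface.
    for (Fieldable f : doc.getFields()) {
      System.out.println(f.name()
          + " stored=" + f.isStored()
          + " indexed=" + f.isIndexed()
          + " tokenized=" + f.isTokenized()
          + " boost=" + f.getBoost());
    }
  }
}
```

As the `setOmitTermFreqAndPositions` javadoc warns, dropping term frequencies and positions trades index size against the ability to run positional (phrase or span) queries on that field.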