StandardTokenizer

Below is the source of org.apache.lucene.analysis.standard.StandardTokenizer (from the Lucene 2.9 line, judging by the deprecation notes that reference 3.0).
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements.  See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License.  You may obtain a copy of the License at
*
*     http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.lucene.analysis.standard;

import java.io.IOException;
import java.io.Reader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.apache.lucene.util.AttributeSource;
import org.apache.lucene.util.Version;

/** A grammar-based tokenizer constructed with JFlex
*
* <p> This should be a good tokenizer for most European-language documents:
*
* <ul>
*   <li>Splits words at punctuation characters, removing punctuation. However, a
*     dot that's not followed by whitespace is considered part of a token.
*   <li>Splits words at hyphens, unless there's a number in the token, in which case
*     the whole token is interpreted as a product number and is not split.
*   <li>Recognizes email addresses and internet hostnames as one token.
* </ul>
*
* <p>Many applications have specific tokenizer needs.  If this tokenizer does
* not suit your application, please consider copying this source code
* directory to your project and maintaining your own grammar-based tokenizer.
*
* <a name="version"/>
* <p>You must specify the required {@link Version}
* compatibility when creating StandardTokenizer:
* <ul>
*   <li> As of 2.4, Tokens incorrectly identified as acronyms
*        are corrected (see <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)
* </ul>
*/

public class StandardTokenizer extends Tokenizer {
  /** A private instance of the JFlex-constructed scanner */
  private final StandardTokenizerImpl scanner;

  public static final int ALPHANUM          = 0;
  public static final int APOSTROPHE        = 1;
  public static final int ACRONYM           = 2;
  public static final int COMPANY           = 3;
  public static final int EMAIL             = 4;
  public static final int HOST              = 5;
  public static final int NUM               = 6;
  public static final int CJ                = 7;

  /**
   * @deprecated this solves a bug where HOSTs that end with '.' are identified
   *             as ACRONYMs. It is deprecated and will be removed in the next
   *             release.
   */
  public static final int ACRONYM_DEP       = 8;

  /** String token types that correspond to token type int constants */
  public static final String [] TOKEN_TYPES = new String [] {
    "<ALPHANUM>",
    "<APOSTROPHE>",
    "<ACRONYM>",
    "<COMPANY>",
    "<EMAIL>",
    "<HOST>",
    "<NUM>",
    "<CJ>",
    "<ACRONYM_DEP>"
  };

  /** @deprecated Please use {@link #TOKEN_TYPES} instead */
  public static final String [] tokenImage = TOKEN_TYPES;

  /**
   * Specifies whether deprecated acronyms should be replaced with HOST type.
   * This is false by default to support backward compatibility.
   *<p/>
   * See http://issues.apache.org/jira/browse/LUCENE-1068
   *
   * @deprecated this should be removed in the next release (3.0).
   */
  private boolean replaceInvalidAcronym;
   
  private int maxTokenLength = StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH;

  /** Set the max allowed token length.  Any token longer
   *  than this is skipped. */
  public void setMaxTokenLength(int length) {
    this.maxTokenLength = length;
  }

  /** @see #setMaxTokenLength */
  public int getMaxTokenLength() {
    return maxTokenLength;
  }

  /**
   * Creates a new instance of the {@link StandardTokenizer}. Attaches the
   * <code>input</code> to a newly created JFlex scanner.
   *
   * @deprecated Use {@link #StandardTokenizer(Version,
   * Reader)} instead
   */
  public StandardTokenizer(Reader input) {
    this(Version.LUCENE_24, input);
  }

  /**
   * Creates a new instance of the {@link org.apache.lucene.analysis.standard.StandardTokenizer}.  Attaches
   * the <code>input</code> to the newly created JFlex scanner.
   *
   * @param input The input reader
   * @param replaceInvalidAcronym Set to true to replace mischaracterized acronyms with HOST.
   *
   * See http://issues.apache.org/jira/browse/LUCENE-1068
   *
   * @deprecated Use {@link #StandardTokenizer(Version, Reader)} instead
   */
  public StandardTokenizer(Reader input, boolean replaceInvalidAcronym) {
    super();
    this.scanner = new StandardTokenizerImpl(input);
    init(input, replaceInvalidAcronym);
  }

  /**
   * Creates a new instance of the {@link org.apache.lucene.analysis.standard.StandardTokenizer}.  Attaches
   * the <code>input</code> to the newly created JFlex scanner.
   *
   * @param matchVersion Lucene version to match (see <a href="#version">above</a>)
   * @param input The input reader
   *
   * See http://issues.apache.org/jira/browse/LUCENE-1068
   */
  public StandardTokenizer(Version matchVersion, Reader input) {
    super();
    this.scanner = new StandardTokenizerImpl(input);
    init(input, matchVersion);
  }

  /**
   * Creates a new StandardTokenizer with a given {@link AttributeSource}.
   *
   * @deprecated Use {@link #StandardTokenizer(Version, AttributeSource, Reader)} instead
   */
  public StandardTokenizer(AttributeSource source, Reader input, boolean replaceInvalidAcronym) {
    super(source);
    this.scanner = new StandardTokenizerImpl(input);
    init(input, replaceInvalidAcronym);
  }

  /**
   * Creates a new StandardTokenizer with a given {@link AttributeSource}.
   */
  public StandardTokenizer(Version matchVersion, AttributeSource source, Reader input) {
    super(source);
    this.scanner = new StandardTokenizerImpl(input);
    init(input, matchVersion);
  }

  /**
   * Creates a new StandardTokenizer with a given {@link org.apache.lucene.util.AttributeSource.AttributeFactory}
   *
   * @deprecated Use {@link #StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader)} instead
   */
  public StandardTokenizer(AttributeFactory factory, Reader input, boolean replaceInvalidAcronym) {
    super(factory);
    this.scanner = new StandardTokenizerImpl(input);
    init(input, replaceInvalidAcronym);
  }

  /**
   * Creates a new StandardTokenizer with a given {@link org.apache.lucene.util.AttributeSource.AttributeFactory}
   */
  public StandardTokenizer(Version matchVersion, AttributeFactory factory, Reader input) {
    super(factory);
    this.scanner = new StandardTokenizerImpl(input);
    init(input, matchVersion);
  }

  private void init(Reader input, boolean replaceInvalidAcronym) {
    this.replaceInvalidAcronym = replaceInvalidAcronym;
    this.input = input;   
    termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    offsetAtt = (OffsetAttribute) addAttribute(OffsetAttribute.class);
    posIncrAtt = (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);
    typeAtt = (TypeAttribute) addAttribute(TypeAttribute.class);
  }

  private void init(Reader input, Version matchVersion) {
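    // LUCENE-1068 fix: for 2.4 and later, tokens mistyped as ACRONYM are
    // emitted as HOST (with the trailing '.' stripped) instead.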
    if (matchVersion.onOrAfter(Version.LUCENE_24)) {
      init(input, true);
    } else {
      init(input, false);
    }
  }
 
  // this tokenizer generates four attributes:
  // term, offset, positionIncrement and type
  private TermAttribute termAtt;
  private OffsetAttribute offsetAtt;
  private PositionIncrementAttribute posIncrAtt;
  private TypeAttribute typeAtt;

  /*
   * (non-Javadoc)
   *
   * @see org.apache.lucene.analysis.TokenStream#incrementToken()
   */
  public final boolean incrementToken() throws IOException {
    clearAttributes();
    int posIncr = 1;

    while(true) {
      int tokenType = scanner.getNextToken();

      if (tokenType == StandardTokenizerImpl.YYEOF) {
        return false;
      }

      if (scanner.yylength() <= maxTokenLength) {
        posIncrAtt.setPositionIncrement(posIncr);
        scanner.getText(termAtt);
        final int start = scanner.yychar();
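        // correctOffset() maps the scanner's offsets back through any
        // CharStream wrapping the input, so stored offsets refer to the
        // original text.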
        offsetAtt.setOffset(correctOffset(start), correctOffset(start+termAtt.termLength()));
        // This 'if' should be removed in the next release. For now, it converts
        // invalid acronyms to HOST. When removed, only the 'else' part should
        // remain.
        if (tokenType == StandardTokenizerImpl.ACRONYM_DEP) {
          if (replaceInvalidAcronym) {
            typeAtt.setType(StandardTokenizerImpl.TOKEN_TYPES[StandardTokenizerImpl.HOST]);
            termAtt.setTermLength(termAtt.termLength() - 1); // remove extra '.'
          } else {
            typeAtt.setType(StandardTokenizerImpl.TOKEN_TYPES[StandardTokenizerImpl.ACRONYM]);
          }
        } else {
          typeAtt.setType(StandardTokenizerImpl.TOKEN_TYPES[tokenType]);
        }
        return true;
      } else
        // When we skip a too-long term, we still increment the
        // position increment
        posIncr++;
    }
  }
 
  public final void end() {
    // set final offset
    int finalOffset = correctOffset(scanner.yychar() + scanner.yylength());
    offsetAtt.setOffset(finalOffset, finalOffset);
  }

  /** @deprecated Will be removed in Lucene 3.0. This method is final, as it should
   * not be overridden. Delegates to the backwards compatibility layer. */
  public final Token next(final Token reusableToken) throws IOException {
    return super.next(reusableToken);
  }

  /** @deprecated Will be removed in Lucene 3.0. This method is final, as it should
   * not be overridden. Delegates to the backwards compatibility layer. */
  public final Token next() throws IOException {
    return super.next();
  }

  /*
   * (non-Javadoc)
   *
   * @see org.apache.lucene.analysis.TokenStream#reset()
   */
  public void reset() throws IOException {
    super.reset();
    scanner.yyreset(input);
  }

  public void reset(Reader reader) throws IOException {
    super.reset(reader);
    reset();
  }

  /**
   * Prior to https://issues.apache.org/jira/browse/LUCENE-1068, StandardTokenizer mischaracterized tokens
   * like www.abc.com as acronyms when they should have been labeled as hosts instead.
   * @return true if StandardTokenizer now returns these tokens as Hosts, otherwise false
   *
   * @deprecated Remove in 3.X and make true the only valid value
   */
  public boolean isReplaceInvalidAcronym() {
    return replaceInvalidAcronym;
  }

  /**
   *
   * @param replaceInvalidAcronym Set to true to replace mischaracterized acronyms with HOST.
   * @deprecated Remove in 3.X and make true the only valid value
   *
   * See https://issues.apache.org/jira/browse/LUCENE-1068
   */
  public void setReplaceInvalidAcronym(boolean replaceInvalidAcronym) {
    this.replaceInvalidAcronym = replaceInvalidAcronym;
  }
}
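
For context, here is a minimal sketch of how a caller drives this attribute-based API. It is not part of the Lucene source above; it assumes only the JDK's StringReader plus the Lucene classes this file already imports, and the demo class name and sample text are made up for illustration.

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.apache.lucene.util.Version;

public class StandardTokenizerDemo {
  public static void main(String[] args) throws IOException {
    StandardTokenizer tokenizer = new StandardTokenizer(Version.LUCENE_24,
        new StringReader("Mail bob@example.com about the T-2000 product."));

    // addAttribute() returns the instances the tokenizer registered in
    // init(), so the same attribute objects are refilled for every token.
    TermAttribute termAtt = (TermAttribute) tokenizer.addAttribute(TermAttribute.class);
    OffsetAttribute offsetAtt = (OffsetAttribute) tokenizer.addAttribute(OffsetAttribute.class);
    TypeAttribute typeAtt = (TypeAttribute) tokenizer.addAttribute(TypeAttribute.class);

    // One line per token: term, type constant, and character offsets.
    while (tokenizer.incrementToken()) {
      System.out.println(termAtt.term() + "\t" + typeAtt.type() + "\t"
          + offsetAtt.startOffset() + "-" + offsetAtt.endOffset());
    }
    tokenizer.end();   // records the final offset
    tokenizer.close();
  }
}

Per the class javadoc, bob@example.com should come out as a single <EMAIL> token, and T-2000 should survive unsplit under the hyphen-with-number (product number) rule.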