Lucene ships with analyzer packages for a great many languages, covering most of the world's major language groups. I (散仙) have recently been working on multilingual analysis, mainly for Spanish, Portuguese, German, French, and Italian. These languages are all structurally close to English: every one of them delimits words with spaces.
First, what do lemmatization and stemming contribute to search? Before answering, consider the two concepts:
Lemmatization reduces a word from any inflected form to its dictionary form, a form that carries complete meaning on its own. Stemming extracts a word's stem or root form, which does not necessarily carry complete meaning by itself. Both are important techniques for normalizing word forms, and both effectively merge variants of the same word; they are related but distinct. (The original post links to a longer article for the details.)
In e-commerce search, stemming and singular/plural folding matter a great deal, mainly for nouns, because they directly affect both precision and recall. What goes wrong if the analyzer leaves these forms alone? Consider the following example.
Sentence: i have two cats
If the analyzer does nothing at all, the index contains the token cats but not cat. A search for cat then returns no hits; only a search for cats matches the document, even though cat and cats name the same thing in different forms. Left unhandled, this hurts both precision and recall and degrades the search experience, which is why a stemming step is essential in many analysis pipelines.
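To make this concrete, here is a minimal sketch, assuming Lucene 4.3 on the classpath, that runs the sentence through EnglishAnalyzer, whose pipeline ends in a Porter stemmer. The class name, the field name "f", and the sample text are illustrative only:
Java code:
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class StemDemo {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_43);
    TokenStream ts = analyzer.tokenStream("f", new StringReader("i have two cats"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // "cats" is indexed as "cat", so a query for "cat" now matches
      System.out.println(term.toString());
    }
    ts.end();
    ts.close();
  }
}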
In this post I will walk through the source to see how stemming is done in the German analyzer. First, here is how such an analyzer is declared:
Java code:
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.de.GermanAnalyzer;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

List<String> list = new ArrayList<String>();
list.add("player"); // words in this set are never stemmed or lemmatized
CharArraySet ar = new CharArraySet(Version.LUCENE_43, list, true);
// second constructor argument: the stopword set (null means no stopwords are removed);
// third argument: words excluded from stemming and plural folding
GermanAnalyzer sa = new GermanAnalyzer(Version.LUCENE_43, null, ar);
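Continuing from that declaration, a short driver — a sketch of my own rather than part of the original post — prints what the analyzer emits; the field name "content" and the sample sentence are illustrative:
Java code:
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

TokenStream ts = sa.tokenStream("content", new StringReader("ich habe zwei häuser"));
CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
ts.reset();
while (ts.incrementToken()) {
  // "häuser" should come out as "haus"; a term in the exclusion set, like "player", would pass through unchanged
  System.out.println(term.toString());
}
ts.end();
ts.close();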
Next, let's see exactly which filter stages the German analyzer chains together:
Java code:
@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
  // standard tokenization
  final Tokenizer source = new StandardTokenizer(matchVersion, reader);
  TokenStream result = new StandardFilter(matchVersion, source);
  // lowercasing
  result = new LowerCaseFilter(matchVersion, result);
  // stopword removal
  result = new StopFilter(matchVersion, result, stopwords);
  // mark excluded words as keywords so the stemmers skip them
  result = new SetKeywordMarkerFilter(result, exclusionSet);
  if (matchVersion.onOrAfter(Version.LUCENE_36)) {
    // from Lucene 3.6 on, the following filters are used:
    // normalization: map special German characters to plain Latin equivalents
    result = new GermanNormalizationFilter(result);
    // light stemming
    result = new GermanLightStemFilter(result);
  } else if (matchVersion.onOrAfter(Version.LUCENE_31)) {
    // from Lucene 3.1 up to (but not including) 3.6, a SnowballFilter is used
    result = new SnowballFilter(result, new German2Stemmer());
  } else {
    // before Lucene 3.1, the legacy GermanStemFilter is used
    result = new GermanStemFilter(result);
  }
  return new TokenStreamComponents(source, result);
}
So the source shows that Lucene 4.x keeps backward-compatible code paths for German analysis. From here on we focus on what the 4.x path does, namely the two filters:
result = new GermanNormalizationFilter(result);
result = new GermanLightStemFilter(result);
Here is what each class does:
Java code:
package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under the Apache License,
 * Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0.
 * (Full license header trimmed here for brevity.)
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.StemmerUtil;

/**
 * Normalizes German characters according to the heuristics
 * of the <a href="http://snowball.tartarus.org/algorithms/german2/stemmer.html">
 * German2 snowball algorithm</a>.
 * It allows for the fact that ä, ö and ü are sometimes written as ae, oe and ue.
 *
 * <li> 'ß' is replaced by 'ss'
 * <li> 'ä', 'ö', 'ü' are replaced by 'a', 'o', 'u', respectively.
 * <li> 'ae' and 'oe' are replaced by 'a', and 'o', respectively.
 * <li> 'ue' is replaced by 'u', when not following a vowel or q.
 * <p>
 * This is useful if you want this normalization without using
 * the German2 stemmer, or perhaps no stemming at all.
 * [Author's note: as the javadoc explains, this filter maps special German
 * letters to their plain Latin equivalents.]
 */
public final class GermanNormalizationFilter extends TokenFilter {
  // FSM with 3 states:
  private static final int N = 0; /* ordinary state */
  private static final int V = 1; /* stops 'u' from entering umlaut state */
  private static final int U = 2; /* umlaut state, allows e-deletion */

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public GermanNormalizationFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      int state = N;
      char buffer[] = termAtt.buffer();
      int length = termAtt.length();
      for (int i = 0; i < length; i++) {
        final char c = buffer[i];
        switch(c) {
          case 'a':
          case 'o':
            state = U;
            break;
          case 'u':
            state = (state == N) ? U : V;
            break;
          case 'e':
            if (state == U)
              length = StemmerUtil.delete(buffer, i--, length);
            state = V;
            break;
          case 'i':
          case 'q':
          case 'y':
            state = V;
            break;
          case 'ä':
            buffer[i] = 'a';
            state = V;
            break;
          case 'ö':
            buffer[i] = 'o';
            state = V;
            break;
          case 'ü':
            buffer[i] = 'u';
            state = V;
            break;
          case 'ß':
            buffer[i++] = 's';
            buffer = termAtt.resizeBuffer(1+length);
            if (i < length)
              System.arraycopy(buffer, i, buffer, i+1, (length-i));
            buffer[i] = 's';
            length++;
            state = N;
            break;
          default:
            state = N;
        }
      }
      termAtt.setLength(length);
      return true;
    } else {
      return false;
    }
  }
}
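To watch the state machine work, a quick sketch of my own (not from the original post) pushes a few single-term inputs through the filter, using KeywordTokenizer so each whole input becomes one token:
Java code:
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.de.GermanNormalizationFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NormalizeDemo {
  static String normalize(String word) throws Exception {
    Tokenizer tok = new KeywordTokenizer(new StringReader(word));
    TokenStream ts = new GermanNormalizationFilter(tok);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    ts.incrementToken();
    String out = term.toString();
    ts.end();
    ts.close();
    return out;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(normalize("mueller")); // "muller": 'u' enters the umlaut state, so the following 'e' is deleted
    System.out.println(normalize("baue"));    // "baue": 'u' follows a vowel, the guard state keeps the 'e'
    System.out.println(normalize("große"));   // "grosse": 'ß' expands to "ss"
    System.out.println(normalize("schön"));   // "schon": 'ö' maps to 'o'
  }
}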
Java code:
package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under the Apache License,
 * Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0.
 * (Full license header trimmed here for brevity.)
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

/**
 * A {@link TokenFilter} that applies {@link GermanLightStemmer} to stem German
 * words.
 * <p>
 * To prevent terms from being stemmed use an instance of
 * {@link SetKeywordMarkerFilter} or a custom {@link TokenFilter} that sets
 * the {@link KeywordAttribute} before this {@link TokenStream}.
 * [Author's note: this class only delegates the stemming; the interesting
 * logic lives in GermanLightStemmer, examined below.]
 */
public final class GermanLightStemFilter extends TokenFilter {
  private final GermanLightStemmer stemmer = new GermanLightStemmer();
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAttr = addAttribute(KeywordAttribute.class);

  public GermanLightStemFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      if (!keywordAttr.isKeyword()) { // terms marked as keywords are left untouched
        final int newlen = stemmer.stem(termAtt.buffer(), termAtt.length());
        termAtt.setLength(newlen);
      }
      return true;
    } else {
      return false;
    }
  }
}
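The isKeyword() check above is the hook through which the analyzer's exclusionSet takes effect: SetKeywordMarkerFilter runs earlier in the chain and flags the protected terms. A hand-wired sketch of that mechanism (my own, with an illustrative protected term):
Java code:
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.de.GermanLightStemFilter;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

public class KeywordDemo {
  public static void main(String[] args) throws Exception {
    CharArraySet protect = new CharArraySet(Version.LUCENE_43, 1, true);
    protect.add("hauser"); // hypothetical protected term
    Tokenizer tok = new KeywordTokenizer(new StringReader("hauser"));
    TokenStream ts = new GermanLightStemFilter(new SetKeywordMarkerFilter(tok, protect));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    ts.incrementToken();
    System.out.println(term.toString()); // "hauser": without the marker it would be stemmed to "haus"
    ts.end();
    ts.close();
  }
}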
Now let's look at how GermanLightStemmer itself extracts the stem. The source:
Java code:
package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under the Apache License,
 * Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0.
 * (Full license header trimmed here for brevity.)
 */

/*
 * This algorithm is updated based on code located at:
 * http://members.unine.ch/jacques.savoy/clef/
 *
 * Copyright (c) 2005, Jacques Savoy. All rights reserved.
 * (BSD-style license; full text trimmed here for brevity.)
 */

/**
 * Light Stemmer for German.
 * <p>
 * This stemmer implements the "UniNE" algorithm in:
 * <i>Light Stemming Approaches for the French, Portuguese, German and Hungarian Languages</i>
 * Jacques Savoy
 */
public class GermanLightStemmer {

  // fold accented vowels to their base forms, then run the two suffix-stripping steps
  public int stem(char s[], int len) {
    for (int i = 0; i < len; i++)
      switch(s[i]) {
        case 'ä':
        case 'à':
        case 'á':
        case 'â': s[i] = 'a'; break;
        case 'ö':
        case 'ò':
        case 'ó':
        case 'ô': s[i] = 'o'; break;
        case 'ï':
        case 'ì':
        case 'í':
        case 'î': s[i] = 'i'; break;
        case 'ü':
        case 'ù':
        case 'ú':
        case 'û': s[i] = 'u'; break;
      }

    len = step1(s, len);
    return step2(s, len);
  }

  // consonants that may legitimately precede an 's'/'st' ending
  private boolean stEnding(char ch) {
    switch(ch) {
      case 'b':
      case 'd':
      case 'f':
      case 'g':
      case 'h':
      case 'k':
      case 'l':
      case 'm':
      case 'n':
      case 't': return true;
      default: return false;
    }
  }

  // step 1: strip -ern, -em/-en/-er/-es, -e, and consonant+s endings
  private int step1(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 'r' && s[len-1] == 'n')
      return len - 3;

    if (len > 4 && s[len-2] == 'e')
      switch(s[len-1]) {
        case 'm':
        case 'n':
        case 'r':
        case 's': return len - 2;
      }

    if (len > 3 && s[len-1] == 'e')
      return len - 1;

    if (len > 3 && s[len-1] == 's' && stEnding(s[len-2]))
      return len - 1;

    return len;
  }

  // step 2: strip -est, -er/-en, and consonant+st endings
  private int step2(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 's' && s[len-1] == 't')
      return len - 3;

    if (len > 4 && s[len-2] == 'e' && (s[len-1] == 'r' || s[len-1] == 'n'))
      return len - 2;

    if (len > 4 && s[len-2] == 's' && s[len-1] == 't' && stEnding(s[len-3]))
      return len - 2;

    return len;
  }
}
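Tracing a few words through these rules makes them concrete. A small sketch of my own, calling the public stem method directly:
Java code:
import org.apache.lucene.analysis.de.GermanLightStemmer;

public class LightStemDemo {
  static String stem(String word) {
    char[] buf = word.toCharArray();
    int len = new GermanLightStemmer().stem(buf, buf.length);
    return new String(buf, 0, len);
  }

  public static void main(String[] args) {
    System.out.println(stem("häuser"));   // "haus": ä folds to a, then step1 strips -er
    System.out.println(stem("jahren"));   // "jahr": step1 strips -en
    System.out.println(stem("kleinste")); // "klein": step1 strips -e, step2 strips -st after n
    System.out.println(stem("katze"));    // "katz": step1 strips -e
    System.out.println(stem("katzen"));   // "katz": singular and plural converge on the same stem
  }
}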
Summed up rule by rule, the behavior is:
0. Replace special German characters with their plain equivalents (GermanNormalizationFilter).
1. Fold accented stem vowels to plain a, o, i, u (the loop at the top of stem).
step1 (rules tried in order; the first match returns):
2. Words longer than 5 characters ending in -ern: drop the last three characters.
3. Words longer than 4 characters ending in -em, -en, -er, or -es: drop the last two characters.
4. Words longer than 3 characters ending in -e: drop the -e.
5. Words longer than 3 characters ending in -bs, -ds, -fs, -gs, -hs, -ks, -ls, -ms, -ns, or -ts: drop the -s.
step2 (again tried in order; the first match returns):
6. Words longer than 5 characters ending in -est: drop the last three characters.
7. Words longer than 4 characters ending in -er or -en: drop the last two characters.
8. Words longer than 4 characters ending in -bst, -dst, -fst, -gst, -hst, -kst, -lst, -mst, -nst, or -tst: drop the trailing -st.
Finally, judging from the code together with material found online, the rules for the -er, -en, -e, and -s endings mainly handle singular/plural folding, while the remaining rules strip inflectional suffixes from non-noun forms.