Topic: Building a Search Engine with Open-Source Components
Posted: 2007-06-22
In the course of building a small-to-medium search engine I used four open-source Java components: JSpider, HTMLParser, Lucene, and IKAnalyzer. Let me go through them one by one. Lucene is famous enough to need no introduction; my engine is a refactoring of the demo that ships with it. JSpider, as the name suggests, is a web crawler written in Java. HTMLParser parses HTML pages; I use it because the HTML parser bundled with the Lucene demo is not robust enough. IKAnalyzer is a Chinese word-segmentation analyzer built for Lucene, and in my experience it works well. I'll get to the actual code in later posts.
Posted: 2007-06-22
To continue: I used JSpider to crawl roughly 200 MB of pages, starting from sina.com.cn and 163.com, as the raw material for the engine, and with that in hand I set off down the search-engine road.
Posted: 2007-06-22
The remaining work is to build the index and then serve queries against it.
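Jumping ahead a little, the query side can stay very thin. Here is a minimal sketch against the Lucene 2.1 API, not the exact code I shipped: it assumes the index directory (c://index1) and the field names ("contents", "path", "title") produced by the demo indexer shown further down, and it must use the same MIK_CAnalyzer at query time that was used at index time.

import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.mira.lucene.analysis.MIK_CAnalyzer;

/** Minimal query-side sketch: search the "contents" field of an existing index. */
public class SearchDemo {
  public static void main(String[] args) throws Exception {
    IndexSearcher searcher = new IndexSearcher("c://index1");              // index built below
    QueryParser parser = new QueryParser("contents", new MIK_CAnalyzer()); // same analyzer as at index time
    Query query = parser.parse(args[0]);       // query string from the command line
    Hits hits = searcher.search(query);        // Lucene 2.x Hits API
    for (int i = 0; i < hits.length(); i++) {
      Document doc = hits.doc(i);
      System.out.println(doc.get("path") + "  " + doc.get("title"));
    }
    searcher.close();
  }
}

A servlet or JSP that wraps this same handful of calls is essentially all the "query service" needs to be.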
Posted: 2007-06-22
Building the index

First I studied lucene-2.1.0\src\demo\org\apache\lucene\demo\IndexHTML.java. It is a good working example, and a light refactoring was enough to make it usable for my purposes.
Posted: 2007-06-22
import org.mira.lucene.analysis.MIK_CAnalyzer;

import org.apache.lucene.demo.HTMLDocument;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;

import java.io.File;
import java.util.Arrays;
import java.util.Date;

/** Indexer for HTML files, adapted from the Lucene 2.1 demo. */
public class IndexHTML {
  private IndexHTML() {}

  private static boolean deleting = false;  // true during deletion pass
  private static IndexReader reader;        // existing index
  private static IndexWriter writer;        // new index being built
  private static TermEnum uidIter;          // document id iterator

  public static void main(String[] argv) {
    try {
      // The demo's -create/-index command-line parsing has been stripped;
      // the crawl directory and index directory are hardcoded instead.
      boolean create = true;
      File root = new File("c://tes1");     // crawled pages
      String index = "c://index1";          // index directory

      Date start = new Date();

      if (!create) {                        // delete stale docs
        deleting = true;
        indexDocs(root, index, create);
      }

      // Swap in IKAnalyzer's MIK_CAnalyzer in place of the demo's StandardAnalyzer.
      writer = new IndexWriter(index, new MIK_CAnalyzer(), create);
      writer.setMaxBufferedDocs(30);
      writer.setMergeFactor(100);
      writer.setMaxFieldLength(1000000);

      indexDocs(root, index, create);       // add new docs

      System.out.println("Optimizing index...");
      writer.optimize();
      writer.close();

      Date end = new Date();
      System.out.print(end.getTime() - start.getTime());
      System.out.println(" total milliseconds");
    } catch (Exception e) {
      System.out.println(" caught a " + e.getClass() +
                         "\n with message: " + e.getMessage());
    }
  }

  /* Walk the directory hierarchy in uid order, keeping the uid iterator from
     the existing index in sync.  Mismatches indicate one of: (a) old documents
     to be deleted; (b) unchanged documents, to be left alone; or (c) new
     documents, to be indexed. */
  private static void indexDocs(File file, String index, boolean create)
      throws Exception {
    if (!create) {                                  // incrementally update
      reader = IndexReader.open(index);             // open existing index
      uidIter = reader.terms(new Term("uid", ""));  // init uid iterator

      indexDocs(file);

      if (deleting) {                               // delete rest of stale docs
        while (uidIter.term() != null && uidIter.term().field() == "uid") {
          System.out.println("deleting " +
              HTMLDocument.uid2url(uidIter.term().text()));
          reader.deleteDocuments(uidIter.term());
          uidIter.next();
        }
        deleting = false;
      }

      uidIter.close();                              // close uid iterator
      reader.close();                               // close existing index
    } else {                                        // no existing index
      indexDocs(file);
    }
  }

  private static void indexDocs(File file) throws Exception {
    if (file.isDirectory()) {                       // if a directory
      String[] files = file.list();                 // list its files
      Arrays.sort(files);                           // sort the files
      for (int i = 0; i < files.length; i++)        // recursively index them
        indexDocs(new File(file, files[i]));
    } else if (file.getPath().endsWith(".html") ||  // index .html files
               file.getPath().endsWith(".htm") ||   // index .htm files
               file.getPath().endsWith(".txt")) {   // index .txt files
      if (uidIter != null) {
        String uid = HTMLDocument.uid(file);        // construct uid for doc

        while (uidIter.term() != null && uidIter.term().field() == "uid" &&
               uidIter.term().text().compareTo(uid) < 0) {
          if (deleting) {                           // delete stale docs
            System.out.println("deleting " +
                HTMLDocument.uid2url(uidIter.term().text()));
            reader.deleteDocuments(uidIter.term());
          }
          uidIter.next();
        }
        if (uidIter.term() != null && uidIter.term().field() == "uid" &&
            uidIter.term().text().compareTo(uid) == 0) {
          uidIter.next();                           // keep matching docs
        } else if (!deleting) {                     // add new docs
          Document doc = HTMLDocument.Document(file);
          System.out.println("adding " + doc.get("path"));
          writer.addDocument(doc);
        }
      } else {                                      // creating a new index
        Document doc = HTMLDocument.Document(file);
        System.out.println("adding " + doc.get("path"));
        writer.addDocument(doc);                    // add docs unconditionally
      }
    }
  }
}
Posted: 2007-06-22
As the source shows, Chinese word segmentation is already wired in (the MIK_CAnalyzer). The key work of indexing an HTML page, however, is done by the HTMLDocument class, and that is the part that needs to be refactored.
Posted: 2007-06-22
This is where htmlparser steps into the spotlight.
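I won't paste my refactored HTMLDocument in full, but the idea is roughly the sketch below (an illustration, not my exact class; the name HtmlParserDocument and the exact field options are choices made here for the example): use htmlparser's Parser and TextExtractingVisitor to pull the body text and the title out of a page, then build the same Lucene fields the demo uses.

import java.io.File;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

import org.htmlparser.Parser;
import org.htmlparser.filters.TagNameFilter;
import org.htmlparser.util.NodeList;
import org.htmlparser.visitors.TextExtractingVisitor;

/** Illustrative replacement for the demo's HTMLDocument, built on htmlparser. */
public class HtmlParserDocument {

  /** Build a Lucene Document from an HTML file, mirroring the demo's field names. */
  public static Document document(File file) throws Exception {
    // First pass: extract the visible body text.
    Parser parser = new Parser(file.getAbsolutePath());
    TextExtractingVisitor visitor = new TextExtractingVisitor();
    parser.visitAllNodesWith(visitor);
    String contents = visitor.getExtractedText();

    // Second pass: extract the <title> text, if the page has one.
    String title = "";
    Parser titleParser = new Parser(file.getAbsolutePath());
    NodeList titles = titleParser.extractAllNodesThatMatch(new TagNameFilter("title"));
    if (titles.size() > 0) {
      title = titles.elementAt(0).toPlainTextString().trim();
    }

    Document doc = new Document();
    doc.add(new Field("path", file.getPath(), Field.Store.YES, Field.Index.UN_TOKENIZED));
    doc.add(new Field("title", title, Field.Store.YES, Field.Index.TOKENIZED));
    doc.add(new Field("contents", contents, Field.Store.NO, Field.Index.TOKENIZED));
    return doc;
  }
}

The demo's real HTMLDocument also adds "uid" and "modified" fields, which the incremental-update path of IndexHTML relies on; since the indexer above always rebuilds the index from scratch (create = true), the sketch leaves them out.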
Posted: 2007-06-25
Are you saying that jspider can only fetch HTML URLs? Is that right? I wonder how Heritrix compares with it.