Creating an Index and Querying with Lucene 5.5.0


Index creation and querying differ slightly between Lucene versions; the latest release at the moment is 5.5.0. After reading the Demo that ships with the Lucene source, I wrote the following index-creation and query code for Lucene 5.5.0. The source code is below.
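Before the full file-based classes, here is a minimal in-memory sketch of the whole round trip, assuming the lucene-core, lucene-analyzers-common and lucene-queryparser 5.5.0 jars are on the classpath. The RAMDirectory, the class name and the sample text are illustrative only and not part of the original post:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class MiniDemo {

    public static void main(String[] args) throws Exception {
        // Index a single document into an in-memory directory.
        Directory dir = new RAMDirectory();
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer));
        Document doc = new Document();
        doc.add(new TextField("contents", "an error occurred in module A", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        // Search the same directory for the term "error".
        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        Query query = new QueryParser("contents", analyzer).parse("error");
        for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("contents"));
        }
        reader.close();
    }
}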


IndexFiles.java

package com.tongtongxue.lucene;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Date;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.LongField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class IndexFiles {

    public static void main(String[] args) {
        String indexPath = "E:\\lucene\\02\\index";
        String docsPath = "E:\\lucene\\02\\content";
        boolean create = true;

        Path docDir = Paths.get(docsPath);
        if (!Files.isReadable(docDir)) {
            System.out.println("Document directory '" +docDir.toAbsolutePath()+ "' does not exist or is not readable, please check the path");
            System.exit(1);
        }

        Date start = new Date();
        try {
            System.out.println("Indexing to directory '" + indexPath + "'...");

            Directory dir = FSDirectory.open(Paths.get(indexPath));
            Analyzer analyzer = new StandardAnalyzer();
            IndexWriterConfig iwc = new IndexWriterConfig(analyzer);

            if (create) {
                // Create a new index in the directory, removing any
                // previously indexed documents:
                iwc.setOpenMode(OpenMode.CREATE);
            } else {
                // Add new documents to an existing index:
                iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
            }

            IndexWriter writer = new IndexWriter(dir, iwc);
            indexDocs(writer, docDir);

            // NOTE: if you want to maximize search performance,
            // you can optionally call forceMerge here.  This can be
            // a terribly costly operation, so generally it's only
            // worth it when your index is relatively static (ie
            // you're done adding documents to it):
            //
            // writer.forceMerge(1);

            writer.close();

            Date end = new Date();
            System.out.println(end.getTime() - start.getTime() + " total milliseconds");
        } catch (Exception e) {
            System.out.println(" caught a " + e.getClass() +
                    "\n with message: " + e.getMessage());
        }
    }

    static void indexDocs(final IndexWriter writer, Path path) throws IOException {
        if (Files.isDirectory(path)) {
            Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                    try {
                        indexDoc(writer, file, attrs.lastModifiedTime().toMillis());
                    } catch (IOException ignore) {
                        // don't index files that can't be read.
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        } else {
            indexDoc(writer, path, Files.getLastModifiedTime(path).toMillis());
        }
    }

    static void indexDoc(IndexWriter writer, Path file, long lastModified) throws IOException {
        try (InputStream stream = Files.newInputStream(file)) {
            // make a new, empty document
            Document doc = new Document();

            // Add the path of the file as a field named "path".  Use a
            // field that is indexed (i.e. searchable), but don't tokenize
            // the field into separate words and don't index term frequency
            // or positional information:
            Field pathField = new StringField("path", file.toString(), Field.Store.YES);
            doc.add(pathField);

            // Add the last modified date of the file as a field named "modified".
            // Use a LongField that is indexed (i.e. efficiently filterable with
            // NumericRangeFilter).  This indexes to millisecond resolution, which
            // is often too fine.  You could instead create a number based on
            // year/month/day/hour/minutes/seconds, down to the resolution you require.
            // For example the long value 2011021714 would mean
            // February 17, 2011, 2-3 PM.
            doc.add(new LongField("modified", lastModified, Field.Store.NO));

            // Add the contents of the file to a field named "contents".  Specify a Reader,
            // so that the text of the file is tokenized and indexed, but not stored.
            // Note that the stream is decoded as UTF-8; if that is not the file's actual
            // encoding, searching for special characters will fail.
            doc.add(new TextField("contents", new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_8))));

            if (writer.getConfig().getOpenMode() == OpenMode.CREATE) {
                // New index, so we just add the document (no old document can be there):
                System.out.println("adding " + file);
                writer.addDocument(doc);
            } else {
                // Existing index (an old copy of this document may have been indexed) so
                // we use updateDocument instead to replace the old one matching the exact
                // path, if present:
                System.out.println("updating " + file);
                writer.updateDocument(new Term("path", file.toString()), doc);
            }
        }
    }
}
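The comment on the "modified" field above notes that a LongField is efficiently filterable by numeric range. As a hedged side note (not part of the original post), a range query over that field in Lucene 5.5.0 could look roughly like the sketch below; the 24-hour cutoff and the class name are made up for illustration:

import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.FSDirectory;

public class ModifiedRangeSearch {

    public static void main(String[] args) throws Exception {
        IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("E:\\lucene\\02\\index")));
        IndexSearcher searcher = new IndexSearcher(reader);

        // Match documents whose "modified" timestamp falls within the last 24 hours.
        long now = System.currentTimeMillis();
        Query query = NumericRangeQuery.newLongRange("modified", now - 24L * 60 * 60 * 1000, now, true, true);

        // "path" is stored by IndexFiles, so it can be read back from each hit.
        for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("path"));
        }
        reader.close();
    }
}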


SearchFiles.java


package com.tongtongxue.lucene;

import java.io.IOException;
import java.nio.file.Paths;
import java.util.Date;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class SearchFiles {

    public static void main(String[] args) throws Exception {

        String index = "E:\\lucene\\02\\index";
        String field = "contents";
        String queryString = "error";
        int hitsPerPage = 10;

        IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(index)));
        IndexSearcher searcher = new IndexSearcher(reader);
        Analyzer analyzer = new StandardAnalyzer();

        QueryParser parser = new QueryParser(field, analyzer);

        Query query = parser.parse(queryString);
        System.out.println("Searching for: " + query.toString(field));

        Date start = new Date();
        doPagingSearch(searcher, query, hitsPerPage);
        Date end = new Date();
        System.out.println("Time: "+(end.getTime()-start.getTime())+"ms");

        reader.close();
    }

    public static void doPagingSearch(IndexSearcher searcher, Query query, int hitsPerPage) throws IOException {
        TopDocs results = searcher.search(query, 5 * hitsPerPage);
        ScoreDoc[] hits = results.scoreDocs;

        if (hits != null && hits.length > 0) {
            for (ScoreDoc hit : hits) {
                Document hitDoc = searcher.doc(hit.doc);
                System.out.println(hitDoc.get("path"));
            }
        }
    }
}
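doPagingSearch above fetches the first 5 * hitsPerPage hits in a single call and prints them all. For true page-by-page retrieval, IndexSearcher.searchAfter can continue from the last hit of the previous page. The method below is a rough sketch of that idea (not from the original post); it could be dropped into the SearchFiles class, reusing its imports:

    // Rough paging sketch: fetch and print results one page at a time with searchAfter.
    public static void pageThroughResults(IndexSearcher searcher, Query query, int hitsPerPage) throws IOException {
        ScoreDoc lastHit = null;  // last hit of the previous page; null means first page
        while (true) {
            TopDocs page = (lastHit == null)
                    ? searcher.search(query, hitsPerPage)
                    : searcher.searchAfter(lastHit, query, hitsPerPage);
            if (page.scoreDocs.length == 0) {
                break;  // no more results
            }
            for (ScoreDoc hit : page.scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("path"));
            }
            lastHit = page.scoreDocs[page.scoreDocs.length - 1];
        }
    }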


Original article: http://www.tongtongxue.com/archives/832.html
