1. Lucene Delete Function
/**
 * Delete documents from the index.
 */
public void delete() {
    IndexWriter writer = null;
    try {
        Directory dir = FSDirectory.open(new File("E:/LuceneIndex"));
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        // The parameter is a selector: either a Query or a Term.
        // A Query is a set of conditions (e.g. id like %1%);
        // a Term is one exact condition (e.g. name = 1).
        writer.deleteDocuments(new Term("name", "FileItemIterator.java"));
    } catch (CorruptIndexException e) {
        e.printStackTrace();
    } catch (LockObtainFailedException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (writer != null) {
            try {
                writer.close();
            } catch (CorruptIndexException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
2. Lucene's index "recycle bin"
1) Like the Windows Recycle Bin, Lucene does not destroy deleted documents immediately.
2) Once a document is deleted, queries no longer return it.
3) But we can roll the deletions back whenever we want. The deletions are recorded in files named like _*_*.del.
3. We can use IndexReader to get the number of deleted documents
/**
 * Inspect document counts in the index.
 *
 * @throws CorruptIndexException
 * @throws IOException
 */
public void search() throws CorruptIndexException, IOException {
    IndexReader reader = IndexReader.open(dir);
    // The reader exposes the document counts:
    System.out.println("numDocs = " + reader.numDocs());
    System.out.println("maxDocs = " + reader.maxDoc());
    System.out.println("deleteDocs = " + reader.numDeletedDocs());
    reader.close();
}
4. We can use IndexReader to restore deleted documents
/**
 * Undelete: restore all deleted documents.
 */
public void undelete() {
    IndexReader reader = null;
    try {
        // param 1: the directory
        // param 2: readOnly -- must be false for undeleteAll() to work
        reader = IndexReader.open(dir, false);
        reader.undeleteAll();
    } catch (CorruptIndexException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Comments:
1) To recover deleted documents, we must open the reader with readOnly set to false, because by default it is true.
2) After the undelete operation, the .del files are gone and the documents they marked are visible in the index again.
5. How do we empty the recycle bin? (Purge the deletions recorded in the .del files)
1) Before Lucene 3.5, this was done with writer.optimize(). It is now deprecated, because every optimize rewrites all the index files, which is very expensive.
2) From Lucene 3.5 on, writer.forceMerge() replaces writer.optimize(). It does the same work and is just as expensive.
3) So instead we can call writer.forceMergeDeletes(), which purges only the deleted documents and is much cheaper.
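The semantics of sections 2-5 can be sketched in plain Java, without Lucene: deletes only mark documents, searches skip the marks, undeleteAll clears them, and forceMergeDeletes makes them permanent. This is a toy analogy of the behaviour, not the Lucene API; all names here (ToyIndex etc.) are invented for illustration.

```java
import java.util.*;

/** Toy model of Lucene's "recycle bin" semantics. Illustrative only. */
class ToyIndex {
    private final List<String> docs = new ArrayList<>();
    private final Set<Integer> deleted = new HashSet<>();

    void add(String doc) { docs.add(doc); }

    // Like writer.deleteDocuments(term): mark, don't remove.
    void delete(String doc) {
        for (int i = 0; i < docs.size(); i++)
            if (docs.get(i).equals(doc)) deleted.add(i);
    }

    // Like a query: marked documents are invisible.
    List<String> search() {
        List<String> hits = new ArrayList<>();
        for (int i = 0; i < docs.size(); i++)
            if (!deleted.contains(i)) hits.add(docs.get(i));
        return hits;
    }

    int maxDoc() { return docs.size(); }            // like reader.maxDoc()
    int numDeletedDocs() { return deleted.size(); } // like reader.numDeletedDocs()

    void undeleteAll() { deleted.clear(); }         // like reader.undeleteAll()

    // Like writer.forceMergeDeletes(): deletions become permanent.
    void forceMergeDeletes() {
        List<String> kept = search();
        docs.clear();
        docs.addAll(kept);
        deleted.clear();
    }
}
```

The key point the sketch captures: until forceMergeDeletes() runs, a deleted document is hidden from searches but still physically present, so undeleteAll() can bring it back; afterwards it is gone for good.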
6. About index file redundancy:
1) Every time we execute buildIndex(), another group of segment files is written to the index directory.
2) As the number of runs grows, the index directory gets larger and larger, so it is tempting to force the index files to be rewritten.
3) But the explicit rewrite operation is deprecated, because Lucene maintains and merges these files for us automatically.
4) We can still merge segments manually:
/**
 * Merge segments manually.
 */
public void merge() {
    IndexWriter writer = null;
    try {
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        // Lucene merges the index down to at most two segments,
        // discarding deleted documents along the way. Forced merging
        // is discouraged since Lucene 3.5; in normal use, let Lucene
        // schedule merges itself.
        writer.forceMerge(2);
    } catch (CorruptIndexException e) {
        e.printStackTrace();
    } catch (LockObtainFailedException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (writer != null) {
            try {
                writer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
7. How do we delete all existing index data before each index build?
/**
* Create Index
*
* @throws IOException
* @throws LockObtainFailedException
* @throws CorruptIndexException
*/
public void buildIndex() throws CorruptIndexException,
LockObtainFailedException, IOException
{
// 2. Create IndexWriter
// --> It is used to write data into index files
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35,
new SimpleAnalyzer(Version.LUCENE_35));
IndexWriter writer = new IndexWriter(dir, config);
// deleteAll() removes every document from the index.
writer.deleteAll();
// Before 3.5, the (now deprecated) way to create an IndexWriter was:
// new IndexWriter(Directory d, Analyzer a, boolean c, MaxFieldLength mfl);
// d: the directory, a: the analyzer,
// c: whether to create a new index each time,
// mfl: the max length of each field to be indexed.
// 3. Create Document
// --> The target we want to search may be a doc file or a table in DB.
// --> The path, name, size and modified date of the file.
// --> All the information of the file should be stored in the Document.
Document doc = null;
// 4. Each Item of The Document is Called a Field.
// --> The relationship of document and field is like table and cell.
// Eg. We want to build index for all the txt file in the c:/lucene dir.
// So each txt file in this dir is called a document.
// And the name, size, modified date, content is called a field.
File files = new File("E:/LuceneData");
for (File file : files.listFiles())
{
doc = new Document();
// With a Reader-based field, the content is indexed but not stored:
// doc.add(new Field("content", new FileReader(file)));
// To store the content in the index file, we have to read the
// content into a String first.
String content = FileUtils.readFileToString(file);
doc.add(new Field("content", content, Field.Store.YES,
Field.Index.ANALYZED));
doc.add(new Field("name", file.getName(), Field.Store.YES,
Field.Index.NOT_ANALYZED));
// Field.Store.YES --> the field's value is stored in the index file
// Field.Index.ANALYZED --> the field's value is tokenized by the analyzer
doc.add(new Field("path", file.getAbsolutePath(), Field.Store.YES,
Field.Index.NOT_ANALYZED));
// 5. Create Index File for Target Document by IndexWriter.
writer.addDocument(doc);
}
// 6. Close Index Writer
if (null != writer)
{
writer.close();
}
}
Comments: writer.deleteAll() --> deletes every document in the index.
8. How do we update the index?
/**
 * Update a document.
 */
public void update() {
    IndexWriter writer = null;
    try {
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        Document doc = new Document();
        doc.add(new Field("id", "1", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(new Field("name", "Yang", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("password", "Kunlun", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("gender", "Male", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("score", 110 + "", Field.Store.YES, Field.Index.NOT_ANALYZED));
        /*
         * Lucene has no true in-place update. updateDocument() is
         * delete + add: first it deletes the documents that match the
         * term, then it adds the new document passed in.
         */
        writer.updateDocument(new Term("name", "Davy"), doc);
    } catch (CorruptIndexException e) {
        e.printStackTrace();
    } catch (LockObtainFailedException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (writer != null) {
            try {
                writer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Summary:
1. Delete: use writer.deleteAll() or writer.deleteDocuments(new Term(key, value)); purge the deletions with writer.forceMergeDeletes() (writer.optimize() is the deprecated pre-3.5 equivalent).
2. Recover: use reader.undeleteAll() to restore all deleted documents (the reader must be opened with readOnly = false).
3. Update: use writer.updateDocument(new Term(key, value), doc); it deletes the documents matching the term and adds the passed-in doc.
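The "update = delete + add" rule in point 3 can also be sketched in plain Java. This is a conceptual analogy, not the Lucene API; the class and method names are invented for illustration.

```java
import java.util.*;

/** Toy model of writer.updateDocument(term, doc): remove every document
 *  whose field matches the term, then append the new document. */
class ToyUpdater {
    private final List<Map<String, String>> docs = new ArrayList<>();

    void add(Map<String, String> doc) { docs.add(doc); }

    // update = delete(term) + add(doc)
    void update(String field, String value, Map<String, String> newDoc) {
        docs.removeIf(d -> value.equals(d.get(field))); // the "delete" step
        docs.add(newDoc);                               // the "add" step
    }

    int count() { return docs.size(); }
    List<Map<String, String>> all() { return docs; }
}
```

One consequence the sketch makes visible: if the term matches nothing, the "delete" step is a no-op and the update degenerates into a plain add, so the document count grows.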