Apache Solr solrconfig.xml Explained
- dataDir parameter:<dataDir>/var/data/solr</dataDir>
- Overrides the default location (under the Solr home directory) where all index data is stored; it may point to any directory outside the Solr home. If replication is used, the replication scripts must agree with this setting. A relative path is resolved against the current working directory of the servlet container.
- mainIndex:
- Settings in this section control how multiple index segments are merged.
- <useCompoundFile>: Packs the many internal Lucene files into a single file, reducing the number of file handles Solr keeps open, at some cost in performance. Unless the application is running out of file handles, the default of false should be sufficient.
- mergeFactor:
- Determines how often low-level Lucene segments are merged. Smaller values (minimum 2) use less memory but make indexing slower; larger values speed up indexing at the cost of more memory.
- maxBufferedDocs:
- The minimum number of documents buffered in memory before they are flushed as a new segment (a segment is the Lucene file structure that stores index data). Larger values make indexing faster but use more memory.
- maxMergeDocs:
- Controls the maximum number of Documents that Solr may merge into a single segment. Smaller values (< 10,000) are best suited to applications with many updates. Lucene is not allowed to put more documents than this into any one segment; additional documents simply go into new segments instead.
- maxFieldLength:
- Controls the maximum number of terms that can be added to a Field for a given Document; anything beyond it is truncated. Increase this value if your documents may be large, but setting it too high can cause out-of-memory errors.
- unlockOnStartup:
- unlockOnStartup tells Solr to ignore the locking mechanism that protects the index in a multi-threaded environment. An index can remain locked after an unclean shutdown or other error, blocking adds and updates; setting this to true removes any such stale lock at startup so that adds and updates can proceed.
- <mainIndex>
- <!-- lucene options specific to the main on-disk lucene index -->
- <useCompoundFile>false</useCompoundFile>
- <mergeFactor>10</mergeFactor>
- <maxBufferedDocs>1000</maxBufferedDocs>
- <maxMergeDocs>2147483647</maxMergeDocs>
- <maxFieldLength>10000</maxFieldLength>
- </mainIndex>
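The unlockOnStartup option described above does not appear in the example block; as a sketch, it would sit alongside the other <mainIndex> settings (false, the default, is the safe choice):

```xml
<mainIndex>
  <useCompoundFile>false</useCompoundFile>
  <mergeFactor>10</mergeFactor>
  <maxBufferedDocs>1000</maxBufferedDocs>
  <maxMergeDocs>2147483647</maxMergeDocs>
  <maxFieldLength>10000</maxFieldLength>
  <!-- set to true only to clear a stale lock left by an unclean shutdown -->
  <unlockOnStartup>false</unlockOnStartup>
</mainIndex>
```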
-
- updateHandler:
- The update handler controls low-level details of how internal update processing works. (Do not confuse it with the higher-level request handlers that accept update requests from clients.)
- <updateHandler class="solr.DirectUpdateHandler2">
-
- <!-- Limit the number of deletions Solr will buffer during doc updating.
-
- Setting this lower can help bound memory use during indexing.
- -->
- <maxPendingDeletes>100000</maxPendingDeletes>
- <!-- autocommit pending docs if certain criteria are met. Future versions may expand the available
- criteria -->
- <autoCommit>
- <maxDocs>10000</maxDocs> <!-- maximum uncommitted docs before autocommit is triggered -->
- <maxTime>86000</maxTime> <!-- maximum time (in MS) after adding a doc before an autocommit is triggered -->
- </autoCommit>
-
- This listener runs an external command.
- A postCommit event is fired after every commit; here it triggers the snapshooter script.
- <listener event="postCommit" class="solr.RunExecutableListener">
- <str name="exe">snapshooter</str>
- <str name="dir">solr/bin</str>
- <bool name="wait">true</bool>
- <!--
- <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
- <arr name="env"> <str>MYVAR=val1</str> </arr>
- -->
- </listener>
- exe -- the name of the executable to run
- dir -- the working directory for the command; defaults to "."
- wait -- whether the calling thread waits for the executable to return
- args -- arguments to pass to the program; defaults to none
- env -- environment variables to set; defaults to none
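Filled in, the commented-out args and env entries would look like the following sketch (the values arg1, arg2, and MYVAR=val1 are the placeholders from the comment, not meaningful settings):

```xml
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">solr/bin</str>
  <bool name="wait">true</bool>
  <!-- placeholder argument and environment values -->
  <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
  <arr name="env"> <str>MYVAR=val1</str> </arr>
</listener>
```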
-
- <query>
- <!-- Maximum number of clauses in a boolean query... can affect range
- or wildcard queries that expand to big boolean queries.
- An exception is thrown if exceeded.
- -->
- <maxBooleanClauses>1024</maxBooleanClauses>
-
- <query>:
- Controls everything related to queries.
-
- Caching: tune these settings as the index grows and changes.
-
- <!-- Cache used by SolrIndexSearcher for filters (DocSets),
- unordered sets of *all* documents that match a query.
- When a new searcher is opened, its caches may be prepopulated
- or "autowarmed" using data from caches in the old searcher.
- autowarmCount is the number of items to prepopulate.
- For an LRUCache,
- the autowarmed items will be the most recently accessed items.
- Parameters:
- class - the SolrCache implementation (currently only LRUCache)
- size - the maximum number of entries in the cache
- initialSize - the initial capacity (number of entries) of
- the cache (see java.util.HashMap)
- autowarmCount - the number of entries to prepopulate from
- an old cache.
- -->
- <filterCache
- class="solr.LRUCache"
- size="512"
- initialSize="512"
- autowarmCount="256"/>
-
- <!-- queryResultCache caches results of searches - ordered lists of
- document ids (DocList) based on a query, a sort, and the range
- of documents requested. -->
- Query result cache:
- <queryResultCache
- class="solr.LRUCache"
- size="512"
- initialSize="512"
- autowarmCount="256"/>
-
- <!-- documentCache caches Lucene Document objects (the stored fields for each document).
- Since Lucene internal document ids are transient, this cache will not be autowarmed. -->
- <documentCache
- class="solr.LRUCache"
- size="512"
- initialSize="512"
- autowarmCount="0"/>
-
- <!-- Example of a generic cache.
- These caches may be accessed by name
- through SolrIndexSearcher.getCache(), cacheLookup(), and cacheInsert().
- The purpose is to enable easy caching of user/application level data.
- If autowarming is desired, the regenerator attribute should name an
- implementation of solr.search.CacheRegenerator. -->
- <!--
- <cache name="myUserCache"
- class="solr.LRUCache"
- size="4096"
- initialSize="1024"
- autowarmCount="1024"
- regenerator="org.mycompany.mypackage.MyRegenerator"
- />
- -->
-
- <!-- An optimization that attempts to use a filter to satisfy a search.
- If the requested sort does not include score, the filterCache
- will be checked for a filter matching the query. If found, the filter
- will be used as the source of document ids, and then the sort will be
- applied to that.
- -->
- <useFilterForSortedQuery>true</useFilterForSortedQuery>
-
- <!-- An optimization for use with the queryResultCache. When a search
- is requested, a superset of the requested number of document ids
- are collected. For example, if a search for a particular query
- requests matching documents 10 through 19, and queryWindowSize is 50,
- then documents 0 through 50 will be collected and cached. Any further
- requests in that range can be satisfied via the cache.
- -->
-
- <queryResultWindowSize>50</queryResultWindowSize>
-
- <!-- This entry enables an int hash representation for filters (DocSets)
- when the number of items in the set is less than maxSize. For smaller
- sets, this representation is more memory efficient, more efficient to
- iterate over, and faster to take intersections.
- -->
- <HashDocSet maxSize="3000" loadFactor="0.75"/>
-
-
- <!-- boolToFilterOptimizer converts boolean clauses with zero boost into
- cached filters if the number of docs selected by the clause exceeds the
- threshold (represented as a fraction of the total index)
- -->
- <boolTofilterOptimizer enabled="true" cacheSize="32" threshold=".05"/>
-
- <!-- Lazy field loading will attempt to read only parts of documents on disk that are
- requested. Enabling should be faster if you aren't retrieving all stored fields.
- -->
- <enableLazyFieldLoading>false</enableLazyFieldLoading>
-
- The second file below is the example schema.xml.
- <?xml version="1.0" encoding="UTF-8" ?>
- <!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
- -->
-
- <!--
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default)
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
- -->
-
- <schema name="musicbrainz" version="1.1">
- <!-- attribute "name" is the name of this schema and is only used for display purposes.
- Applications should change this to reflect the nature of the search collection.
- version="1.1" is Solr's version number for the schema syntax and semantics. It should
- not normally be changed by applications.
- 1.0: multiValued attribute did not exist, all fields are multiValued by nature
- 1.1: multiValued attribute introduced, false by default -->
-
- <types>
- <!-- field type definitions. The "name" attribute is
- just a label to be used by field definitions. The "class"
- attribute and any other attributes determine the real
- behavior of the fieldType.
- Class names starting with "solr" refer to java classes in the
- org.apache.solr.analysis package.
- -->
-
- <!-- The StrField type is not analyzed, but indexed/stored verbatim.
- - StrField and TextField support an optional compressThreshold which
- limits compression (if enabled in the derived fields) to values which
- exceed a certain size (in characters).
- name: the name of the field type
- class: the implementing Java class
- indexed: defaults to true; the field can be searched and sorted. If a field is not indexed, stored should be true (and vice versa), or the field is useless.
- stored: defaults to true; the field value can be returned in search results.
- sortMissingLast: documents without a value in this field sort after documents that have one
- sortMissingFirst: documents without a value in this field sort before documents that have one
- omitNorms: set to true when field length should not affect scoring and no index-time boost is needed; generally left false for full-text fields.
- termVectors: set to true if the field is used for MoreLikeThis or highlighting.
- compressed: store the field compressed. This may slow indexing and searching but saves space; only StrField and TextField can be compressed, and it is usually worthwhile only for values longer than about 200 characters.
- multiValued: set to true if the field may hold more than one value.
- positionIncrementGap: used together with multiValued; the number of virtual token positions inserted between values
- -->
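Several of the attributes above can be combined in a single declaration; as a hypothetical sketch (the type name and settings are illustrative, not part of this schema):

```xml
<fieldType name="text_multi" class="solr.TextField"
           positionIncrementGap="100" sortMissingLast="true" omitNorms="false">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```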
-
- <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
-
-
- <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
-
- <!-- The optional sortMissingLast and sortMissingFirst attributes are
- currently supported on types that are sorted internally as strings.
- - If sortMissingLast="true", then a sort on this field will cause documents
- without the field to come after documents with the field,
- regardless of the requested sort order (asc or desc).
- - If sortMissingFirst="true", then a sort on this field will cause documents
- without the field to come before documents with the field,
- regardless of the requested sort order.
- - If sortMissingLast="false" and sortMissingFirst="false" (the default),
- then default lucene sorting will be used which places docs without the
- field first in an ascending sort and last in a descending sort.
- -->
-
-
- <!-- numeric field types that store and index the text
- value verbatim (and hence don't support range queries, since the
- lexicographic ordering isn't equal to the numeric ordering) -->
- <fieldType name="integer" class="solr.IntField" omitNorms="true"/>
- <fieldType name="long" class="solr.LongField" omitNorms="true"/>
- <fieldType name="float" class="solr.FloatField" omitNorms="true"/>
- <fieldType name="double" class="solr.DoubleField" omitNorms="true"/>
-
-
- <!-- Numeric field types that manipulate the value into
- a string value that isn't human-readable in its internal form,
- but with a lexicographic ordering the same as the numeric ordering,
- so that range queries work correctly. -->
- <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/>
- <fieldType name="slong" class="solr.SortableLongField" sortMissingLast="true" omitNorms="true"/>
- <fieldType name="sfloat" class="solr.SortableFloatField" sortMissingLast="true" omitNorms="true"/>
- <fieldType name="sdouble" class="solr.SortableDoubleField" sortMissingLast="true" omitNorms="true"/>
-
-
- <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
- is a more restricted form of the canonical representation of dateTime
- http://www.w3.org/TR/xmlschema-2/#dateTime
- The trailing "Z" designates UTC time and is mandatory.
- Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
- All other components are mandatory.
-
- Expressions can also be used to denote calculations that should be
- performed relative to "NOW" to determine the value, ie...
-
- NOW/HOUR
- ... Round to the start of the current hour
- NOW-1DAY
- ... Exactly 1 day prior to now
- NOW/DAY+6MONTHS+3DAYS
- ... 6 months and 3 days in the future from the start of
- the current day
-
- Consult the DateField javadocs for more information.
- -->
- <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/>
-
-
- <!-- The "RandomSortField" is not used to store or search any
- data. You can declare fields of this type in your schema
- to generate pseudo-random orderings of your docs for sorting
- purposes. The ordering is generated based on the field name
- and the version of the index. As long as the index version
- remains unchanged, and the same field name is reused,
- the ordering of the docs will be consistent.
- If you want different pseudo-random orderings of documents
- for the same version of the index, use a dynamicField and
- change the name.
- -->
- <fieldType name="random" class="solr.RandomSortField" indexed="true" />
-
- <!-- solr.TextField allows the specification of custom text analyzers
- specified as a tokenizer and a list of token filters. Different
- analyzers may be specified for indexing and querying.
-
- The optional positionIncrementGap puts space between multiple fields of
- this type on the same document, with the purpose of preventing false phrase
- matching across fields.
-
- For more info on customizing your analyzer chain, please see
- http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
- -->
-
- <!-- One can also specify an existing Analyzer class that has a
- default constructor via the class attribute on the analyzer element
- <fieldType name="text_greek" class="solr.TextField">
- <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
- </fieldType>
- -->
-
-
- <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
- <analyzer>
- <tokenizer class="solr.WhitespaceTokenizerFactory"/>
- </analyzer>
- </fieldType>
-
- <!-- A text field that uses WordDelimiterFilter to enable splitting and matching of
- words on case-change, alpha numeric boundaries, and non-alphanumeric chars,
- so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".
- Synonyms and stopwords are customized by external files, and stemming is enabled.
- Duplicate tokens at the same position (which may result from Stemmed Synonyms or
- WordDelim parts) are removed.
- -->
- <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
- <analyzer type="index">
- <tokenizer class="solr.WhitespaceTokenizerFactory"/>
- <!-- in this example, we will only use synonyms at query time
- <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
- -->
- <!-- Case insensitive stop word removal.
- enablePositionIncrements=true ensures that a 'gap' is left to
- allow for accurate phrase queries.
- -->
- <filter class="solr.StopFilterFactory"
- ignoreCase="true"
- words="stopwords.txt"
- enablePositionIncrements="true"
- />
- <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- <analyzer type="query">
- <tokenizer class="solr.WhitespaceTokenizerFactory"/>
- <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
- <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
- <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- </fieldType>
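To make the WordDelimiterFilter behavior concrete, here is a sketch (not captured Solr output) of how the index-time chain above might process a value; because catenateWords="1" at index time, both the split parts and the joined form are indexed, so queries for "wifi" or "wi fi" can match "Wi-Fi":

```
input:           "Wi-Fi enabled"
whitespace:      [Wi-Fi] [enabled]
word delimiter:  [Wi] [Fi] [WiFi] [enabled]
lowercase:       [wi] [fi] [wifi] [enabled]
```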
-
-
- <!-- Less flexible matching, but less false matches. Probably not ideal for product names,
- but may be good for SKUs. Can insert dashes in the wrong place and still match. -->
- <fieldType name="textTight" class="solr.TextField" positionIncrementGap="100" >
- <analyzer>
- <tokenizer class="solr.WhitespaceTokenizerFactory"/>
- <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
- <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
- <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- </fieldType>
-
-
- <fieldType name="title" class="solr.TextField" positionIncrementGap="100" >
- <analyzer>
- <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-
- <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
- <filter class="solr.LowerCaseFilterFactory"/>
-
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- </fieldType>
-
- <!--
- Setup simple analysis for spell checking
-
- <fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100" stored="false" >
- <analyzer>
- <tokenizer class="solr.StandardTokenizerFactory"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- </fieldType>
- -->
-
- <fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100" stored="false" multiValued="true">
- <analyzer type="index">
- <tokenizer class="solr.StandardTokenizerFactory"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
- <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
- <filter class="solr.StandardFilterFactory"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- <analyzer type="query">
- <tokenizer class="solr.StandardTokenizerFactory"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
- <filter class="solr.StandardFilterFactory"/>
- <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
- </analyzer>
- </fieldType>
-
- <fieldType name="textSpellPhrase" class="solr.TextField" positionIncrementGap="100" stored="false" multiValued="true">
- <analyzer>
- <tokenizer class="solr.KeywordTokenizerFactory"/>
- <filter class="solr.LowerCaseFilterFactory"/>
- </analyzer>
- </fieldType>
-
- <!-- This is an example of using the KeywordTokenizer along
- with various TokenFilterFactories to produce a sortable field
- that does not include some properties of the source text
- -->
- <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
- <analyzer>
- <!-- KeywordTokenizer does no actual tokenizing, so the entire
- input string is preserved as a single token
- -->
- <tokenizer class="solr.KeywordTokenizerFactory"/>
- <!-- The LowerCase TokenFilter does what you expect, which can be
- useful when you want your sorting to be case insensitive
- -->
- <filter class="solr.LowerCaseFilterFactory" />
-
- <filter class="solr.TrimFilterFactory" />
- <!-- The PatternReplaceFilter gives you the flexibility to use
- Java Regular expression to replace any sequence of characters
- matching a pattern with an arbitrary replacement string,
- which may include back references to portions of the original
- string matched by the pattern.
-
- See the Java Regular Expression documentation for more
- information on pattern and replacement string syntax.
-
- http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html
- -->
- <filter class="solr.PatternReplaceFilterFactory"
- pattern="([^a-z])" replacement="" replace="all"
- />
- </analyzer>
- </fieldType>
-
- <fieldType name="rType" class="solr.TextField" sortMissingLast="true" omitNorms="true">
- <analyzer>
- <tokenizer class="solr.KeywordTokenizerFactory"/>
- <filter class="solr.PatternReplaceFilterFactory"
- pattern="^(0|1\d\d)$" replacement="" replace="first" />
- <filter class="solr.LengthFilterFactory" min="1" max="100" />
- <filter class="solr.SynonymFilterFactory" synonyms="mb_attributes.txt" ignoreCase="false" expand="false"/>
- </analyzer>
- </fieldType>
-
- <fieldType name="rOfficial" class="solr.TextField" sortMissingLast="true" omitNorms="true">
- <analyzer>
- <tokenizer class="solr.KeywordTokenizerFactory"/>
- <filter class="solr.PatternReplaceFilterFactory"
- pattern="^(0|\d\d?)$" replacement="" replace="first" />
- <filter class="solr.LengthFilterFactory" min="1" max="100" />
- <filter class="solr.SynonymFilterFactory" synonyms="mb_attributes.txt" ignoreCase="false" expand="false"/>
- </analyzer>
- </fieldType>
-
- <fieldType name="bucketFirstLetter" class="solr.TextField" sortMissingLast="true" omitNorms="true">
- <analyzer type="index">
- <tokenizer class="solr.PatternTokenizerFactory" pattern="^([a-zA-Z]).*" group="1" />
- <filter class="solr.SynonymFilterFactory" synonyms="mb_letterBuckets.txt" ignoreCase="true" expand="false"/>
- </analyzer>
- <analyzer type="query">
- <tokenizer class="solr.KeywordTokenizerFactory"/>
- </analyzer>
- </fieldType>
-
- <!-- since fields of this type are by default not stored or indexed, any data added to
- them will be ignored outright
- -->
- <fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" />
-
- </types>
-
-
- <fields>
- <!-- Valid attributes for fields:
- name: mandatory - the name for the field
- type: mandatory - the name of a previously defined type from the <types> section
- indexed: true if this field should be indexed (searchable or sortable)
- stored: true if this field should be retrievable
- compressed: [false] if this field should be stored using gzip compression
- (this will only apply if the field type is compressable; among
- the standard field types, only TextField and StrField are)
- multiValued: true if this field may contain multiple values per document
- omitNorms: (expert) set to true to omit the norms associated with
- this field (this disables length normalization and index-time
- boosting for the field, and saves some memory). Only full-text
- fields or fields that need an index-time boost need norms.
- termVectors: [false] set to true to store the term vector for a given field.
- When using MoreLikeThis, fields used for similarity should be stored for
- best performance.
- name: the field name.
- type: the field's type, defined in the <types> section.
- default: a default value used when a document does not supply one; commonly used to record the indexing time.
- required: when true, Solr rejects any document that lacks a value for this field.
- -->
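As an illustration of default and required together, a common pattern (this timestamp field is not part of the schema below, just an example):

```xml
<field name="timestamp" type="date" indexed="true" stored="true"
       default="NOW" multiValued="false"/>
```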
-
- <field name="id" type="string" required="true" />
-
- <field name="type" type="string" required="true" />
-
-
-
- <field name="a_name" type="title" />
- <field name="a_name_sort" type="string" stored="false" />
- <field name="a_alias" type="title" stored="false" multiValued="true" />
- <field name="a_type" type="string" />
- <field name="a_begin_date" type="date" />
- <field name="a_end_date" type="date" />
- <field name="a_member_name" type="title" multiValued="true" />
- <field name="a_member_id" type="title" multiValued="true" />
- <field name="a_release_date_latest" type="date" />
-
-
- <field name="a_spell" type="textSpell" />
- <field name="a_spellPhrase" type="textSpellPhrase" />
-
-
-
- <field name="r_name" type="title" />
- <field name="r_name_sort" type="alphaOnlySort" stored="false"/>
- <field name="r_name_facetLetter" type="bucketFirstLetter" stored="false" />
-
- <field name="r_a_name" type="title" />
- <field name="r_a_id" type="string" />
-
-
- <field name="r_attributes" type="integer" multiValued="true" indexed="false" />
- <field name="r_type" type="rType" multiValued="true" stored="false"/>
- <field name="r_official" type="rOfficial" multiValued="true" stored="false"/>
-
- <field name="r_lang" type="string" indexed="false" />
- <field name="r_tracks" type="sint" indexed="false" />
- <field name="r_event_country" type="string" multiValued="true" />
- <field name="r_event_date" type="date" multiValued="true" />
- <field name="r_event_date_earliest" type="date" multiValued="false" />
-
-
-
- <field name="l_name" type="title" />
- <field name="l_name_sort" type="string" stored="false" />
- <field name="l_type" type="string" />
- <field name="l_begin_date" type="date" />
- <field name="l_end_date" type="date" />
-
-
-
- <field name="t_name" type="title" />
- <field name="t_duration" type="sint"/>
- <field name="t_a_id" type="string" />
- <field name="t_a_name" type="title" />
- <field name="t_num" type="integer" indexed="false" />
- <field name="t_r_id" type="string" />
- <field name="t_r_name" type="title" />
- <field name="t_r_attributes" multiValued="true" type="integer" />
- <field name="t_r_tracks" type="sint" />
- <field name="t_trm_lookups" type="sint" />
-
-
- <field name="word" type="ignored" />
- <field name="includes" type="ignored" />
-
- </fields>
-
- <!-- Field to use to determine and enforce document uniqueness.
- Unless this field is marked with required="false", it will be a required field
- (the field that uniquely identifies each document)
- -->
- <uniqueKey>id</uniqueKey>
-
- <!--
- field for the QueryParser to use when an explicit fieldname is absent
- (the default search field)
- -->
-
- <defaultSearchField>a_name</defaultSearchField>
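With a_name as the default field, a bare query term is searched against artist names; as a sketch (the handler path and query term are illustrative):

```
/select?q=Beatles            parsed as a_name:Beatles
/select?q=a_name:Beatles     same query with the field named explicitly
```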
-
-
-
-
- <!-- copyField commands copy one field to another at the time a document
- is added to the index. It's used either to index the same field differently,
- or to add multiple fields to the same field for easier/faster searching. -->
- <copyField source="a_name" dest="a_spell" />
- <copyField source="a_alias" dest="a_spell" />
- <copyField source="a_name" dest="a_spellPhrase" />
- <copyField source="a_alias" dest="a_spellPhrase" />
-
- <copyField source="r_name" dest="r_name_sort" />
- <copyField source="r_name" dest="r_name_facetLetter" />
- <copyField source="r_attributes" dest="r_type" />
- <copyField source="r_attributes" dest="r_official" />
-
-
-
- <!-- Similarity is the scoring routine for each document vs. a query.
- A custom similarity may be specified here, but the default is fine
- for most applications. -->
-
- <!-- ... OR ...
- Specify a SimilarityFactory class name implementation
- allowing parameters to be used.
- -->
- <!--
- <similarity class="com.example.solr.CustomSimilarityFactory">
- <str name="paramkey">param value</str>
- </similarity>
- -->
-
- </schema>