Solrj is already a powerful solr client. It wraps httpClient and lets you interact with solr in a fully object-oriented way: small, simple, and effective.
In practice, though, when building a SolrQuery with several search conditions and sort rules, we often end up concatenating strings, which is ugly, contrary to object-oriented design, and offers almost no extensibility. To address this, I wrote a small utility: you only populate a search object and hand it to the back end.
For example, if our solr service supports searching on some ten fields and we want to search a few of them, we simply pass in a POJO with the desired values set on the corresponding fields.
For example, a class like this:
package org.uppower.tnt.biz.core.manager.blog.dataobject;

/**
 * @author yingmu
 * @version 2010-7-20 01:00:55 PM
 */
public class SolrPropertyDO {
    private String auction_id;
    private String opt_tag;
    private String exp_tag;
    private String title;
    private String desc;
    private String brand;
    private String category;
    private String price;
    private String add_prov;
    private String add_city;
    private String quality;
    private String flag;
    private String sales;
    private String sellerrate;
    private String selleruid;
    private String ipv15;

    public String getAuction_id() {
        return auction_id;
    }
    public void setAuction_id(String auctionId) {
        auction_id = auctionId;
    }
    ……
    public String getExp_tag() {
        return exp_tag;
    }
    public void setExp_tag(String expTag) {
        exp_tag = expTag;
    }
}
The search object is then populated like this:
SolrPropertyDO propertyDO = new SolrPropertyDO();
propertyDO.setAdd_city("(杭州AND成都)OR北京");
propertyDO.setTitle("丝绸OR剪刀");
……
Sort conditions are set the same way:
SolrPropertyDO compositorDO = new SolrPropertyDO();
compositorDO.setPrice("desc");
compositorDO.setQuality("asc");
……
Then simply hand the two populated objects to the interface below.
The method querySolrResult takes four parameters: the search-field object and the sort-condition object, plus startIndex and pageSize to support limit-style paging.
The method querySolrResultCount exists purely to get the hit count, for use alongside paging.
The interface definition:
package org.uppower.tnt.biz.core.manager.blog;

import java.util.List;

import org.uppower.tnt.biz.core.manager.isearch.dataobject.SolrPropertyDO;

/**
 * @author yingmu
 * @version 2010-7-20 03:51:15 PM
 */
public interface SolrjOperator {
    /**
     * Get search results.
     *
     * @param propertyDO
     * @param compositorDO
     * @param startIndex
     * @param pageSize
     * @return
     * @throws Exception
     */
    public List<Object> querySolrResult(Object propertyDO,
            Object compositorDO, Long startIndex, Long pageSize)
            throws Exception;

    /**
     * Get the number of search results.
     *
     * @param propertyDO
     * @param compositorDO
     * @return
     * @throws Exception
     */
    public Long querySolrResultCount(SolrPropertyDO propertyDO,
            Object compositorDO) throws Exception;
}
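Since the interface exposes limit-style paging through startIndex and pageSize, the caller typically derives startIndex from a page number. A minimal sketch (the helper below is an illustration, not part of the original code):

```java
// Derive the startIndex parameter of querySolrResult from a 1-based page number.
public class Paging {
    // First row index of a page, limit-style: (page - 1) * pageSize.
    public static long startIndex(long page, long pageSize) {
        return (page - 1) * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(startIndex(3, 20)); // prints 40
        // which would then be passed on, e.g.:
        // operator.querySolrResult(propertyDO, compositorDO, startIndex(3, 20), 20L);
    }
}
```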
The implementation works as follows: the two entity objects are first parsed into <K,V> Maps, and those maps are handed to the actual solrj search. The returned objects are SolrDocument instances from the solrj API, and the result count is taken straight from SolrDocumentList.getNumFound().
The concrete implementation class:
package org.uppower.tnt.biz.core.manager.blog;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.solr.common.SolrDocumentList;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.uppower.tnt.biz.core.manager.isearch.common.SolrjCommonUtil;
import org.uppower.tnt.biz.core.manager.isearch.dataobject.SolrPropertyDO;
import org.uppower.tnt.biz.core.manager.isearch.solrj.SolrjQuery;

/**
 * @author yingmu
 * @version 2010-7-20 03:51:15 PM
 */
public class DefaultSolrOperator implements SolrjOperator {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    private SolrjQuery solrjQuery;

    public void setSolrjQuery(SolrjQuery solrjQuery) {
        this.solrjQuery = solrjQuery;
    }

    @Override
    public List<Object> querySolrResult(Object propertyDO,
            Object compositorDO, Long startIndex, Long pageSize)
            throws Exception {
        Map<String, String> propertyMap = new TreeMap<String, String>();
        // sort order matters, so use a TreeMap
        Map<String, String> compositorMap = new TreeMap<String, String>();
        try {
            propertyMap = SolrjCommonUtil.getSearchProperty(propertyDO);
            compositorMap = SolrjCommonUtil.getSearchProperty(compositorDO);
        } catch (Exception e) {
            logger.error("SolrjCommonUtil.getSearchProperty() is error !" + e);
        }
        SolrDocumentList solrDocumentList = solrjQuery.query(propertyMap, compositorMap,
                startIndex, pageSize);
        List<Object> resultList = new ArrayList<Object>();
        for (int i = 0; i < solrDocumentList.size(); i++) {
            resultList.add(solrDocumentList.get(i));
        }
        return resultList;
    }

    @Override
    public Long querySolrResultCount(SolrPropertyDO propertyDO,
            Object compositorDO) throws Exception {
        Map<String, String> propertyMap = new TreeMap<String, String>();
        Map<String, String> compositorMap = new TreeMap<String, String>();
        try {
            propertyMap = SolrjCommonUtil.getSearchProperty(propertyDO);
            compositorMap = SolrjCommonUtil.getSearchProperty(compositorDO);
        } catch (Exception e) {
            logger.error("SolrjCommonUtil.getSearchProperty() is error !" + e);
        }
        SolrDocumentList solrDocument = solrjQuery.query(propertyMap, compositorMap,
                null, null);
        return solrDocument.getNumFound();
    }
}
Object parsing relies on reflection: every non-null property of the entity is mapped into a Map. The sort object is parsed into a TreeMap to preserve its ordering.
The shared parsing utility:
package org.uppower.tnt.biz.core.manager.blog.common;

import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

/**
 * @author yingmu
 * @version 2010-7-20 01:07:15 PM
 */
public class SolrjCommonUtil {

    public static Map<String, String> getSearchProperty(Object model)
            throws NoSuchMethodException, IllegalAccessException,
            IllegalArgumentException, InvocationTargetException {
        Map<String, String> resultMap = new TreeMap<String, String>();
        // get all declared fields of the entity class
        Field[] field = model.getClass().getDeclaredFields();
        for (int i = 0; i < field.length; i++) { // iterate over all fields
            String name = field[i].getName(); // field name
            // field type
            String type = field[i].getGenericType().toString();
            if (type.equals("class java.lang.String")) { // for a class type the string is "class " followed by the class name
                Method m = model.getClass().getMethod(
                        "get" + UpperCaseField(name));
                String value = (String) m.invoke(model); // call the getter to read the value
                if (value != null) {
                    resultMap.put(name, value);
                }
            }
        }
        return resultMap;
    }

    // upper-case the first letter of a field name
    private static String UpperCaseField(String fieldName) {
        fieldName = fieldName.replaceFirst(fieldName.substring(0, 1), fieldName
                .substring(0, 1).toUpperCase());
        return fieldName;
    }
}
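The reflection-to-map conversion described above can be sketched in isolation. The tiny Item POJO below is a stand-in for SolrPropertyDO (an invented example, not from the original):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

// Self-contained sketch of the reflection-to-map idea used by SolrjCommonUtil.
public class ReflectionMapDemo {
    public static class Item {
        private String title = "silk";
        private String brand;          // null values are skipped
        public String getTitle() { return title; }
        public String getBrand() { return brand; }
    }

    // Collect every non-null String property into an ordered map.
    public static Map<String, String> toMap(Object model) throws Exception {
        Map<String, String> result = new TreeMap<String, String>();
        for (Field f : model.getClass().getDeclaredFields()) {
            if (f.getType() == String.class) {
                String name = f.getName();
                Method getter = model.getClass().getMethod(
                        "get" + name.substring(0, 1).toUpperCase() + name.substring(1));
                String value = (String) getter.invoke(model);
                if (value != null) {
                    result.put(name, value);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toMap(new Item())); // prints {title=silk}
    }
}
```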
The search itself calls the solr client solrj directly: the basic logic is to loop over the two parsed TreeMaps, set their entries on a SolrQuery, and finally call the solrj API to get the results, which are returned as a List<Object>.
The implementation:
package org.uppower.tnt.biz.core.manager.blog.solrj;

import java.net.MalformedURLException;
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * @author yingmu
 * @version 2010-7-20 02:57:04 PM
 */
public class SolrjQuery {
    private String url;
    private Integer soTimeOut;
    private Integer connectionTimeOut;
    private Integer maxConnectionsPerHost;
    private Integer maxTotalConnections;
    private Integer maxRetries;
    private CommonsHttpSolrServer solrServer = null;
    private final static String ASC = "asc";

    public void init() throws MalformedURLException {
        solrServer = new CommonsHttpSolrServer(url);
        solrServer.setSoTimeout(soTimeOut);
        solrServer.setConnectionTimeout(connectionTimeOut);
        solrServer.setDefaultMaxConnectionsPerHost(maxConnectionsPerHost);
        solrServer.setMaxTotalConnections(maxTotalConnections);
        solrServer.setFollowRedirects(false);
        solrServer.setAllowCompression(true);
        solrServer.setMaxRetries(maxRetries);
    }

    public SolrDocumentList query(Map<String, String> propertyMap,
            Map<String, String> compositorMap, Long startIndex, Long pageSize)
            throws Exception {
        SolrQuery query = new SolrQuery();
        // set search fields; all conditions are combined with AND
        // (the original called setQuery() once per field, so only the
        // last field actually took effect)
        if (null == propertyMap) {
            throw new Exception("Search fields must not be empty!");
        } else {
            StringBuffer sb = new StringBuffer();
            for (Map.Entry<String, String> entry : propertyMap.entrySet()) {
                if (sb.length() > 0) {
                    sb.append(" AND ");
                }
                sb.append(entry.getKey()).append(":").append(entry.getValue());
            }
            query.setQuery(addBlank2Expression(sb.toString()));
        }
        // set sort conditions
        if (null != compositorMap) {
            for (Map.Entry<String, String> entry : compositorMap.entrySet()) {
                if (ASC.equals(entry.getValue())) {
                    query.addSortField(entry.getKey(), SolrQuery.ORDER.asc);
                } else {
                    query.addSortField(entry.getKey(), SolrQuery.ORDER.desc);
                }
            }
        }
        if (null != startIndex) {
            query.setStart(Integer.parseInt(String.valueOf(startIndex)));
        }
        if (null != pageSize && 0L != pageSize.longValue()) {
            query.setRows(Integer.parseInt(String.valueOf(pageSize)));
        }
        try {
            QueryResponse qrsp = solrServer.query(query);
            SolrDocumentList docs = qrsp.getResults();
            return docs;
        } catch (Exception e) {
            throw new Exception(e);
        }
    }

    // surround the boolean operators with spaces so Lucene parses them
    private String addBlank2Expression(String oldExpression) {
        return oldExpression.replace("AND", " AND ").replace("NOT",
                " NOT ").replace("OR", " OR ");
    }

    public Integer getMaxRetries() {
        return maxRetries;
    }
    ……
    public void setMaxTotalConnections(Integer maxTotalConnections) {
        this.maxTotalConnections = maxTotalConnections;
    }
}
The whole thing is built on Spring: SolrjQuery's init() method is invoked when the Spring container starts, the properties used inside init() are injected directly, and the layers are likewise wired together by injection. The configuration itself is omitted here.
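The omitted Spring wiring might look roughly like the following sketch; the bean ids, the init-method hook, and the property values are assumptions based on the setters shown above, not taken from the original:

```xml
<!-- hypothetical wiring; property names follow the setters shown above -->
<bean id="solrjQuery" class="org.uppower.tnt.biz.core.manager.isearch.solrj.SolrjQuery"
      init-method="init">
    <property name="url" value="http://localhost:8080/solr"/>
    <property name="soTimeOut" value="1000"/>
    <property name="connectionTimeOut" value="100"/>
    <property name="maxConnectionsPerHost" value="100"/>
    <property name="maxTotalConnections" value="100"/>
    <property name="maxRetries" value="1"/>
</bean>

<bean id="solrjOperator" class="org.uppower.tnt.biz.core.manager.blog.DefaultSolrOperator">
    <property name="solrjQuery" ref="solrjQuery"/>
</bean>
```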
The code is crude, but it supports nearly every search condition you might want to set, exposes nothing Solr-specific to the calling layer, and lets you set conditions almost as if you were thinking in SQL.
http://www.iteye.com/topic/315330
A similar, self-contained variant of the same idea:

package org.nstcrm.person.util;

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

public class SolrHttpServer {
    //private Logger logger = LoggerFactory.getLogger(this.getClass());
    private final static String URL = "http://localhost:8080/solr";
    private final static Integer SOCKET_TIMEOUT = 1000; // socket read timeout
    private final static Integer CONN_TIMEOUT = 100;
    private final static Integer MAXCONN_DEFAULT = 100;
    private final static Integer MAXCONN_TOTAL = 100;
    private final static Integer MAXRETRIES = 1;
    private static CommonsHttpSolrServer server = null;
    private final static String ASC = "asc";

    public void init() throws MalformedURLException {
        server = new CommonsHttpSolrServer(URL);
        //server.setParser(new XMLResponseParser());
        server.setSoTimeout(SOCKET_TIMEOUT);
        server.setConnectionTimeout(CONN_TIMEOUT);
        server.setDefaultMaxConnectionsPerHost(MAXCONN_DEFAULT);
        server.setMaxTotalConnections(MAXCONN_TOTAL);
        server.setFollowRedirects(false);
        server.setAllowCompression(true);
        server.setMaxRetries(MAXRETRIES);
    }

    public static SolrDocumentList query(Map<String, String> property, Map<String, String> compositor, Integer pageSize) throws Exception {
        SolrQuery query = new SolrQuery();
        // set search fields; combine all conditions with AND
        // (the original set the query once per field, so only the last one took effect)
        if (null == property) {
            throw new Exception("Search fields must not be empty!");
        } else {
            StringBuffer sb = new StringBuffer();
            for (Map.Entry<String, String> entry : property.entrySet()) {
                if (sb.length() > 0) {
                    sb.append(" AND ");
                }
                sb.append(entry.getKey()).append(":").append(entry.getValue());
            }
            String sql = sb.toString().replace("AND", " AND ").replace("OR", " OR ").replace("NOT", " NOT ");
            query.setQuery(sql);
        }
        // set result ordering
        if (null != compositor) {
            for (Map.Entry<String, String> entry : compositor.entrySet()) {
                if (ASC.equals(entry.getValue())) {
                    query.addSortField(entry.getKey(), SolrQuery.ORDER.asc);
                } else {
                    query.addSortField(entry.getKey(), SolrQuery.ORDER.desc);
                }
            }
        }
        if (null != pageSize && 0 < pageSize) {
            query.setRows(pageSize);
        }
        QueryResponse qr = server.query(query);
        SolrDocumentList docList = qr.getResults();
        return docList;
    }

    public static Map<String, String> getQueryProperty(Object obj) throws Exception {
        Map<String, String> result = new TreeMap<String, String>();
        // get all declared fields of the entity class
        Field[] fields = obj.getClass().getDeclaredFields();
        for (Field f : fields) {
            String name = f.getName(); // field name
            String type = f.getGenericType().toString();
            if ("class java.lang.String".equals(type)) { // for a class type the string is "class " followed by the class name
                Method me = obj.getClass().getMethod("get" + UpperCaseField(name));
                String tem = (String) me.invoke(obj);
                if (null != tem) {
                    result.put(name, tem);
                }
            }
        }
        return result;
    }

    public static List<Object> querySolrResult(Object propertyObj, Object compositorObj, Integer pageSize) throws Exception {
        Map<String, String> propertyMap = getQueryProperty(propertyObj);
        Map<String, String> compositorMap = getQueryProperty(compositorObj);
        SolrDocumentList docList = query(propertyMap, compositorMap, pageSize);
        List<Object> list = new ArrayList<Object>();
        for (Object obj : docList) {
            list.add(obj);
        }
        return list;
    }

    private static String UpperCaseField(String name) {
        return name.replaceFirst(name.substring(0, 1), name.substring(0, 1).toUpperCase());
    }

    public CommonsHttpSolrServer getServer() {
        return server;
    }

    public void setServer(CommonsHttpSolrServer server) {
        SolrHttpServer.server = server;
    }
}
Solr 1.4.1 configuration and SolrJ usage

I. Basic Solr installation and configuration

1. Download the latest apache-solr-1.4.1 from the official mirror (http://apache.etoak.com//lucene/solr/) and unpack it.
2. Create a SolrHome folder to hold solr's configuration files, e.g. under D:\WORK: D:\WORK\SolrHome.
3. In the unpacked apache-solr-1.4.1, find the solr folder under apache-solr-1.4.1\example and copy it into SolrHome.
4. Copy apache-solr-1.4.1.war from apache-solr-1.4.1\dist into tomcat's \webapps, rename it to solr, start tomcat so the war unpacks, then stop tomcat.
5. In the unpacked solr webapp, open web.xml and set <env-entry-value> to the SolrHome directory:

<env-entry>
    <env-entry-name>solr/home</env-entry-name>
    <env-entry-value>D:\WORK\SolrHome\solr</env-entry-value>
    <env-entry-type>java.lang.String</env-entry-type>
</env-entry>

6. In D:\WORK\SolrHome\solr\conf, open solrconfig.xml and adjust

<dataDir>${solr.data.dir:./solr/data}</dataDir>

where solr.data.dir points to the index directory.
7. To support Chinese, edit tomcat's server.xml as follows:

<Connector port="80" protocol="HTTP/1.1"
    maxThreads="150" connectionTimeout="20000"
    redirectPort="8443" URIEncoding="UTF-8"/>

8. Start tomcat and open http://localhost:80/solr in a browser to reach the solr server.
II. Configuring Solr replication

1. For this test, start three tomcat servers on the local machine, on ports 80, 9888, and 9008.
2. Configure the second and third tomcats the same way as in section I; the SolrHome directories just must not be the same, everything else is unchanged. For example, on this machine:

tomcat name       URL                          SolrHome directory      web.xml setting
tomcat0 (master)  http://localhost:80/solr     D:\WORK\SolrHome\solr   <env-entry-value>D:\WORK\SolrHome\solr</env-entry-value>
tomcat1 (slave)   http://localhost:9888/solr   E:\WORK\SolrHome\solr   <env-entry-value>E:\WORK\SolrHome\solr</env-entry-value>
tomcat2 (slave)   http://localhost:9008/solr   F:\WORK\SolrHome\solr   <env-entry-value>F:\WORK\SolrHome\solr</env-entry-value>

3. With the two steps above done, open solrconfig.xml in the master's (tomcat0) SolrHome and add:

<requestHandler name="/replication" class="solr.ReplicationHandler" >
    <lst name="master">
        <str name="replicateAfter">commit</str>
        <str name="replicateAfter">startup</str>
        <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
</requestHandler>

In the slaves' (tomcat1 and tomcat2) SolrHome, add the following to solrconfig.xml:

<requestHandler name="/replication" class="solr.ReplicationHandler" >
    <lst name="slave">
        <str name="masterUrl">http://localhost/solr/replication</str>
        <str name="pollInterval">00:00:60</str>
    </lst>
</requestHandler>
4. Create the index on tomcat0 using the solrj client (the jars ship in the apache-solr-1.4.1 archive). The code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import org.junit.Test;

public class SlorTest3 {
    private static CommonsHttpSolrServer server = null;
    // private static SolrServer server = null;

    public SlorTest3() {
        try {
            server = new CommonsHttpSolrServer("http://localhost/solr");
            server.setConnectionTimeout(100);
            server.setDefaultMaxConnectionsPerHost(100);
            server.setMaxTotalConnections(100);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testIndexCreate() {
        List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        for (int i = 300; i < 500; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("zjid", i); // the field must be configured in schema.xml
            doc.addField("title", "云状空化多个气泡的生长和溃灭");
            doc.addField("ssid", "ss" + i);
            doc.addField("dxid", "dx" + i);
            docs.add(doc);
        }
        try {
            server.add(docs);
            server.commit();
            System.out.println("---- index created ----");
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
5. Start all three tomcat servers. In a browser (where localhost = 192.168.169.121), open http://localhost:9888/solr; from there the index can be replicated from the master solr server.
III. Configuring Solr distribution (shards)

1. Start four tomcat servers, three on this machine and one remote:
Note: all four servers must be configured identically; in particular the schema.xml field definitions must match exactly.

Name             URL                               SolrHome directory
tomcatQuery      http://localhost:80/solr          D:\WORK\SolrHome\solr
tomcat0 (shard)  http://localhost:9888/solr        E:\WORK\SolrHome\solr
tomcat1 (shard)  http://localhost:9008/solr        F:\WORK\SolrHome\solr
tomcat2 (shard)  http://192.168.169.48:9888/solr   D:\WORK\SolrHome\solr

2. The configuration is simple: only solrconfig.xml in tomcatQuery's SolrHome needs the change below; the other solr servers need no configuration:
<requestHandler name="standard" class="solr.SearchHandler" default="true">
    <!-- default values for query parameters -->
    <lst name="defaults">
        <str name="echoParams">explicit</str>
        <str name="shards">localhost:9088/solr,localhost:9888/solr,192.168.169.48:9888/solr</str>
        <!--
        <int name="rows">10</int>
        <str name="fl">*</str>
        <str name="version">2.1</str>
        -->
    </lst>
</requestHandler>
3. Clear the existing index with solrj, or delete it by hand.
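Clearing the index through solrj can be sketched as follows (the class and URL are illustrative; deleteByQuery and commit are standard SolrServer calls in this solrj generation, and running it obviously requires a live solr server):

```java
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ClearIndex {
    // Delete every document with a match-all query, then commit
    // so the change becomes visible to searchers.
    public static void clearAll(String solrUrl) throws Exception {
        CommonsHttpSolrServer server = new CommonsHttpSolrServer(solrUrl);
        server.deleteByQuery("*:*"); // match-all query
        server.commit();
    }
}
```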
4. The following code distributes a Lucene-built index (about 1 GB, 874,400 records) proportionally across the three solr (shard) servers via solrj:
- public class IndexCreate{
- private static CommonsHttpSolrServer server;
- public CommonsHttpSolrServer getServer(String hostUrl){
- CommonsHttpSolrServer server = null;
- try {
- server = new CommonsHttpSolrServer(hostUrl);
- server.setConnectionTimeout(100);
- server.setDefaultMaxConnectionsPerHost(100);
- server.setMaxTotalConnections(100);
- } catch (IOException e) {
- System.out.println("请检查tomcat服务器或端口是否开启!");
- }
- return server;
- }
- @SuppressWarnings("deprecation")
- public void readerHostCreate(String[] hosts) throws CorruptIndexException, IOException{
- IndexReader reader = IndexReader.open("c:\\index");
- System.out.println("总记录数: "+reader.numDocs());
- int hostNum = hosts.length;
- int lengh = reader.numDocs()/hostNum; //根据主机数平分索引长度
- int j = reader.numDocs()%hostNum; //取余
- for(int i = 0;i<hosts.length;i++){
- long startTime = new Date().getTime();
- String url = hosts[i].substring(hosts[i].indexOf("//")+2,hosts[i].lastIndexOf("/"));
- System.out.println("第"+(i+1)+"次,在主机:"+url+" 上创建索引,创建时间"+new Date());
- if(i==(hosts.length-1)){
- hostlist(reader,lengh*i,lengh*(i+1)+j,hosts[i]);
- }else{
- hostlist(reader,lengh*i,lengh*(i+1),hosts[i]);
- }
- System.out.println("结束时间"+new Date());
- long endTime = new Date().getTime();
- long ms = (endTime-startTime)%60000-(((endTime-startTime)%60000)/1000)*1000;
- System.out.println("本次索引创建完毕,一共用了"+(endTime-startTime)/60000+"分" +
- ""+((endTime-startTime)%60000)/1000+"秒"+ms+"毫秒");
- System.out.println("****************************");
- }
- reader.close();
- }
@SuppressWarnings("static-access")
public void hostlist(IndexReader reader, int startLengh, int endLengh, String hostUrl) throws CorruptIndexException, IOException {
    List<BookIndex> beans = new LinkedList<BookIndex>();
    int count = 0;
    this.server = getServer(hostUrl);
    for (int i = startLengh; i < endLengh; i++) {
        Document doc = reader.document(i);
        BookIndex book = new BookIndex();
        book.setZjid(doc.getField("zjid").stringValue());
        book.setTitle(doc.getField("title").stringValue());
        book.setSsid(doc.getField("ssid").stringValue());
        book.setDxid(doc.getField("dxid").stringValue());
        book.setBookname(doc.getField("bookname").stringValue());
        book.setAuthor(doc.getField("author").stringValue());
        book.setPublisher(doc.getField("publisher").stringValue());
        book.setPubdate(doc.getField("pubdate").stringValue());
        book.setYear(doc.getField("year").stringValue());
        book.setFenlei(doc.getField("fenlei").stringValue());
        book.setScore1(doc.getField("score").stringValue());
        book.setIsbn(doc.getField("isbn").stringValue());
        book.setFenleiurl(doc.getField("fenleiurl").stringValue());
        book.setMulu(doc.getField("mulu").stringValue());
        book.setIsp(doc.getField("isp").stringValue());
        book.setIep(doc.getField("iep").stringValue());
        beans.add(book);
        if (beans.size() % 3000 == 0) {
            createIndex(beans, hostUrl, server);
            beans.clear();
            System.out.println("---batch " + (count + 1) + " committed---");
            count++;
        }
    }
    System.out.println("Remaining beans: " + beans.size());
    if (beans.size() > 0) {
        createIndex(beans, hostUrl, server);
        beans.clear();
    }
}
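hostlist flushes the bean buffer to Solr every 3000 documents and once more for the leftovers at the end. The same commit-in-batches pattern, reduced to plain Java with a hypothetical `flushInBatches` helper (the Solr call is replaced by a callback, so this is a sketch of the control flow, not of the SolrJ API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchDemo {
    // Flush the buffer every batchSize items, then flush whatever is left,
    // mirroring the beans.size() % 3000 == 0 check in hostlist().
    static <T> int flushInBatches(List<T> items, int batchSize, Consumer<List<T>> flush) {
        List<T> buffer = new ArrayList<>();
        int flushes = 0;
        for (T item : items) {
            buffer.add(item);
            if (buffer.size() == batchSize) {
                flush.accept(buffer);
                buffer = new ArrayList<>();
                flushes++;
            }
        }
        if (!buffer.isEmpty()) {
            flush.accept(buffer);
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        List<Integer> docs = new ArrayList<>();
        for (int i = 0; i < 7; i++) docs.add(i);
        // 7 items with batch size 3 -> batches of 3, 3 and 1
        int commits = flushInBatches(docs, 3, b -> System.out.println("commit " + b.size()));
        System.out.println(commits + " commits"); // 3 commits
    }
}
```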
public void createIndex(List<BookIndex> beans, String hostUrl, CommonsHttpSolrServer server) {
    try {
        server.addBeans(beans);
        server.commit();
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public static void main(String[] args) throws CorruptIndexException, IOException {
    IndexCreate as = new IndexCreate();
    String[] hosts = new String[] {"http://192.168.169.121:9888/solr", "http://192.168.169.121:9088/solr", "http://192.168.169.48:9888/solr"};
    long startTime = new Date().getTime();
    as.readerHostCreate(hosts);
    long endTime = new Date().getTime();
    System.out.println("-------------------");
    long elapsed = endTime - startTime;
    System.out.println("All indexes built in " + elapsed / 60000 + "m "
            + (elapsed % 60000) / 1000 + "s " + elapsed % 1000 + "ms");
}
}
The JavaBean class BookIndex.java is shown below.
Note that the field names must match the fields configured in schema.xml. In particular, do not use `score` as a variable or field name: it conflicts with Solr's built-in relevance score and raises an exception (hence `score1` here).
import org.apache.solr.client.solrj.beans.Field;

public class BookIndex {
    @Field
    private String zjid;
    @Field
    private String title;
    @Field
    private String ssid;
    @Field
    private String dxid;
    @Field
    private String bookname;
    @Field
    private String author;
    @Field
    private String publisher;
    @Field
    private String pubdate;
    @Field
    private String year;
    @Field
    private String fenlei;
    @Field
    private String score1;
    @Field
    private String isbn;
    @Field
    private String fenleiurl;
    @Field
    private String mulu;
    @Field
    private String isp;
    @Field
    private String iep;

    // getters and setters for every field
}
5. Start all four servers and run the code above.
6. Verify in a browser:
Open http://localhost/solr
Open http://localhost:9888/solr
Open http://localhost:9008/solr
Open http://192.168.168.48:9888/solr
IV. Solr Multicore (shard) configuration
Step 1: copy the multicore directory from apache-solr-1.4.1\example into the solr/home configured earlier.
The solr.xml under multicore is configured as follows:
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0">
      <property name="dataDir" value="/data/core0" />
    </core>
    <core name="core1" instanceDir="core1">
      <property name="dataDir" value="/data/core1" />
    </core>
    <core name="core2" instanceDir="core2">
      <property name="dataDir" value="/data/core2" />
    </core>
  </cores>
</solr>
Step 2: edit web.xml under Tomcat 6.0\webapps\solr\WEB-INF as follows:
<env-entry-value>D:\WORK\SolrHome\multicore</env-entry-value>
Step 3: start the server and open http://localhost/solr in a browser.
V. Using SolrJ
1. What SolrJ is
SolrJ is a client library for talking to a Solr server. With SolrJ you do not have to worry about the server's output format or about parsing documents yourself: SolrJ sends your request and returns the result set (collection) as Java objects.
2. Building an index with SolrJ
Example code:
// Obtain a CommonsHttpSolrServer object connected to the Solr server; all requests go through this object:
CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:9888/solr");

public void testIndexCreate() {
    List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
    // SolrInputDocument plays the role of Lucene's Document: it represents
    // one index document and collects its fields.
    for (int i = 300; i < 500; i++) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("zjid", i + "_id");
        doc.addField("title", i + "_title");
        doc.addField("ssid", "ss_" + i);
        doc.addField("dxid", "dx_" + i);
        docs.add(doc);
    }
    try {
        server.add(docs);
        server.commit(); // committed as an update
        System.out.println("----index created----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Building the index from a JavaBean:

public class BookIndex {
    @Field
    private String zjid;
    @Field
    private String zhangjie;
    @Field
    private String ssid;
    @Field
    private String qwpos;
    @Field
    private String publishDate;
    @Field
    private String mulu;
    @Field
    private String fenleiurl;
    @Field
    private String fenlei;
    @Field
    private String dxid;
    @Field
    private String author;
    @Field
    private String address;
    @Field
    private String bookname;
    // ...
}
public void testBean() {
    List<BookIndex> beans = new ArrayList<BookIndex>();
    for (int i = 0; i < 10; i++) {
        BookIndex book = new BookIndex();
        book.setZjid(i + "id");
        book.setTitle(i + "title");
        // ...remaining setters
        beans.add(book);
    }
    try {
        server.addBeans(beans);
        server.commit();
        System.out.println("----index created----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
3. Common SolrJ queries
a. Query all documents across all fields: SolrQuery query = new SolrQuery("*:*");
Example code:

public void testQuery1() {
    SolrQuery query = new SolrQuery("*:*");
    query.setStart(20); // offset of the first result
    query.setRows(10);  // number of rows to return
    QueryResponse response = null;
    try {
        response = server.query(query); // execute the query against the server
        System.out.println(response);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<SolrDocument> docs = response.getResults(); // the result set
    for (SolrDocument doc : docs) { // iterate over the results
        for (Iterator iter = doc.iterator(); iter.hasNext();) {
            Map.Entry<String, Object> entry = (Entry<String, Object>) iter.next();
            System.out.print("Key :" + entry.getKey() + " ");
            System.out.println("Value :" + entry.getValue());
        }
        System.out.println("------------");
    }
}

b. Query a single field:
String queryString = "zjid:5_id"; // syntax is fieldName:value
SolrQuery query = new SolrQuery(queryString);
c. Query against the copyField:
The copyField is the default search field: when no field is specified, the request is matched against the copyField. See the schema.xml configuration for details.
String queryString = "XXX";
SolrQuery query = new SolrQuery(queryString);
4. Deleting the index
Example code:

public void testClear() {
    server.setRequestWriter(new BinaryRequestWriter()); // binary stream output for better performance
    try {
        server.deleteByQuery("*:*");
        server.commit();
        System.out.println("----index cleared----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
5. Highlighting
public List<Book> getQueryString(String queryString, int start, int pageSize) {
    SolrQuery query = new SolrQuery(queryString);
    query.setHighlight(true); // enable the highlight component
    query.addHighlightField("mulu"); // field to highlight
    query.setHighlightSimplePre("<font color=\"red\">"); // markup around matches
    query.setHighlightSimplePost("</font>");
    query.set("hl.usePhraseHighlighter", true);
    query.set("hl.highlightMultiTerm", true);
    query.set("hl.snippets", 3);  // three fragments; the default is 1
    query.set("hl.fragsize", 50); // 50 characters per fragment; the default is 100
    query.setStart(start);   // offset, for paging
    query.setRows(pageSize); // number of documents
    try {
        response = server.query(query);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<BookIndex> bookLists = response.getBeans(BookIndex.class);
    Map<String, Map<String, List<String>>> h1 = response.getHighlighting();
    // The outer Map<String, Map<String, List<String>>> is keyed by the
    // document's unique id.
    List<Book> books = new ArrayList<Book>();
    for (BookIndex bookIndex : bookLists) {
        Map<String, List<String>> map = h1.get(bookIndex.getZjid());
        Book book = new Book();
        // copy fields
        book.setBookname(bookIndex.getBookname());
        book.setZjid(bookIndex.getZjid());
        if (map.get("mulu") != null) {
            List<String> strMulu = map.get("mulu");
            StringBuffer buf = new StringBuffer();
            for (int i = 0; i < strMulu.size(); i++) {
                buf.append(strMulu.get(i));
                buf.append("...");
                if (i > 3) {
                    break;
                }
            }
            book.setSummary(buf.toString());
        } else {
            if (bookIndex.getMulu().length() > 100) {
                book.setSummary(bookIndex.getMulu().substring(0, 100) + "...");
            } else {
                book.setSummary(bookIndex.getMulu() + "...");
            }
        }
        books.add(book);
    }
    return books;
}
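The summary-building loop above concatenates up to five highlight fragments with "..." separators. That string handling can be sketched on its own; `joinSnippets` is a hypothetical helper, and its `maxFragments` parameter generalizes the hard-coded break at i > 3:

```java
import java.util.Arrays;
import java.util.List;

public class SnippetDemo {
    // Join at most maxFragments highlight fragments, appending "..." after
    // each one, as the summary loop in getQueryString() does.
    static String joinSnippets(List<String> fragments, int maxFragments) {
        StringBuilder buf = new StringBuilder();
        for (int i = 0; i < fragments.size() && i < maxFragments; i++) {
            buf.append(fragments.get(i)).append("...");
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        List<String> frags = Arrays.asList(
                "<font color=\"red\">solr</font> quick start",
                "running <font color=\"red\">solr</font> on tomcat");
        System.out.println(joinSnippets(frags, 5));
    }
}
```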
6. Faceting
// Note that faceted fields should not be tokenized, e.g. a product category field such as kind.
public void testFact() {
    String queryString = "kind:儿童图书";
    SolrQuery query = new SolrQuery().setQuery(queryString);
    query.setFacet(true); // enable faceting
    query.addFacetField("bookname"); // facet fields
    query.addFacetField("title");
    query.setFacetMinCount(1);
    query.addSortField("zjid", SolrQuery.ORDER.asc); // sort field
    query.setRows(10);
    QueryResponse response = null;
    try {
        response = server.query(query);
        System.out.println(response);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<FacetField> facets = response.getFacetFields();
    for (FacetField facet : facets) {
        System.out.println("Facet:" + facet);
    }
}
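Conceptually, a facet is just a count of documents per distinct value of a field. A pure-Java sketch of what the server computes for addFacetField; `facetCounts` is a hypothetical helper, and its `minCount` filter corresponds to setFacetMinCount:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FacetDemo {
    // Count documents per distinct field value and drop values below
    // minCount, as query.setFacetMinCount(1) does on the server side.
    static Map<String, Integer> facetCounts(List<String> values, int minCount) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String v : values) {
            counts.merge(v, 1, Integer::sum);
        }
        counts.values().removeIf(c -> c < minCount);
        return counts;
    }

    public static void main(String[] args) {
        List<String> kinds = Arrays.asList("children", "children", "science");
        System.out.println(facetCounts(kinds, 1)); // {children=2, science=1}
    }
}
```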
VI. A simple web application
A note on where the index comes from: it is built on top of an existing Lucene index. Feeding that Lucene index to SolrJ directly did not work well; it caused many problems with no obvious workaround, such as all-field search, highlighting, faceting, and multi-level index directories. Indexes built by SolrJ itself have none of these problems.
The overall approach: read the existing Lucene index, rebuild it with SolrJ while distributing it across several machines, and then develop against the Solr indexes.
Step 1: read the multi-level Lucene index directories and use SolrJ to build and distribute the index.
Note: increase the JVM heap size. A map is used as a cache, so the larger the heap, the more index documents it can hold; this guards against OutOfMemoryError.
Code:

package org.readerIndex;

import org.apache.solr.client.solrj.beans.Field;

public class BookIndex2 {
    @Field
    private String zjid;
    @Field
    private String zhangjie;
    @Field
    private String ssid;
    @Field
    private String qwpos;
    @Field
    private String publishDate;
    @Field
    private String mulu;
    @Field
    private String fenleiurl;
    @Field
    private String fenlei;
    @Field
    private String dxid;
    @Field
    private String author;
    @Field
    private String address;
    @Field
    private String bookname;

    public String getZjid() {
        return zjid;
    }
    // ...remaining getters and setters
}
package org.readerIndex;

import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class ReaderIndex {
    public CommonsHttpSolrServer getServer(String hostUrl) {
        CommonsHttpSolrServer server = null;
        try {
            server = new CommonsHttpSolrServer(hostUrl);
            server.setConnectionTimeout(100);
            server.setDefaultMaxConnectionsPerHost(100);
            server.setMaxTotalConnections(100);
        } catch (IOException e) {
            System.out.println("Please check that the Tomcat server and its port are up!");
        }
        return server;
    }
    public void indexDocuements(String path, String[] hostUrls) throws CorruptIndexException, LockObtainFailedException, IOException {
        File pareFile = new File(path);
        List<String> list = new ArrayList<String>();
        getFile(pareFile, list); // recursively collect index directory paths
        System.out.println("***Found " + list.size() + " index directories***");
        int arevageSize = list.size() / hostUrls.length; // split the directories evenly across hosts
        int remainSize = list.size() % hostUrls.length;  // remainder
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        for (int i = 0; i < hostUrls.length; i++) {
            Date startDate = new Date();
            String url = hostUrls[i].substring(hostUrls[i].indexOf("//") + 2, hostUrls[i].lastIndexOf("/"));
            System.out.println("Pass " + (i + 1) + ": indexing on host " + url + ", started at " + sdf.format(startDate));
            if (i == (hostUrls.length - 1)) {
                list(list, arevageSize * i, arevageSize * (i + 1) + remainSize, hostUrls[i]);
            } else {
                list(list, arevageSize * i, arevageSize * (i + 1), hostUrls[i]);
            }
            Date endDate = new Date();
            System.out.println("Pass finished at: " + sdf.format(endDate));
        }
    }

    public void list(List<String> list, int start, int end, String url) {
        CommonsHttpSolrServer server = getServer(url);
        for (int j = start; j < end; j++) {
            try {
                long startMs = System.currentTimeMillis();
                hostCreate(list.get(j), server);
                long endMs = System.currentTimeMillis();
                System.out.println("Directory " + (j + 1) + " done, path: " + list.get(j) + ", took " + (endMs - startMs) + "ms");
            } catch (CorruptIndexException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    public void getFile(File fileDirectory, List<String> list) {
        if (fileDirectory.isDirectory()) {
            File[] files = fileDirectory.listFiles();
            for (File file : files) {
                getFile(file, list);
            }
        } else if (fileDirectory.isFile()) {
            String filePath = fileDirectory.getPath();
            String path = filePath.replace('\\', '/');
            if (path.endsWith(".cfs")) {
                int lastIndex = path.lastIndexOf("/");
                String directory = path.substring(0, lastIndex);
                list.add(directory);
            }
        }
    }
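getFile walks the directory tree and records the parent directory of every .cfs segment file. The same walk, made self-contained and runnable against a throwaway tree; `collectCfsDirs` is a hypothetical variant that also deduplicates (the original would add a directory once per .cfs file it contains):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class ScanDemo {
    // Recursively collect the directories that contain *.cfs files, like
    // getFile() above, but adding each directory only once.
    static void collectCfsDirs(File node, List<String> out) {
        if (node.isDirectory()) {
            File[] children = node.listFiles();
            if (children == null) return;
            for (File child : children) {
                collectCfsDirs(child, out);
            }
        } else if (node.getName().endsWith(".cfs")) {
            String dir = node.getParentFile().getPath().replace('\\', '/');
            if (!out.contains(dir)) {
                out.add(dir);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway tree: root/seg1/_0.cfs plus an unrelated file.
        File root = Files.createTempDirectory("idx").toFile();
        File seg = new File(root, "seg1");
        seg.mkdirs();
        new File(seg, "_0.cfs").createNewFile();
        new File(root, "readme.txt").createNewFile();
        List<String> dirs = new ArrayList<>();
        collectCfsDirs(root, dirs);
        System.out.println(dirs.size()); // 1
    }
}
```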
    @SuppressWarnings("deprecation")
    public void hostCreate(String directory, CommonsHttpSolrServer server) throws CorruptIndexException, IOException {
        IndexReader reader = IndexReader.open(directory);
        List<BookIndex2> beans = new ArrayList<BookIndex2>();
        for (int i = 0; i < reader.numDocs(); i++) {
            Document doc = reader.document(i);
            BookIndex2 book = new BookIndex2();
            book.setZjid(doc.getField("zjid").stringValue());
            book.setAddress(doc.getField("address").stringValue());
            book.setAuthor(doc.getField("author").stringValue());
            book.setBookname(doc.getField("bookname").stringValue());
            book.setDxid(doc.getField("dxid").stringValue());
            book.setFenlei(doc.getField("fenlei").stringValue());
            book.setFenleiurl(doc.getField("fenleiurl").stringValue());
            book.setMulu(doc.getField("mulu").stringValue());
            book.setPublishDate(doc.getField("publishDate").stringValue());
            book.setQwpos(doc.getField("qwpos").stringValue());
            book.setSsid(doc.getField("ssid").stringValue());
            book.setZhangjie(doc.getField("zhangjie").stringValue());
            beans.add(book);
        }
        createIndex(beans, server);
        beans.clear();
        reader.close();
    }
    public void createIndex(List<BookIndex2> beans, CommonsHttpSolrServer server) {
        try {
            server.addBeans(beans);
            server.commit();
            // server.optimize();
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    public static void main(String[] args) throws CorruptIndexException, IOException {
        ReaderIndex reader = new ReaderIndex();
        String path = "D:\\91";
        String[] hosts = new String[] {"http://192.168.169.121:9888/solr", "http://192.168.169.121:9088/solr", "http://192.168.169.48:9888/solr"};
        long startTime = new Date().getTime();
        reader.indexDocuements(path, hosts);
        long endTime = new Date().getTime();
        System.out.println("-------------------");
        long elapsed = endTime - startTime;
        System.out.println("All documents indexed in " + elapsed / 60000 + "m "
                + (elapsed % 60000) / 1000 + "s " + elapsed % 1000 + "ms");
    }
}
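The elapsed-time arithmetic used when reporting the run (minutes via /60000, seconds via %60000/1000, milliseconds via %1000, to which the longer modulo chain reduces) can be factored into a small helper. A sketch with a hypothetical `format` helper:

```java
public class ElapsedDemo {
    // Decompose an elapsed duration into minutes, seconds and milliseconds;
    // equivalent to the arithmetic in the main() methods above.
    static String format(long ms) {
        return ms / 60000 + "m " + (ms % 60000) / 1000 + "s " + ms % 1000 + "ms";
    }

    public static void main(String[] args) {
        System.out.println(format(125250)); // 2m 5s 250ms
    }
}
```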
Step 2: build a simple web application on top of the distributed index.
Comments
#1 bobbell
2014-12-24
I have set up a test Liferay environment and a Solr search engine. I installed the plugin according to the wiki and Solr is indexing all the content from Wiki, Blogs and forums.
When I do the search on wiki and blogs the results are correct, but when doing a search on the forums I get no results.
Running a sniffer between the Solr server and Liferay I see the search query and Solr is returning valid results, but for some reason the search function in the forums is not showing any results.
Is anybody else seeing this behavior? Anybody with some pointers on how to solve this, what other information would be useful, or whether to open a ticket in the Liferay bug tracker?
Thank you very much.