Solrj is already a powerful Solr client. It wraps HttpClient itself and talks to Solr in a fully object-oriented way. Small, neat, and strong.
In practice, though, when filling in a SolrQuery with multiple search conditions, sort rules, and other parameters, we usually end up concatenating strings. That is ugly, runs against object-oriented thinking, and leaves almost no room for extension. So I wrote a small utility: you only set up a search object and hand it to the backend.
For example, if our Solr service supports searching on some ten fields and we want to search a few of them, we simply pass in a POJO with the search terms set on the corresponding fields.
Take a class like the following:
package org.uppower.tnt.biz.core.manager.blog.dataobject;

/**
 * @author yingmu
 * @version 2010-7-20 01:00:55 PM
 */
public class SolrPropertyDO {
    private String auction_id;
    private String opt_tag;
    private String exp_tag;
    private String title;
    private String desc;
    private String brand;
    private String category;
    private String price;
    private String add_prov;
    private String add_city;
    private String quality;
    private String flag;
    private String sales;
    private String sellerrate;
    private String selleruid;
    private String ipv15;

    public String getAuction_id() {
        return auction_id;
    }
    public void setAuction_id(String auctionId) {
        auction_id = auctionId;
    }
    ……
    public String getExp_tag() {
        return exp_tag;
    }
    public void setExp_tag(String expTag) {
        exp_tag = expTag;
    }
}
When defining the search object, we set it up like this:
SolrPropertyDO propertyDO = new SolrPropertyDO();
propertyDO.setAdd_city("(杭州AND成都)OR北京");
propertyDO.setTitle("丝绸OR剪刀");
……
Sort conditions are set the same way:
SolrPropertyDO compositorDO = new SolrPropertyDO();
compositorDO.setPrice("desc");
compositorDO.setQuality("asc");
……
Then just hand the two objects to the interface below.
The interface method querySolrResult takes four parameters: the search-field object and the sort-condition object, plus startIndex and pageSize, which provide a limit-style operation for paged queries.
querySolrResultCount exists purely to obtain the number of hits, for use together with paging.
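To make the paging contract concrete, here is a minimal sketch (the method names are mine, not from the original code) of how startIndex and pageSize relate to a page number and to the count returned by querySolrResultCount:

```java
public class PagingMath {
    // Index of the first row of a 1-based page: this is the startIndex argument.
    static long startIndexOf(long page, long pageSize) {
        return (page - 1) * pageSize;
    }

    // Number of pages needed to show `count` hits at pageSize rows per page.
    static long totalPages(long count, long pageSize) {
        return (count + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        System.out.println(startIndexOf(3, 20)); // page 3 at 20 rows/page starts at row 40
        System.out.println(totalPages(101, 20)); // 101 hits need 6 pages
    }
}
```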
The interface definition:
package org.uppower.tnt.biz.core.manager.blog;

import java.util.List;

import org.uppower.tnt.biz.core.manager.blog.dataobject.SolrPropertyDO;

/**
 * @author yingmu
 * @version 2010-7-20 03:51:15 PM
 */
public interface SolrjOperator {
    /**
     * Fetch search results.
     *
     * @param propertyDO   search-field object
     * @param compositorDO sort-condition object
     * @param startIndex   offset of the first row to return
     * @param pageSize     number of rows to return
     */
    List<Object> querySolrResult(Object propertyDO, Object compositorDO,
            Long startIndex, Long pageSize) throws Exception;

    /**
     * Fetch only the number of hits, for paging.
     */
    Long querySolrResultCount(SolrPropertyDO propertyDO, Object compositorDO)
            throws Exception;
}
The implementation first parses the two incoming entity objects into <K,V> Maps, then feeds those Maps to the actual solrj search object. The returned items are solrj's own SolrDocument instances, and the hit count comes straight from SolrDocumentList.getNumFound().
The implementation class:
package org.uppower.tnt.biz.core.manager.blog;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.solr.common.SolrDocumentList;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.uppower.tnt.biz.core.manager.blog.common.SolrjCommonUtil;
import org.uppower.tnt.biz.core.manager.blog.dataobject.SolrPropertyDO;
import org.uppower.tnt.biz.core.manager.blog.solrj.SolrjQuery;

/**
 * @author yingmu
 * @version 2010-7-20 03:51:15 PM
 */
public class DefaultSolrOperator implements SolrjOperator {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    private SolrjQuery solrjQuery;

    public void setSolrjQuery(SolrjQuery solrjQuery) {
        this.solrjQuery = solrjQuery;
    }

    @Override
    public List<Object> querySolrResult(Object propertyDO, Object compositorDO,
            Long startIndex, Long pageSize) throws Exception {
        Map<String, String> propertyMap = new TreeMap<String, String>();
        // Sort order matters, so use a TreeMap.
        Map<String, String> compositorMap = new TreeMap<String, String>();
        try {
            propertyMap = SolrjCommonUtil.getSearchProperty(propertyDO);
            compositorMap = SolrjCommonUtil.getSearchProperty(compositorDO);
        } catch (Exception e) {
            logger.error("SolrjCommonUtil.getSearchProperty() is error !" + e);
        }
        SolrDocumentList solrDocumentList = solrjQuery.query(propertyMap,
                compositorMap, startIndex, pageSize);
        List<Object> resultList = new ArrayList<Object>();
        for (int i = 0; i < solrDocumentList.size(); i++) {
            resultList.add(solrDocumentList.get(i));
        }
        return resultList;
    }

    @Override
    public Long querySolrResultCount(SolrPropertyDO propertyDO,
            Object compositorDO) throws Exception {
        Map<String, String> propertyMap = new TreeMap<String, String>();
        Map<String, String> compositorMap = new TreeMap<String, String>();
        try {
            propertyMap = SolrjCommonUtil.getSearchProperty(propertyDO);
            compositorMap = SolrjCommonUtil.getSearchProperty(compositorDO);
        } catch (Exception e) {
            logger.error("SolrjCommonUtil.getSearchProperty() is error !" + e);
        }
        SolrDocumentList solrDocument = solrjQuery.query(propertyMap,
                compositorMap, null, null);
        return solrDocument.getNumFound();
    }
}
The object parsing relies on reflection: every non-null property of the entity is mapped into a Map entry. The sort object is converted into a TreeMap to preserve ordering.
The shared parsing utility:
package org.uppower.tnt.biz.core.manager.blog.common;

import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

/**
 * @author yingmu
 * @version 2010-7-20 01:07:15 PM
 */
public class SolrjCommonUtil {

    public static Map<String, String> getSearchProperty(Object model)
            throws NoSuchMethodException, IllegalAccessException,
            IllegalArgumentException, InvocationTargetException {
        Map<String, String> resultMap = new TreeMap<String, String>();
        // Get all declared fields of the entity class.
        Field[] field = model.getClass().getDeclaredFields();
        for (int i = 0; i < field.length; i++) { // walk every field
            String name = field[i].getName(); // field name
            // Field type; for a class type this reads "class " + class name.
            String type = field[i].getGenericType().toString();
            if (type.equals("class java.lang.String")) {
                Method m = model.getClass().getMethod(
                        "get" + upperCaseField(name));
                String value = (String) m.invoke(model); // call the getter
                if (value != null) {
                    resultMap.put(name, value);
                }
            }
        }
        return resultMap;
    }

    // Upper-case the first letter of a field name.
    private static String upperCaseField(String fieldName) {
        return fieldName.substring(0, 1).toUpperCase()
                + fieldName.substring(1);
    }
}
The search itself goes straight through the solrj client: loop over the two parsed TreeMaps, set everything onto a SolrQuery, then call the solrj API. The results are finally returned as a List<Object>.
The implementation:
package org.uppower.tnt.biz.core.manager.blog.solrj;

import java.net.MalformedURLException;
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

/**
 * @author yingmu
 * @version 2010-7-20 02:57:04 PM
 */
public class SolrjQuery {
    private String url;
    private Integer soTimeOut;
    private Integer connectionTimeOut;
    private Integer maxConnectionsPerHost;
    private Integer maxTotalConnections;
    private Integer maxRetries;
    private CommonsHttpSolrServer solrServer = null;
    private final static String ASC = "asc";

    public void init() throws MalformedURLException {
        solrServer = new CommonsHttpSolrServer(url);
        solrServer.setSoTimeout(soTimeOut);
        solrServer.setConnectionTimeout(connectionTimeOut);
        solrServer.setDefaultMaxConnectionsPerHost(maxConnectionsPerHost);
        solrServer.setMaxTotalConnections(maxTotalConnections);
        solrServer.setFollowRedirects(false);
        solrServer.setAllowCompression(true);
        solrServer.setMaxRetries(maxRetries);
    }

    public SolrDocumentList query(Map<String, String> propertyMap,
            Map<String, String> compositorMap, Long startIndex, Long pageSize)
            throws Exception {
        SolrQuery query = new SolrQuery();
        // Search fields. All conditions are joined into a single query
        // string; calling setQuery() once per field would keep only the
        // last condition.
        if (null == propertyMap) {
            throw new Exception("Search fields must not be empty!");
        } else {
            StringBuffer sb = new StringBuffer();
            for (String key : propertyMap.keySet()) {
                if (sb.length() > 0) {
                    sb.append(" AND ");
                }
                sb.append(key).append(":")
                        .append(addBlank2Expression(propertyMap.get(key)));
            }
            query.setQuery(sb.toString());
        }
        // Sort conditions.
        if (null != compositorMap) {
            for (String key : compositorMap.keySet()) {
                if (ASC.equals(compositorMap.get(key))) {
                    query.addSortField(key, SolrQuery.ORDER.asc);
                } else {
                    query.addSortField(key, SolrQuery.ORDER.desc);
                }
            }
        }
        if (null != startIndex) {
            query.setStart(Integer.parseInt(String.valueOf(startIndex)));
        }
        if (null != pageSize && 0L != pageSize.longValue()) {
            query.setRows(Integer.parseInt(String.valueOf(pageSize)));
        }
        try {
            QueryResponse qrsp = solrServer.query(query);
            return qrsp.getResults();
        } catch (Exception e) {
            throw new Exception(e);
        }
    }

    // Pad the logical operators with spaces so the query parser sees them.
    // Crude: this also touches AND/OR/NOT occurring inside a field value.
    private String addBlank2Expression(String oldExpression) {
        return oldExpression.replace("AND", " AND ").replace("NOT", " NOT ")
                .replace("OR", " OR ");
    }

    public Integer getMaxRetries() {
        return maxRetries;
    }
    ……
    public void setMaxTotalConnections(Integer maxTotalConnections) {
        this.maxTotalConnections = maxTotalConnections;
    }
}
The whole thing sits on top of Spring: SolrjQuery.init() runs when the Spring container starts, the properties used inside init() are injected directly, and the layers are wired together by injection as well. I won't paste the configuration; it is routine.
The code is crude, but it supports nearly every search condition you might want to set, exposes nothing Solr-specific to the callers, and lets you set conditions almost as if you were thinking in SQL.
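Since the original post omits the wiring, here is a minimal sketch of what the Spring XML could look like (the bean ids, property values, and init-method hookup are my assumptions, not the author's actual config):

```xml
<!-- Hypothetical wiring; adjust packages and values to your project. -->
<bean id="solrjQuery"
      class="org.uppower.tnt.biz.core.manager.blog.solrj.SolrjQuery"
      init-method="init">
    <property name="url" value="http://localhost:8080/solr"/>
    <property name="soTimeOut" value="1000"/>
    <property name="connectionTimeOut" value="100"/>
    <property name="maxConnectionsPerHost" value="100"/>
    <property name="maxTotalConnections" value="100"/>
    <property name="maxRetries" value="1"/>
</bean>

<bean id="solrjOperator"
      class="org.uppower.tnt.biz.core.manager.blog.DefaultSolrOperator">
    <property name="solrjQuery" ref="solrjQuery"/>
</bean>
```

The init-method attribute is what makes the container call init() at startup, matching the text above.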
http://www.iteye.com/topic/315330
package org.nstcrm.person.util;

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

public class SolrHttpServer {
    //private Logger logger = LoggerFactory.getLogger(this.getClass());
    private final static String URL = "http://localhost:8080/solr";
    private final static Integer SOCKET_TIMEOUT = 1000; // socket read timeout
    private final static Integer CONN_TIMEOUT = 100;
    private final static Integer MAXCONN_DEFAULT = 100;
    private final static Integer MAXCONN_TOTAL = 100;
    private final static Integer MAXRETRIES = 1;
    private static CommonsHttpSolrServer server = null;
    private final static String ASC = "asc";

    public void init() throws MalformedURLException {
        server = new CommonsHttpSolrServer(URL);
        //server.setParser(new XMLResponseParser());
        server.setSoTimeout(SOCKET_TIMEOUT);
        server.setConnectionTimeout(CONN_TIMEOUT);
        server.setDefaultMaxConnectionsPerHost(MAXCONN_DEFAULT);
        server.setMaxTotalConnections(MAXCONN_TOTAL);
        server.setFollowRedirects(false);
        server.setAllowCompression(true);
        server.setMaxRetries(MAXRETRIES);
    }

    public static SolrDocumentList query(Map<String, String> property,
            Map<String, String> compositor, Integer pageSize) throws Exception {
        SolrQuery query = new SolrQuery();
        // Search fields, joined into a single query string (a setQuery()
        // call per field would keep only the last condition).
        if (null == property) {
            throw new Exception("Search fields must not be empty!");
        } else {
            StringBuffer sb = new StringBuffer();
            for (String key : property.keySet()) {
                if (sb.length() > 0) {
                    sb.append(" AND ");
                }
                // Pad the logical operators with spaces for the parser.
                String value = property.get(key).replace("AND", " AND ")
                        .replace("OR", " OR ").replace("NOT", " NOT ");
                sb.append(key).append(":").append(value);
            }
            query.setQuery(sb.toString());
        }
        // Result ordering.
        if (null != compositor) {
            for (String key : compositor.keySet()) {
                if (ASC.equals(compositor.get(key))) {
                    query.addSortField(key, SolrQuery.ORDER.asc);
                } else {
                    query.addSortField(key, SolrQuery.ORDER.desc);
                }
            }
        }
        if (null != pageSize && 0 < pageSize) {
            query.setRows(pageSize);
        }
        QueryResponse qr = server.query(query);
        return qr.getResults();
    }

    public static Map<String, String> getQueryProperty(Object obj) throws Exception {
        Map<String, String> result = new TreeMap<String, String>();
        // All declared fields of the entity class.
        Field[] fields = obj.getClass().getDeclaredFields();
        for (Field f : fields) {
            String name = f.getName(); // field name
            String type = f.getGenericType().toString();
            if ("class java.lang.String".equals(type)) { // class types read "class " + class name
                Method me = obj.getClass().getMethod("get" + upperCaseField(name));
                String tem = (String) me.invoke(obj);
                if (null != tem) {
                    result.put(name, tem);
                }
            }
        }
        return result;
    }

    public static List<Object> querySolrResult(Object propertyObj, Object compositorObj,
            Integer pageSize) throws Exception {
        Map<String, String> propertyMap = getQueryProperty(propertyObj);
        Map<String, String> compositorMap = getQueryProperty(compositorObj);
        SolrDocumentList docList = query(propertyMap, compositorMap, pageSize);
        List<Object> list = new ArrayList<Object>();
        for (Object obj : docList) {
            list.add(obj);
        }
        return list;
    }

    private static String upperCaseField(String name) {
        return name.substring(0, 1).toUpperCase() + name.substring(1);
    }

    public CommonsHttpSolrServer getServer() {
        return server;
    }

    public void setServer(CommonsHttpSolrServer server) {
        SolrHttpServer.server = server;
    }
}
Solr 1.4.1 setup and SolrJ usage
I. Basic Solr installation and configuration
1. Download the latest apache-solr-1.4.1 from an official mirror (http://apache.etoak.com//lucene/solr/) and unpack it.
2. Create a SolrHome folder to hold Solr's configuration files, e.g. D:\WORK\SolrHome on drive D.
3. In the unpacked apache-solr-1.4.1, find the solr folder under apache-solr-1.4.1\example and copy it into SolrHome.
4. Copy apache-solr-1.4.1\dist\apache-solr-1.4.1.war into Tomcat's \webapps, rename it to solr, start Tomcat so the war unpacks, then stop Tomcat.
5. In the unpacked solr application, open web.xml and set <env-entry-value> to the SolrHome directory:
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-value>D:\WORK\SolrHome\solr</env-entry-value>
  <env-entry-type>java.lang.String</env-entry-type>
</env-entry>
6. In D:\WORK\SolrHome\solr\conf, open solrconfig.xml and adjust
<dataDir>${solr.data.dir:./solr/data}</dataDir>
where solr.data.dir is the directory that holds the index.
7. For Chinese support, edit Tomcat's server.xml as follows:
<Connector port="80" protocol="HTTP/1.1"
    maxThreads="150" connectionTimeout="20000"
    redirectPort="8443" URIEncoding="UTF-8"/>
8. Start Tomcat and open http://localhost:80/solr in a browser to view the Solr server.
II. Configuring Solr replication
1. For a local test, run three Tomcat servers on one machine, e.g. on ports 80, 9888 and 9008.
2. Configure the second and third Tomcat the same way as in section I; just give each its own SolrHome directory, everything else unchanged. For example, on this machine:
Tomcat name | URL | SolrHome directory | web.xml setting
tomcat0 (master) | http://localhost:80/solr | D:\WORK\SolrHome\solr | <env-entry-value>D:\WORK\SolrHome\solr</env-entry-value>
tomcat1 (slave) | http://localhost:9888/solr | E:\WORK\SolrHome\solr | <env-entry-value>E:\WORK\SolrHome\solr</env-entry-value>
tomcat2 (slave) | http://localhost:9008/solr | F:\WORK\SolrHome\solr | <env-entry-value>F:\WORK\SolrHome\solr</env-entry-value>
3. With both steps done, add the following to the solrconfig.xml in the master's (tomcat0) SolrHome:
<requestHandler name="/replication" class="solr.ReplicationHandler" >
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>
And in the solrconfig.xml of the slaves tomcat1 and tomcat2:
<requestHandler name="/replication" class="solr.ReplicationHandler" >
  <lst name="slave">
    <str name="masterUrl">http://localhost/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
4. Create the index on tomcat0 using the solrj client (the jars ship in the apache-solr-1.4.1 archive):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import org.junit.Test;

public class SlorTest3 {
    private static CommonsHttpSolrServer server = null;
    // private static SolrServer server = null;

    public SlorTest3() {
        try {
            server = new CommonsHttpSolrServer("http://localhost/solr");
            server.setConnectionTimeout(100);
            server.setDefaultMaxConnectionsPerHost(100);
            server.setMaxTotalConnections(100);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testIndexCreate() {
        List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        for (int i = 300; i < 500; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("zjid", i); // every field must be declared in schema.xml
            doc.addField("title", "云状空化多个气泡的生长和溃灭");
            doc.addField("ssid", "ss" + i);
            doc.addField("dxid", "dx" + i);
            docs.add(doc);
        }
        try {
            server.add(docs);
            server.commit();
            System.out.println("----Index created!----");
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
5. Start the three Tomcat servers. In a browser (here localhost = 192.168.169.121), open http://localhost:9888/solr and the slave will replicate the index from the master Solr server.
III. Configuring Solr distribution (shards)
1. Run four Tomcat servers, three local and one remote:
Note: all four servers must be configured identically; in particular the schema.xml field definitions must match exactly.
Name | URL | SolrHome directory
tomcatQuery | http://localhost:80/solr | D:\WORK\SolrHome\solr
tomcat0 (shard) | http://localhost:9888/solr | E:\WORK\SolrHome\solr
tomcat1 (shard) | http://localhost:9008/solr | F:\WORK\SolrHome\solr
tomcat2 (shard) | http://192.168.169.48:9888/solr | D:\WORK\SolrHome\solr
2. The configuration is simple: only the solrconfig.xml in tomcatQuery's SolrHome needs the change below; the other Solr servers need no configuration.
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">localhost:9088/solr,localhost:9888/solr,192.168.169.48:9888/solr</str>
    <!--
    <int name="rows">10</int>
    <str name="fl">*</str>
    <str name="version">2.1</str>
    -->
  </lst>
</requestHandler>
3. Clear any existing index with solrj, or delete it manually.
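Step 3's cleanup is a one-liner in SolrJ. A sketch (server URL assumed): under the hood, deleteByQuery plus commit posts a small XML message to the /update handler, which the helper below builds by hand so the message itself is visible:

```java
public class ClearIndex {
    // The XML delete message that matches every document ("*:*").
    static String deleteAllMessage() {
        return "<delete><query>*:*</query></delete>";
    }

    public static void main(String[] args) {
        // With a live server, the SolrJ calls would be (requires the solrj jars):
        //   CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost/solr");
        //   server.deleteByQuery("*:*"); // delete every document
        //   server.commit();
        System.out.println(deleteAllMessage());
    }
}
```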
4. The following code distributes a Lucene-built index (about 1 GB, 874,400 records) proportionally across the three shard servers through solrj:
- public class IndexCreate{
- private static CommonsHttpSolrServer server;
- public CommonsHttpSolrServer getServer(String hostUrl){
- CommonsHttpSolrServer server = null;
- try {
- server = new CommonsHttpSolrServer(hostUrl);
- server.setConnectionTimeout(100);
- server.setDefaultMaxConnectionsPerHost(100);
- server.setMaxTotalConnections(100);
- } catch (IOException e) {
- System.out.println("请检查tomcat服务器或端口是否开启!");
- }
- return server;
- }
- @SuppressWarnings("deprecation")
- public void readerHostCreate(String[] hosts) throws CorruptIndexException, IOException{
- IndexReader reader = IndexReader.open("c:\\index");
- System.out.println("总记录数: "+reader.numDocs());
- int hostNum = hosts.length;
- int lengh = reader.numDocs()/hostNum; //根据主机数平分索引长度
- int j = reader.numDocs()%hostNum; //取余
- for(int i = 0;i<hosts.length;i++){
- long startTime = new Date().getTime();
- String url = hosts[i].substring(hosts[i].indexOf("//")+2,hosts[i].lastIndexOf("/"));
- System.out.println("第"+(i+1)+"次,在主机:"+url+" 上创建索引,创建时间"+new Date());
- if(i==(hosts.length-1)){
- hostlist(reader,lengh*i,lengh*(i+1)+j,hosts[i]);
- }else{
- hostlist(reader,lengh*i,lengh*(i+1),hosts[i]);
- }
- System.out.println("结束时间"+new Date());
- long endTime = new Date().getTime();
- long ms = (endTime-startTime)%60000-(((endTime-startTime)%60000)/1000)*1000;
- System.out.println("本次索引创建完毕,一共用了"+(endTime-startTime)/60000+"分" +
- ""+((endTime-startTime)%60000)/1000+"秒"+ms+"毫秒");
- System.out.println("****************************");
- }
- reader.close();
- }
@SuppressWarnings("static-access")
public void hostlist(IndexReader reader, int startLength, int endLength, String hostUrl) throws CorruptIndexException, IOException {
    List<BookIndex> beans = new LinkedList<BookIndex>();
    int count = 0;
    this.server = getServer(hostUrl);
    for (int i = startLength; i < endLength; i++) {
        Document doc = reader.document(i);
        BookIndex book = new BookIndex();
        book.setZjid(doc.getField("zjid").stringValue());
        book.setTitle(doc.getField("title").stringValue());
        book.setSsid(doc.getField("ssid").stringValue());
        book.setDxid(doc.getField("dxid").stringValue());
        book.setBookname(doc.getField("bookname").stringValue());
        book.setAuthor(doc.getField("author").stringValue());
        book.setPublisher(doc.getField("publisher").stringValue());
        book.setPubdate(doc.getField("pubdate").stringValue());
        book.setYear(doc.getField("year").stringValue());
        book.setFenlei(doc.getField("fenlei").stringValue());
        book.setScore1(doc.getField("score").stringValue());
        book.setIsbn(doc.getField("isbn").stringValue());
        book.setFenleiurl(doc.getField("fenleiurl").stringValue());
        book.setMulu(doc.getField("mulu").stringValue());
        book.setIsp(doc.getField("isp").stringValue());
        book.setIep(doc.getField("iep").stringValue());
        beans.add(book);
        if (beans.size() == 3000) { // flush every 3000 documents
            createIndex(beans, hostUrl, server);
            beans.clear();
            System.out.println("---batch " + (count + 1) + " indexed---");
            count++;
        }
    }
    System.out.println("remaining beans: " + beans.size());
    if (beans.size() > 0) {
        createIndex(beans, hostUrl, server);
        beans.clear();
    }
}
public void createIndex(List<BookIndex> beans, String hostUrl, CommonsHttpSolrServer server) {
    try {
        server.addBeans(beans);
        server.commit();
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
public static void main(String[] args) throws CorruptIndexException, IOException {
    IndexCreate as = new IndexCreate();
    String[] hosts = new String[] {"http://192.168.169.121:9888/solr","http://192.168.169.121:9088/solr","http://192.168.169.48:9888/solr"};
    long startTime = new Date().getTime();
    as.readerHostCreate(hosts);
    long endTime = new Date().getTime();
    System.out.println("-------------------");
    long ms = (endTime - startTime) % 1000; // milliseconds part
    System.out.println("All indexing finished, took " + (endTime - startTime) / 60000 + " min "
            + ((endTime - startTime) % 60000) / 1000 + " s " + ms + " ms");
}
}
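Two pieces of arithmetic in the listing above are easy to get wrong: the per-host document split (an even share per host, with the remainder going to the last host) and the chained-modulo elapsed-time formatting. As a sanity check, both can be exercised standalone; this is a sketch, not part of the original code:

```java
// Standalone checks for two pieces of arithmetic used in IndexCreate:
// 1) the per-host document split (even share, remainder to the last host)
// 2) elapsed-time formatting (the modulo chain reduces to plain division)
public class IndexMathSketch {

    // Returns {start, end} (end exclusive) of the documents host i indexes.
    static int[] rangeFor(int numDocs, int hostNum, int i) {
        int length = numDocs / hostNum;
        int remainder = numDocs % hostNum;
        int end = (i == hostNum - 1) ? length * (i + 1) + remainder : length * (i + 1);
        return new int[] { length * i, end };
    }

    // Formats an elapsed duration in milliseconds as minutes/seconds/ms.
    static String format(long elapsedMs) {
        return (elapsedMs / 60000) + "m" + ((elapsedMs % 60000) / 1000) + "s" + (elapsedMs % 1000) + "ms";
    }

    public static void main(String[] args) {
        // 10 documents over 3 hosts -> [0,3), [3,6), [6,10)
        for (int i = 0; i < 3; i++) {
            int[] r = rangeFor(10, 3, i);
            System.out.println(r[0] + ".." + r[1]);
        }
        System.out.println(format(61234)); // 1m1s234ms
    }
}
```

Note that every document in [0, numDocs) is covered exactly once, which is why the remainder must be folded into the last host's range only.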
The JavaBean class BookIndex.java is shown below.
Note: the field names must match the fields configured in schema.xml. Caution: do not use `score` as a variable or field name; it conflicts with Solr's built-in relevance score and throws an exception.
import org.apache.solr.client.solrj.beans.Field;
public class BookIndex {
    @Field
    private String zjid;
    @Field
    private String title;
    @Field
    private String ssid;
    @Field
    private String dxid;
    @Field
    private String bookname;
    @Field
    private String author;
    @Field
    private String publisher;
    @Field
    private String pubdate;
    @Field
    private String year;
    @Field
    private String fenlei;
    @Field
    private String score1;
    @Field
    private String isbn;
    @Field
    private String fenleiurl;
    @Field
    private String mulu;
    @Field
    private String isp;
    @Field
    private String iep;
    // getters and setters for all fields omitted for brevity
}
5. Start all four servers at the same time and run the code above.
6. Verify in a browser:
Open http://localhost/solr
Open http://localhost:9888/solr
Open http://localhost:9008/solr
Open http://192.168.168.48:9888/solr
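With the index spread over several hosts as above, Solr 1.4's distributed search can query all of them at once through the `shards` request parameter, whose value lists each shard as host:port/path with the http:// scheme stripped. A minimal sketch of composing that value from the host URLs (the URLs here are just the ones used above; treat this as an illustration, not part of the original code):

```java
// Builds a Solr "shards" parameter value from full shard URLs by
// stripping the http:// scheme and joining with commas, e.g.
// "192.168.169.121:9888/solr,192.168.169.121:9088/solr".
public class ShardsParam {
    static String shardsValue(String[] urls) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < urls.length; i++) {
            if (i > 0) sb.append(',');
            String u = urls[i];
            if (u.startsWith("http://")) u = u.substring("http://".length());
            sb.append(u);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] hosts = { "http://192.168.169.121:9888/solr", "http://192.168.169.121:9088/solr" };
        // With SolrJ one would then do: query.set("shards", shardsValue(hosts));
        System.out.println(shardsValue(hosts));
    }
}
```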
IV. Solr Multicore (sharding) configuration
Step 1: copy the multicore directory under apache-solr-1.4.1\example into the solr/home configured earlier.
The solr.xml under multicore is configured as follows:
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0">
      <property name="dataDir" value="/data/core0" />
    </core>
    <core name="core1" instanceDir="core1">
      <property name="dataDir" value="/data/core1" />
    </core>
    <core name="core2" instanceDir="core2">
      <property name="dataDir" value="/data/core2" />
    </core>
  </cores>
</solr>
Step 2: in Tomcat 6.0\webapps\solr\WEB-INF, change web.xml as follows:
<env-entry-value>D:\WORK\SolrHome\multicore</env-entry-value>
Step 3: start the server and open http://localhost/solr in a browser.
V. Using SolrJ
1. What SolrJ is
SolrJ is the client library for talking to a Solr server. With SolrJ you do not have to deal with the server's response format or parse documents yourself; SolrJ returns the result set (collection) for each request you send.
2. Building an index with SolrJ
Example code:
// Obtain a CommonsHttpSolrServer connected to the Solr server; all user requests go through this object:
CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:9888/solr");
public void testIndexCreate() {
    List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
    // SolrInputDocument is analogous to Lucene's Document: it creates an index document and adds fields to it
    for (int i = 300; i < 500; i++) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("zjid", i + "_id");
        doc.addField("title", i + "_title");
        doc.addField("ssid", "ss_" + i);
        doc.addField("dxid", "dx_" + i);
        docs.add(doc);
    }
    try {
        server.add(docs);
        server.commit(); // commits as an update
        System.out.println("----index created!----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Creating the index from a JavaBean:
public class BookIndex {
    @Field
    private String zjid;
    @Field
    private String zhangjie;
    @Field
    private String ssid;
    @Field
    private String qwpos;
    @Field
    private String publishDate;
    @Field
    private String mulu;
    @Field
    private String fenleiurl;
    @Field
    private String fenlei;
    @Field
    private String dxid;
    @Field
    private String author;
    @Field
    private String address;
    @Field
    private String bookname;
    …………………
}
public void testBean() {
    List<BookIndex> beans = new ArrayList<BookIndex>();
    for (int i = 0; i < 10; i++) {
        BookIndex book = new BookIndex();
        book.setZjid(i + "id");
        book.setTitle(i + "title");
        // remaining setters
        beans.add(book);
    }
    try {
        server.addBeans(beans);
        server.commit();
        System.out.println("----index created!----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
3. Common SolrJ queries
a. Query every field in the index: SolrQuery query = new SolrQuery("*:*");
Example code:
public void testQuery1() {
    SolrQuery query = new SolrQuery("*:*");
    query.setStart(20); // start offset
    query.setRows(10);  // number of rows to return
    QueryResponse response = null;
    try {
        response = server.query(query); // send the query to the server
        System.out.println(response);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<SolrDocument> docs = response.getResults(); // result set
    for (SolrDocument doc : docs) { // iterate the result set
        for (Iterator<Map.Entry<String, Object>> iter = doc.iterator(); iter.hasNext();) {
            Map.Entry<String, Object> entry = iter.next();
            System.out.print("Key: " + entry.getKey() + " ");
            System.out.println("Value: " + entry.getValue());
        }
        System.out.println("------------");
    }
}
b. Query a single field
String queryString = "zjid:5_id"; // the syntax is fieldName:value
SolrQuery query = new SolrQuery(queryString);
c. Query the copyField
The copyField is the default search field: when the query does not name a field, the request is matched against the copyField. See schema.xml for details.
String queryString = "XXX";
SolrQuery query = new SolrQuery(queryString);
4. Deleting the index
Example code:
public void testClear() {
    server.setRequestWriter(new BinaryRequestWriter()); // binary (stream) request writer for better performance
    try {
        server.deleteByQuery("*:*");
        server.commit();
        System.out.println("----index cleared----");
    } catch (SolrServerException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
5. Highlighting
public List<Book> getQueryString(String queryString, int start, int pageSize) {
    SolrQuery query = new SolrQuery(queryString);
    query.setHighlight(true); // enable the highlight component
    query.addHighlightField("mulu"); // field to highlight
    query.setHighlightSimplePre("<font color=\"red\">"); // markup
    query.setHighlightSimplePost("</font>");
    query.set("hl.usePhraseHighlighter", true);
    query.set("hl.highlightMultiTerm", true);
    query.set("hl.snippets", 3);  // three snippets, default is 1
    query.set("hl.fragsize", 50); // 50 characters per snippet, default is 100
    //
    query.setStart(start);   // start offset ... paging
    query.setRows(pageSize); // number of documents
    try {
        response = server.query(query);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<BookIndex> bookLists = response.getBeans(BookIndex.class);
    Map<String,Map<String,List<String>>> h1 = response.getHighlighting();
    List<Book> books = new ArrayList<Book>();
    for (BookIndex bookIndex : bookLists) {
        Map<String,List<String>> map = h1.get(bookIndex.getZjid());
        // the document's unique id is the key of Map<String,Map<String,List<String>>>
        Book book = new Book();
        // copy fields
        book.setBookname(bookIndex.getBookname());
        book.setZjid(bookIndex.getZjid());
        if (map.get("mulu") != null) {
            List<String> strMulu = map.get("mulu");
            StringBuffer buf = new StringBuffer();
            for (int i = 0; i < strMulu.size(); i++) {
                buf.append(strMulu.get(i));
                buf.append("...");
                if (i > 3) {
                    break;
                }
            }
            book.setSummary(buf.toString());
        } else {
            if (bookIndex.getMulu().length() > 100) {
                book.setSummary(bookIndex.getMulu().substring(0, 100) + "...");
            } else {
                book.setSummary(bookIndex.getMulu() + "...");
            }
        }
        books.add(book);
    }
    return books;
}
6. Faceting
// Note: fields used for faceting must not be tokenized, e.g. a product category (kind)
public void testFacet() {
    String queryString = "kind:儿童图书";
    SolrQuery query = new SolrQuery().setQuery(queryString);
    query.setFacet(true); // enable faceting
    query.addFacetField("bookname"); // facet field
    query.addFacetField("title");
    query.setFacetMinCount(1);
    query.addSortField("zjid", SolrQuery.ORDER.asc); // sort field
    query.setRows(10);
    QueryResponse response = null;
    try {
        response = server.query(query);
        System.out.println(response);
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    List<FacetField> facets = response.getFacetFields();
    for (FacetField facet : facets) {
        System.out.println("Facet:" + facet);
    }
}
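Conceptually, a facet over a non-tokenized field is just a frequency count of that field's values across the matching documents; each FacetField above reports value/count pairs. A minimal standalone sketch of that idea, with made-up data (not part of the original code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Counts value frequencies for one field, which is what a facet
// over a non-tokenized field reports per distinct value.
public class FacetSketch {
    static Map<String, Integer> facet(String[] fieldValues) {
        Map<String, Integer> counts = new LinkedHashMap<String, Integer>();
        for (String v : fieldValues) {
            Integer c = counts.get(v);
            counts.put(v, c == null ? 1 : c + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] kinds = { "children", "children", "science" };
        System.out.println(facet(kinds)); // {children=2, science=1}
    }
}
```

This also makes clear why the faceted field must not be tokenized: tokenizing would split each category into word fragments and the counts would be per fragment, not per category.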
VI. A simple web application
First, a note on where the index comes from: it is built on top of an existing Lucene index. Using that Lucene index with SolrJ directly caused many problems with no obvious fix, such as all-field search, highlighting, faceting, and multi-level index directories; an index created through SolrJ has none of these issues.
The overall approach: read the existing Lucene index, rebuild it through SolrJ, distribute it across several machines, and then develop against that.
Step 1: read the multi-level Lucene index directories and use SolrJ to create and distribute the index.
Note: increase the JVM heap size, because a map is used as the cache; the larger the heap, the more index documents the map can hold. This guards against OutOfMemoryError.
Code:
package org.readerIndex;
import org.apache.solr.client.solrj.beans.Field;
public class BookIndex2 {
    @Field
    private String zjid;
    @Field
    private String zhangjie;
    @Field
    private String ssid;
    @Field
    private String qwpos;
    @Field
    private String publishDate;
    @Field
    private String mulu;
    @Field
    private String fenleiurl;
    @Field
    private String fenlei;
    @Field
    private String dxid;
    @Field
    private String author;
    @Field
    private String address;
    @Field
    private String bookname;
    public String getZjid() {
        return zjid;
    …………………………………………
}
package org.readerIndex;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
public class ReaderIndex {
    public CommonsHttpSolrServer getServer(String hostUrl) {
        CommonsHttpSolrServer server = null;
        try {
            server = new CommonsHttpSolrServer(hostUrl);
            server.setConnectionTimeout(100);
            server.setDefaultMaxConnectionsPerHost(100);
            server.setMaxTotalConnections(100);
        } catch (IOException e) {
            System.out.println("Check that the Tomcat server and its port are running!");
        }
        return server;
    }
    public void indexDocuements(String path, String[] hostUrls) throws CorruptIndexException, LockObtainFailedException, IOException {
        File pareFile = new File(path);
        List<String> list = new ArrayList<String>();
        getFile(pareFile, list); // recursively collect index directory paths into list
        System.out.println("*** found " + list.size() + " index directories ***");
        int averageSize = list.size() / hostUrls.length; // split the directories evenly across hosts
        int remainSize = list.size() % hostUrls.length;  // remainder
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        for (int i = 0; i < hostUrls.length; i++) {
            Date startDate = new Date();
            String url = hostUrls[i].substring(hostUrls[i].indexOf("//") + 2, hostUrls[i].lastIndexOf("/"));
            System.out.println("Pass " + (i + 1) + ": creating index on host " + url + ", started at " + sdf.format(startDate));
            if (i == (hostUrls.length - 1)) {
                list(list, averageSize * i, averageSize * (i + 1) + remainSize, hostUrls[i]);
            } else {
                list(list, averageSize * i, averageSize * (i + 1), hostUrls[i]);
            }
            Date endDate = new Date();
            System.out.println("Pass finished at: " + sdf.format(endDate));
        }
    }
    public void list(List<String> list, int start, int end, String url) {
        CommonsHttpSolrServer server = getServer(url);
        for (int j = start; j < end; j++) {
            try {
                long startMs = System.currentTimeMillis();
                hostCreate(list.get(j), server);
                long endMs = System.currentTimeMillis();
                System.out.println("Directory " + (j + 1) + " done, path: " + list.get(j) + ", took " + (endMs - startMs) + "ms");
            } catch (CorruptIndexException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    public void getFile(File fileDirectory, List<String> list) {
        if (fileDirectory.isDirectory()) {
            File[] files = fileDirectory.listFiles();
            for (File file : files) {
                getFile(file, list);
            }
        } else if (fileDirectory.isFile()) {
            String filePath = fileDirectory.getPath();
            String path = filePath.replace('\\', '/');
            if (path.endsWith(".cfs")) {
                int lastIndex = path.lastIndexOf("/");
                String directory = path.substring(0, lastIndex);
                list.add(directory);
            }
        }
    }
    @SuppressWarnings("deprecation")
    public void hostCreate(String directory, CommonsHttpSolrServer server) throws CorruptIndexException, IOException {
        IndexReader reader = IndexReader.open(directory);
        List<BookIndex2> beans = new ArrayList<BookIndex2>();
        for (int i = 0; i < reader.numDocs(); i++) {
            Document doc = reader.document(i);
            BookIndex2 book = new BookIndex2();
            book.setZjid(doc.getField("zjid").stringValue());
            book.setAddress(doc.getField("address").stringValue());
            book.setAuthor(doc.getField("author").stringValue());
            book.setBookname(doc.getField("bookname").stringValue());
            book.setDxid(doc.getField("dxid").stringValue());
            book.setFenlei(doc.getField("fenlei").stringValue());
            book.setFenleiurl(doc.getField("fenleiurl").stringValue());
            book.setMulu(doc.getField("mulu").stringValue());
            book.setPublishDate(doc.getField("publishDate").stringValue());
            book.setQwpos(doc.getField("qwpos").stringValue());
            book.setSsid(doc.getField("ssid").stringValue());
            book.setZhangjie(doc.getField("zhangjie").stringValue());
            beans.add(book);
        }
        createIndex(beans, server);
        beans.clear();
        reader.close();
    }
    public void createIndex(List<BookIndex2> beans, CommonsHttpSolrServer server) {
        try {
            server.addBeans(beans);
            server.commit();
            // server.optimize();
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    public static void main(String[] args) throws CorruptIndexException, IOException {
        ReaderIndex reader = new ReaderIndex();
        String path = "D:\\91";
        String[] hosts = new String[] {"http://192.168.169.121:9888/solr","http://192.168.169.121:9088/solr","http://192.168.169.48:9888/solr"};
        long startTime = new Date().getTime();
        reader.indexDocuements(path, hosts);
        long endTime = new Date().getTime();
        System.out.println("-------------------");
        long ms = (endTime - startTime) % 1000; // milliseconds part
        System.out.println("All documents indexed, took " + (endTime - startTime) / 60000 + " min "
                + ((endTime - startTime) % 60000) / 1000 + " s " + ms + " ms");
    }
}
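Before pointing this at the real index root (D:\91), the recursive .cfs scan in getFile can be exercised against a throwaway directory tree. A minimal sketch (the temp-file names here are arbitrary, not from the original code):

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Exercises the same .cfs-directory scan as ReaderIndex.getFile()
// against a temporary directory tree created on the fly.
public class CfsScan {
    static void getFile(File fileDirectory, List<String> list) {
        if (fileDirectory.isDirectory()) {
            for (File file : fileDirectory.listFiles()) {
                getFile(file, list);
            }
        } else if (fileDirectory.isFile()) {
            String path = fileDirectory.getPath().replace('\\', '/');
            if (path.endsWith(".cfs")) {
                // record the directory that holds the compound segment file
                list.add(path.substring(0, path.lastIndexOf('/')));
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // build tmpRoot/seg0/{_0.cfs, segments.gen}
        File root = File.createTempFile("idx", "");
        root.delete();
        File seg = new File(root, "seg0");
        seg.mkdirs();
        new File(seg, "_0.cfs").createNewFile();
        new File(seg, "segments.gen").createNewFile();
        List<String> dirs = new ArrayList<String>();
        getFile(root, dirs);
        System.out.println(dirs.size()); // 1: only the directory holding the .cfs
    }
}
```

Note that each directory is added once per .cfs file it contains; with Lucene's compound file format there is normally one per segment, which is what indexDocuements then hands to hostCreate.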
Step 2: build a simple web application on top of the index.