Solr(10)Async and Multiple Thread on SolrJ
1. Async Operation
I tried using an async HTTP client here, but it does not seem to help much.
import jp.sf.amateras.solr.scala.async.AsyncSolrClient
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}

def addJobs(jobs: List[Job]): Unit = {
  // Fire off one async add per job, then block until the whole batch completes
  val futures = jobs.map(job => solrAsyncClient.add(job))
  Await.ready(Future.sequence(futures), Duration.Inf)
}
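The pattern here, fire off every request and then block until all of them complete, is independent of Solr. A minimal sketch of the same await-all idea in plain Java, where a trivial string-length task stands in for the solrAsyncClient.add call (the task and class name are illustrative, not from the original code):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AwaitAllDemo {
    // Kick off one async task per item, then wait for all of them to finish.
    static int processAll(List<String> items) {
        List<CompletableFuture<Integer>> futures = items.stream()
            .map(item -> CompletableFuture.supplyAsync(item::length))
            .collect(Collectors.toList());
        // Equivalent of Future.sequence + Await.ready: join every future,
        // then combine the results.
        return futures.stream().mapToInt(CompletableFuture::join).sum();
    }

    public static void main(String[] args) {
        System.out.println(processAll(List.of("solr", "async", "client")));
    }
}
```

Note that this only parallelizes the HTTP round-trips on the client side; if the server is the bottleneck, firing requests concurrently gains little, which matches the result observed above.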
The test code is as follows; it helps, but not by much.
val num = 10000
val batch = 350
val jobs = ListBuffer[Job]()
val start = System.currentTimeMillis()
for (i <- 1 to num) {
  val job = Job("id" + i, "title" + i, "desc" + i, "industry" + i)
  jobs += job
  if (i % batch == 0) {
    SolrClientDAO.addJobs(jobs.toList)
    jobs.clear()
  }
}
// Flush the remainder: 10000 is not divisible by 350, so 200 jobs are left over
if (jobs.nonEmpty) {
  SolrClientDAO.addJobs(jobs.toList)
  jobs.clear()
}
val end = System.currentTimeMillis()
println("total time for " + num + " is " + (end - start))
// Multiply first so integer division does not truncate the duration to seconds
println("it is " + (num * 1000 / (end - start)) + " jobs/second")
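One thing to watch in a loop like this: when num is not a multiple of batch, the jobs still sitting in the buffer after the last full batch must be flushed explicitly, or they are silently dropped. The batching logic can be checked in isolation; here is a small self-contained Java sketch (plain integers stand in for jobs, and the names are mine, not from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // Split items into fixed-size batches; the last batch holds the remainder.
    static List<List<Integer>> batched(List<Integer> items, int batchSize) {
        List<List<Integer>> batches = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (Integer item : items) {
            current.add(item);
            if (current.size() == batchSize) {
                batches.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {   // flush the remainder batch
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 1; i <= 10000; i++) items.add(i);
        List<List<Integer>> batches = batched(items, 350);
        // 28 full batches of 350, plus one remainder batch of 200
        System.out.println(batches.size() + " batches, last has "
            + batches.get(batches.size() - 1).size());
    }
}
```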
2. SolrJ with Latest Version
package com.sillycat.jobsconsumer.persistence

import com.sillycat.jobsconsumer.models.Job
import com.sillycat.jobsconsumer.utilities.{IncludeConfig, IncludeLogger}
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient
import org.apache.solr.common.SolrInputDocument

import scala.collection.JavaConverters._

/**
 * Created by carl on 8/7/15.
 */
object SolrJDAO extends IncludeLogger with IncludeConfig {

  private val solrClient = {
    try {
      logger.info("Init the SOLR Client ---------------")
      val solrURL = config.getString(envStr("solr.url.jobs"))
      logger.info("SOLR URL = " + solrURL)
      // queue size 100000, 100 threads draining the queue to the server
      val client = new ConcurrentUpdateSolrClient(solrURL, 100000, 100)
      client
    } catch {
      case x: Throwable =>
        logger.error("Couldn't connect to SOLR: " + x)
        null
    }
  }

  def releaseResource(): Unit = {
    if (solrClient != null) {
      solrClient.close()
    }
  }

  def addJobs(jobs: List[Job]): Unit = {
    val docs = jobs.map { job =>
      val doc = new SolrInputDocument()
      doc.addField("id", job.id)
      doc.addField("title", job.title)
      doc.addField("desc", job.desc)
      doc.addField("industry", job.industry)
      doc
    }
    solrClient.add(docs.asJava)
  }

  def commit(): Unit = {
    solrClient.commit()
  }
}
The test class is as follows:
package com.sillycat.jobsconsumer.persistence

import com.sillycat.jobsconsumer.models.Job
import com.sillycat.jobsconsumer.utilities.IncludeConfig
import org.scalatest.{BeforeAndAfterAll, FunSpec, Matchers}

import scala.collection.mutable.ListBuffer

/**
 * Created by carl on 8/7/15.
 */
class SolrJDAOSpec extends FunSpec with Matchers with BeforeAndAfterAll with IncludeConfig {

  override def beforeAll() {
    if (config.getString("build.env").equals("test")) {
      // nothing to prepare yet
    }
  }

  override def afterAll() {
  }

  describe("SolrDAO") {
    describe("#add and query") {
      it("adds jobs to Solr in batches") {
        val num = 1000000
        val batch = 300
        val jobs = ListBuffer[Job]()
        val start = System.currentTimeMillis()
        for (i <- 1 to num) {
          val job = Job("id" + i, "title" + i, "desc" + i, "industry" + i)
          jobs += job
          if (i % batch == 0) {
            SolrJDAO.addJobs(jobs.toList)
            jobs.clear()
          }
        }
        // Flush the remainder: 1000000 % 300 leaves 100 jobs unsent
        if (jobs.nonEmpty) {
          SolrJDAO.addJobs(jobs.toList)
          jobs.clear()
        }
        val end = System.currentTimeMillis()
        println("total time for " + num + " is " + (end - start))
        // Multiply first so integer division does not truncate to seconds
        println("it is " + (num * 1000 / (end - start)) + " jobs/second")
        SolrJDAO.commit
      }
    }
  }
}
The result is amazing:
INFO [pool-4-thread-2-ScalaTest-running-SolrJDAOSpec] 2015-08-07 16:04:10,009 SolrJDAO.scala (line 24) Init the SOLR Client ---------------
INFO [pool-4-thread-2-ScalaTest-running-SolrJDAOSpec] 2015-08-07 16:04:10,012 SolrJDAO.scala (line 26) SOLR URL = http://ubuntu-master:8983/solr/jobs
total time for 1000000 is 8502
it is 125000 jobs/second
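A note on the printed rate: (end - start) / 1000 truncates 8502 ms down to 8 s before dividing, so 125000 jobs/second is slightly inflated; dividing in milliseconds gives roughly 117,619 jobs/second. A quick check of the arithmetic:

```java
public class RateCheck {
    public static void main(String[] args) {
        long num = 1000000L;
        long millis = 8502L;
        // Integer division truncates 8502 ms to 8 s, as in the log output
        long truncated = num / (millis / 1000);
        // Multiplying first keeps the real duration in milliseconds
        long actual = num * 1000 / millis;
        System.out.println(truncated + " vs " + actual);  // 125000 vs 117619
    }
}
```

Either way the throughput is far beyond the async client attempt, which is expected: ConcurrentUpdateSolrClient buffers documents in a local queue and streams them to the server from a pool of background threads, instead of waiting on individual HTTP round-trips.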
3. Try to Build the Driver myself
Tips
Raise the open file limit on Ubuntu and Mac OS:
sudo sh -c "ulimit -n 65535 && exec su carl"
sudo ulimit -n 10000
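Since ulimit is a shell builtin (so running it under sudo may not behave as expected on every system), it is worth checking what the current shell actually allows:

```shell
# Soft limit: the current cap on open file descriptors for this shell
ulimit -n
# Hard limit: the ceiling the soft limit can be raised to without root
ulimit -Hn
```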
References:
http://sillycat.iteye.com/blog/2233708
http://www.cnblogs.com/wgp13x/p/3742653.html
http://blog.csdn.net/john_f_lau/article/details/8780013