Java Generate/Merge Files(1)Redis Content Fetch
Fetch Data from Redis
https://github.com/xetorthio/jedis
https://github.com/lettuce-io/lettuce-core
I first used lettuce, but it is not as well documented as Jedis, and I could not decompress the data I had gzcompress'ed in PHP, so I switched to Jedis in the end.
Some lettuce-core code for future reference.
package com.j2c.feeds2g.services;

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import com.lambdaworks.redis.RedisFuture;
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.ValueScanCursor;
import com.lambdaworks.redis.api.async.RedisSetAsyncCommands;
import com.lambdaworks.redis.api.async.RedisStringAsyncCommands;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;

public class RedisLettuceServiceImpl implements RedisService {

    RedisClusterClient redisClusterClient;

    public void processJobsBySource(Integer sourceID) {
        StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect();
        RedisStringAsyncCommands<String, String> async = connection.async();
        RedisFuture<String> range = async.getrange("source_referencenumbers_1299", 0, 10);
        try {
            range.get(10, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }
    }

    public void processJobsByBucket(String bucketJobIDs) {
    }

    public void setRedisClusterClient(RedisClusterClient redisClusterClient) {
        this.redisClusterClient = redisClusterClient;
    }

    public static void main(String[] args) {
        RedisURI redisUri = RedisURI.Builder.redis("stage-jobs-c.cache.amazonaws.com").build();
        RedisClusterClient clusterClient = RedisClusterClient.create(redisUri);
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        RedisSetAsyncCommands<String, String> setAsync = connection.async();
        // scan the set of reference numbers for source 1299
        RedisFuture<ValueScanCursor<String>> results = setAsync.sscan("source_referencenumbers_1299");
        RedisStringAsyncCommands<String, String> async = connection.async();
        try {
            ValueScanCursor<String> contents = results.get(10, TimeUnit.SECONDS);
            List<String> jobIDs = contents.getValues();
            System.out.println(jobIDs);
            String jobID = "jobinfo_1299_" + jobIDs.get(0);
            RedisFuture<String> results2 = async.get(jobID);
            System.out.println("key = " + jobID);
            String jobInfo = results2.get(10, TimeUnit.SECONDS);
            System.out.println("raw job info = " + jobInfo);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }
    }
}
<dependency>
    <groupId>biz.paluch.redis</groupId>
    <artifactId>lettuce</artifactId>
    <version>4.3.1.Final</version>
</dependency>
Next, clean up and unit-test the Jedis code. For Jedis, I am using JedisCluster. I wanted to use JedisPool as well, but JedisPool does not support JedisCluster, so I am implementing a pool myself on top of the commons object pool.
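Since JedisPool only pools plain Jedis connections, a pool of JedisCluster handles has to be rolled by hand. A minimal sketch using a bounded blocking queue is below; SimplePool and its method names are illustrative, not a Jedis or Commons Pool API, and in practice the Supplier would create JedisCluster instances:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal fixed-size object pool (hypothetical helper, not a Jedis class).
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(Supplier<T> factory, int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // eagerly create all pooled objects
        }
    }

    // Borrow an object, waiting up to the timeout; returns null on timeout.
    T borrow(long timeoutMillis) {
        try {
            return idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // Return the object so other callers can reuse it.
    void giveBack(T obj) {
        idle.offer(obj);
    }
}
```

A real pool would also validate connections on borrow and handle broken ones; this only shows the borrow/return mechanics.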
Some JedisPool-related code:
package com.j2c.feeds2g.services;

import java.util.HashSet;
import java.util.Set;

import com.j2c.feeds2g.commons.utils.CompressUtil;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class RedisJRedisServiceImpl implements RedisService {

    public void processJobsBySource(Integer sourceID) {
    }

    public void processJobsByBucket(String bucketJobIDs) {
    }

    public static void main(String[] args) {
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
        // JedisCluster will attempt to discover the other cluster nodes automatically
        jedisClusterNodes.add(new HostAndPort("stage-jobs-c.pnuura.clustercfg.use1.cache.amazonaws.com", 6379));
        JedisCluster jedis = new JedisCluster(jedisClusterNodes);

        // Pooled, non-cluster alternative kept for reference:
        // JedisPoolConfig poolConfig = new JedisPoolConfig();
        // poolConfig.setMaxTotal(1000);
        // poolConfig.setMaxIdle(10);
        // poolConfig.setMinIdle(1);
        // poolConfig.setMaxWaitMillis(30000);
        // JedisPool jedisPool = new JedisPool(poolConfig, "stage-jobs-c.pnuura.com", 6379, 1000);
        // Jedis jedis = jedisPool.getResource();

        // SSCAN the reference-number set in pages of 100 until the cursor returns to "0":
        // ScanParams scanParams = new ScanParams().count(100);
        // String cursor = "0";
        // int count = 0;
        // do {
        //     count = count + 100;
        //     ScanResult<String> scanResult = jedis.sscan("source_referencenumbers_1299", cursor, scanParams);
        //     cursor = scanResult.getStringCursor();
        //     List<String> jobIDs = scanResult.getResult();
        //     if (!jobIDs.isEmpty()) {
        //         System.out.println(count + " " + jobIDs.get(0));
        //     }
        // } while (!"0".equals(cursor));

        // Fetch one job and decompress the payload that PHP gzcompress'ed
        String jobID = "jobinfo_1299_1679_5e36e4a78e5cc158cd48de067ef085e0";
        byte[] jobinfo = jedis.get(jobID.getBytes());
        byte[] jobinfo2 = CompressUtil.gzuncompress(jobinfo);
        String jobinfo3 = new String(jobinfo2);
        System.out.println(jobinfo3);
    }
}
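CompressUtil.gzuncompress above is our own helper. For reference, PHP's gzcompress emits a zlib (RFC 1950) stream, which java.util.zip.Inflater reads directly; a minimal sketch of such a helper might look like this (ZlibUtil is an illustrative name, not the actual CompressUtil API):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Sketch of a decompressor for zlib data, as produced by PHP's gzcompress().
class ZlibUtil {
    static byte[] gzuncompress(byte[] compressed) {
        Inflater inflater = new Inflater(); // default mode expects a zlib header
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream(compressed.length * 4);
        byte[] buffer = new byte[4096];
        try {
            while (!inflater.finished()) {
                int n = inflater.inflate(buffer); // decompress a chunk
                if (n == 0 && inflater.needsInput()) {
                    break; // truncated input; avoid spinning forever
                }
                out.write(buffer, 0, n);
            }
        } catch (DataFormatException e) {
            throw new IllegalArgumentException("not a zlib stream", e);
        } finally {
            inflater.end();
        }
        return out.toByteArray();
    }
}
```

Note this is for gzcompress (zlib), not gzencode (gzip); a gzip payload would need GZIPInputStream instead.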
References:
Spring Batch and TaskScheduler:
http://www.mkyong.com/spring-batch/spring-batch-and-spring-taskscheduler-example/
PHP gzuncompress in Java:
http://type.so/java/php-gzuncompress-in-java.html
Spring with Jedis:
http://docs.spring.io/spring-data/redis/docs/1.8.1.RELEASE/reference/html/
http://projects.spring.io/spring-data-redis/#quick-start
Logging setup:
https://wiki.base22.com/display/btg/How+to+setup+SLF4J+and+LOGBack+in+a+web+app+-+fast