
1. Create a script that builds the cluster: cluster-start.sh

./redis-trib.rb create --replicas 1 192.168.58.101:7000 192.168.58.101:7001 192.168.58.101:7002 192.168.58.102:7000 192.168.58.102:7001 192.168.58.102:7002
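
With --replicas 1, redis-trib picks three of the six listed nodes as masters and attaches one replica to each of them. Note that redis-trib.rb is the Ruby helper shipped with the Redis source (under src/), so the machine running this script needs a Ruby runtime plus the redis gem, installed once with:

gem install redis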

2. Script that starts each redis node on a machine: servers-start.sh

#!/bin/sh
# For each node directory: remove any stale data and cluster state left over from a
# previous run, then start the node (redis.conf is assumed to set "daemonize yes",
# so each redis-server call returns immediately).
cd 7000
rm -rf appendonly.aof
rm -rf dump.rdb
rm -rf nodes.conf
redis-server redis.conf

cd ..
cd 7001
rm -rf appendonly.aof
rm -rf dump.rdb
rm -rf nodes.conf
redis-server redis.conf

cd ..
cd 7002
rm -rf appendonly.aof
rm -rf dump.rdb
rm -rf nodes.conf
redis-server redis.conf
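
The three blocks above differ only in the port number, so the same script can also be written as a loop; a minimal equivalent sketch, assuming the same per-port directory layout:

#!/bin/sh
# servers-start.sh, loop form: clean stale state and start nodes 7000-7002.
for port in 7000 7001 7002; do
    cd "$port" || exit 1
    rm -rf appendonly.aof dump.rdb nodes.conf
    redis-server redis.conf
    cd ..
done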

   

 

3. Cluster test procedure:

   

[root@localhost redis-cluster]# sh cluster-start.sh 
>>> Creating cluster
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.102:7000: OK
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.58.102:7000
192.168.58.101:7000
192.168.58.102:7001
Adding replica 192.168.58.101:7001 to 192.168.58.102:7000
Adding replica 192.168.58.102:7002 to 192.168.58.101:7000
Adding replica 192.168.58.101:7002 to 192.168.58.102:7001
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
S: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.58.101:7000)
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots: (0 slots) master
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
M: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) master
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) master
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

./redis-trib.rb check 192.168.58.102:7000    # checks cluster status; the ip:port after "check" can be any node inside the cluster

[root@localhost redis-cluster]# ./redis-trib.rb check 192.168.58.102:7000 
Connecting to node 192.168.58.102:7000: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7000)
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
S: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots: (0 slots) slave
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) slave
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

I set up 6 nodes across two machines; in cluster mode, redis-trib automatically made three of them masters and hung one slave under each master.
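
A quick way to sanity-check the cluster at this point is to write and read a key through any node with redis-cli in cluster mode (-c makes the client follow MOVED redirections); a small sketch, assuming the node at 192.168.58.101:7000 is up (the write may be redirected to whichever master owns the key's hash slot):

redis-cli -c -h 192.168.58.101 -p 7000
192.168.58.101:7000> set foo bar
192.168.58.101:7000> get foo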

4. Simulate a master node going down (I simply kill the node's process with the kill command).
The target is the master at 192.168.58.102:7000, id d5ec4dee922385007f09005d0ef24024f3d513a3. Under Redis's master/slave failover, when a master dies one of its slaves is promoted to take its place,
so its slave 514548a9a01d7e125d716fd51d9ffd36165a2647 should become the new master.

On the 192.168.58.102 host:
[root@localhost redis-cluster]# ps -ef | grep redis
root      2460     1  0 14:00 ?        00:00:03 redis-server *:7000 [cluster]
root      2465     1  0 14:00 ?        00:00:03 redis-server *:7001 [cluster]
root      2474     1  0 14:00 ?        00:00:03 redis-server *:7002 [cluster]
root      2572  2378  0 14:16 pts/0    00:00:00 grep redis
[root@localhost redis-cluster]# kill 2460
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7000 
Connecting to node 192.168.58.102:7000: [ERR] Sorry, can't connect to node 192.168.58.102:7000
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7000 
Connecting to node 192.168.58.102:7000: [ERR] Sorry, can't connect to node 192.168.58.102:7000
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7002
Connecting to node 192.168.58.102:7002: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7001: OK
>>> Performing Cluster Check (using node 192.168.58.102:7002)
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) slave
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

As expected: node d5ec4dee922385007f09005d0ef24024f3d513a3, whose process was killed, has disappeared from the cluster, and its slave
514548a9a01d7e125d716fd51d9ffd36165a2647 has been promoted to master. Also, once the process is killed, connecting to that node directly fails with a connection error, as the redis-trib output above shows.
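
The promotion can also be confirmed without redis-trib by asking any surviving node directly:

redis-cli -h 192.168.58.102 -p 7001 cluster nodes    # lists every node with its current master/slave role
redis-cli -h 192.168.58.102 -p 7001 cluster info     # cluster_state should still report ok at this point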





Next, kill the master at 192.168.58.101:7000, id aec976f33acd4971cf5e087ceaf2b5e606c56f36.
On the 192.168.58.101 host:
[root@localhost src]# ps -ef | grep redis
root      2483     1  0 14:00 ?        00:00:13 redis-server *:7000 [cluster]
root      2488     1  0 14:00 ?        00:00:13 redis-server *:7001 [cluster]
root      2497     1  0 14:00 ?        00:00:13 redis-server *:7002 [cluster]
root      3001  2352  0 14:44 pts/0    00:00:00 grep redis
[root@localhost src]# kill 2483


[root@localhost src]# ./redis-trib.rb check  192.168.58.102:7001
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7001)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

As expected, its slave 990c7f1b44034646cacb51a7754668ee5ada6005 became the new master.

Conclusion: as long as the killed master still has a slave, losing it does not affect the cluster.
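
At this point the master at 192.168.58.101:7001 (514548a9a01d7e125d716fd51d9ffd36165a2647) has no slave left and is therefore a single point of failure. In a real deployment you would normally attach a fresh replica to it before anything else fails; a hedged sketch using redis-trib's add-node, where the new node 192.168.58.101:7003 is hypothetical and would first need its own directory and redis.conf:

./redis-trib.rb add-node --slave --master-id 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7003 192.168.58.102:7001

The test below deliberately skips this step and kills the slave-less master as it is.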

Next, kill the master at 192.168.58.101:7001 (id 514548a9a01d7e125d716fd51d9ffd36165a2647), which has no slave of its own.
On the 192.168.58.101 host:
[root@localhost src]# ps -ef | grep redis
root      2488     1  0 14:00 ?        00:00:15 redis-server *:7001 [cluster]
root      2497     1  0 14:00 ?        00:00:15 redis-server *:7002 [cluster]
root      3029  2352  0 14:49 pts/0    00:00:00 grep redis
[root@localhost src]# kill 2488
[root@localhost src]# ./redis-trib.rb check  192.168.58.102:7001
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7001)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
[root@localhost src]# redis-cli -c  -h 192.168.58.101 -p 7001
Could not connect to Redis at 192.168.58.101:7001: Connection refused
not connected> set b
[root@localhost src]# redis-cli -c  -h 192.168.58.102 -p 7001
192.168.58.102:7001> set b c
(error) CLUSTERDOWN The cluster is down. Use CLUSTER INFO for more information
192.168.58.102:7001> 

The cluster is now down: some hash slots are no longer covered by any node, and writing through any node fails with the (error) CLUSTERDOWN The cluster is down error (by default cluster-require-full-coverage is yes, so the whole cluster stops serving as soon as part of the slot space is uncovered).
I have not found a good way to recover from this yet; the only option seems to be to delete every file except redis.conf in each node directory, kill all the node processes, restart the services, and rebuild the cluster.
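
In terms of the scripts above, that recovery amounts to roughly the following on each of the two machines (a minimal sketch, assuming pkill is available; servers-start.sh already deletes appendonly.aof, dump.rdb and nodes.conf before restarting):

pkill -f 'redis-server'    # stop every local redis node
sh servers-start.sh        # wipe stale state and restart nodes 7000-7002

and then, from one machine only:

sh cluster-start.sh        # rebuild the cluster from scratch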
