Redis 3.0.6 Cluster Installation


A working cluster needs at least 3 master nodes. Here we create 6 Redis nodes on a single host (172.16.5.240 in the transcripts that follow): three masters and three slaves. The nodes' IP:port pairs are:

172.16.5.240:6379
172.16.5.240:6380
172.16.5.240:6381
172.16.5.240:6382
172.16.5.240:6383
172.16.5.240:6384

 

1. Install Redis

 

wget http://download.redis.io/releases/redis-3.0.6.tar.gz
tar -zxvf redis-3.0.6.tar.gz
mv redis-3.0.6 redis
mv redis /data/setup/
cd /data/setup/redis
make
make install

 

 

2. Install the Ruby environment

# redis-trib.rb is written in Ruby and needs the redis gem
yum install ruby
yum install rubygems
gem install redis

 

 

3. Create the cluster directories

cd /data/setup/redis/
mkdir cluster
cd cluster
mkdir 6379
mkdir 6380
mkdir 6381
mkdir 6382
mkdir 6383
mkdir 6384
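Equivalently, since 6379 to 6384 is a contiguous range, bash brace expansion can create all six directories in one command (run from /data/setup/redis):

```shell
# Same result as the six mkdir calls above, in one command;
# {6379..6384} expands to the contiguous port range (bash only).
mkdir -p cluster/{6379..6384}
```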

 

 

4. Edit the configuration file redis.conf

cp /data/setup/redis/redis.conf /data/setup/redis/cluster/
vi redis.conf
# change the following options in the file:

daemonize yes                     # run in the background
port 6379
pidfile /data/setup/redis/cluster/6379/redis-6379.pid
dbfilename dump-6379.rdb
dir /data/setup/redis/cluster/6379/
logfile /data/setup/redis/cluster/6379/redis-6379.logs
cluster-enabled yes               # run this instance in cluster mode
cluster-config-file nodes.conf    # maintained by Redis itself, stored in dir above
cluster-node-timeout 15000        # ms of unreachability before a node is considered failing
appendonly yes                    # enable AOF persistence

 

# After editing, copy the file into each instance directory, naming it
# after the port so it matches the startup commands below
cp redis.conf /data/setup/redis/cluster/6379/redis-6379.conf
cp redis.conf /data/setup/redis/cluster/6380/redis-6380.conf
cp redis.conf /data/setup/redis/cluster/6381/redis-6381.conf
cp redis.conf /data/setup/redis/cluster/6382/redis-6382.conf
cp redis.conf /data/setup/redis/cluster/6383/redis-6383.conf
cp redis.conf /data/setup/redis/cluster/6384/redis-6384.conf
# Note: after copying, change port, pidfile, dbfilename, dir and logfile
# in each of the 6380-6384 files to use that instance's port
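Hand-editing six copies invites typos. As a sketch, the per-port files can be stamped out from a single template instead; the `__PORT__` placeholder and the `.tmpl` filename are conventions of this sketch, not anything Redis defines. Run from /data/setup/redis so that `cluster/` is the directory created earlier:

```shell
# Write one template with a __PORT__ placeholder, then substitute the
# real port into every per-instance copy. BASE defaults to ./cluster;
# point it elsewhere to try this in a scratch directory.
BASE="${BASE:-cluster}"
mkdir -p "${BASE}"
cat > "${BASE}/redis.conf.tmpl" <<'EOF'
daemonize yes
port __PORT__
pidfile /data/setup/redis/cluster/__PORT__/redis-__PORT__.pid
dbfilename dump-__PORT__.rdb
dir /data/setup/redis/cluster/__PORT__/
logfile /data/setup/redis/cluster/__PORT__/redis-__PORT__.logs
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
EOF
for PORT in 6379 6380 6381 6382 6383 6384; do
    mkdir -p "${BASE}/${PORT}"
    # sed rewrites the port everywhere it appears: port, pidfile,
    # dbfilename, dir and logfile in one pass
    sed "s/__PORT__/${PORT}/g" "${BASE}/redis.conf.tmpl" \
        > "${BASE}/${PORT}/redis-${PORT}.conf"
done
```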

 

5. Start each Redis instance

redis-server /data/setup/redis/cluster/6379/redis-6379.conf
redis-server /data/setup/redis/cluster/6380/redis-6380.conf
redis-server /data/setup/redis/cluster/6381/redis-6381.conf
redis-server /data/setup/redis/cluster/6382/redis-6382.conf
redis-server /data/setup/redis/cluster/6383/redis-6383.conf
redis-server /data/setup/redis/cluster/6384/redis-6384.conf
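A quick way to confirm all six instances are listening, using bash's built-in `/dev/tcp` pseudo-device so no extra tools are needed; `check_ports` is a helper name of our own:

```shell
# Probe each cluster port; opening /dev/tcp/host/port succeeds only
# when something is listening there. Prints up/down per port.
check_ports() {
    for PORT in 6379 6380 6381 6382 6383 6384; do
        if (exec 3<>"/dev/tcp/127.0.0.1/${PORT}") 2>/dev/null; then
            echo "port ${PORT}: up"
        else
            echo "port ${PORT}: down"
        fi
    done
}
check_ports
```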

 

 

6. Create the cluster

   # --replicas 1 tells redis-trib how many slave nodes to attach to each master

 

[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb create --replicas 1 172.16.5.240:6379 172.16.5.240:6380 172.16.5.240:6381 172.16.5.240:6382 172.16.5.240:6383 172.16.5.240:6384

 

>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.5.240:6379
172.16.5.240:6380
172.16.5.240:6381
Adding replica 172.16.5.240:6382 to 172.16.5.240:6379
Adding replica 172.16.5.240:6383 to 172.16.5.240:6380
Adding replica 172.16.5.240:6384 to 172.16.5.240:6381
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
S: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   replicates 89d58edb12b775a5be489690b1955990271af896
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 172.16.5.240:6379)
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots: (0 slots) master
   replicates 89d58edb12b775a5be489690b1955990271af896
M: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) master
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
M: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) master
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

 

 

 

7. Check the cluster state

[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb check 172.16.5.240:6379

 

>>> Performing Cluster Check (using node 172.16.5.240:6379)
S: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots: (0 slots) slave
   replicates 7d138fe67343c13be4b78fe6a969088b08d48cc0
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) slave
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) slave
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Note that 6379 is now reported as a slave of 6382, the reverse of the creation output: a failover evidently promoted the replica in the meantime. Slot coverage is unaffected.

 

 

 

 

8. Client operations

1. Connect with the client (the -c flag enables cluster mode, so redis-cli follows MOVED/ASK redirections)

[root@localhost cluster]# redis-cli -c -p 6379

 

 

2. View node status

 

127.0.0.1:6379> cluster nodes

 

7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382 master - 0 1452915979733 7 connected 0-5460
a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380 master - 0 1452915980741 2 connected 5461-10922
89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379 myself,slave 7d138fe67343c13be4b78fe6a969088b08d48cc0 0 0 1 connected
4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381 master - 0 1452915977717 3 connected 10923-16383
22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383 slave a795943a5cba83c8ec8cf81146c9e2e4233d2a97 0 1452915976709 5 connected
4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384 slave 4ad16d3551d88a11edef03711a0c451ef38d89f0 0 1452915978725 6 connected

 

9. Start the services at boot

 

cd /etc/init.d/

    # Create redis-6379 redis-6380 redis-6381 redis-6382 redis-6383 redis-6384 here, one script per instance, changing the port in each

 

 

vi redis-6379

   

   # add the following content

# chkconfig: 2345 10 90
# description: Start and Stop redis
 
PATH=/usr/local/bin:/sbin:/usr/bin:/bin
 
REDISPORT=6379 # adjust for your environment
EXEC=/usr/local/bin/redis-server # adjust for your environment
REDIS_CLI=/usr/local/bin/redis-cli # adjust for your environment
 
PIDFILE=/data/setup/redis/cluster/6379/redis-6379.pid
CONF="/data/setup/redis/cluster/6379/redis-6379.conf" # adjust for your environment
 
case "$1" in
        start)
                if [ -f $PIDFILE ]
                then
                        echo "$PIDFILE exists, process is already running or crashed."
                else
                        echo "Starting Redis server..."
                        $EXEC $CONF
                fi
                if [ "$?" = "0" ]
                then
                        echo "Redis is running..."
                fi
                ;;
        stop)
                if [ ! -f $PIDFILE ]
                then
                        echo "$PIDFILE does not exist, process is not running."
                else
                        PID=$(cat $PIDFILE)
                        echo "Stopping..."
                        $REDIS_CLI -p $REDISPORT SHUTDOWN
                        while [ -f $PIDFILE ]   # wait until Redis removes its pid file
                        do
                                echo "Waiting for Redis to shutdown..."
                                sleep 1
                        done
                        echo "Redis stopped"
                fi
                ;;
        restart|force-reload)
                ${0} stop
                ${0} start
                ;;
        *)
                echo "Usage: /etc/init.d/redis-6379 {start|stop|restart|force-reload}" >&2
                exit 1
esac

  

 

# register the service and enable it at boot
chkconfig --add redis-6379
chkconfig redis-6379 on
# verify
chkconfig --list redis-6379

 

 

10. Jedis test

package my.redis.demo;
 
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
 
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
 
public class ClusterTest {
  
   private static JedisCluster jc; 
     static { 
           Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>(); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6379)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6380)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6381)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6382)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6383)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6384));
           // connection timeout 5000 ms, follow at most 1000 redirections
           jc = new JedisCluster(jedisClusterNodes, 5000, 1000);
       } 
    
   public static void main(String[] args) throws IOException, InterruptedException {
      System.out.println("##########################################");
     
      Thread t0 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                String key = "key:" + i;
                jc.del(key);
                System.out.println("delete:" + key);
            }
         }
      });
     
      t0.start();
      t0.join();
     
      System.out.println("##########################################");
     
      Thread t1 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                String key = "key:" + i;
                jc.set(key, key);
                System.out.println("write:" + key);
            }
         }
      });
 
      t1.start();
      t1.join();
       
        System.out.println("##########################################");
       
      Thread t2 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                  String key = "key:" + i; 
                  jc.get(key);
                  System.out.println("read:"+key);
            }
         }
      });
 
      t2.start();
      t2.join();
   }
}

 

 

 
