Nginx+keepalived dual-machine mutual hot standby


Master server IP: 211.151.138.2     Backup server IP: 211.151.138.3   Virtual IP (VIP): 211.151.138.5. Point your site's domain name at the single public IP 211.151.138.5; the master and backup servers then take over this VIP in turn, keeping the site reachable from the outside.

 


Keepalived is a high-availability daemon for Linux that implements VRRP backup routing. A service model built on Keepalived can achieve practically instant, seamless IP handover when the master or the backup server fails. On Sina's dynamic application platform, Keepalived together with LVS has shown very good stability in production.

  Nginx is an HTTP server built on the epoll model of the Linux 2.6 kernel. Unlike Apache's process-forking model, Nginx uses a master + slave multi-process model with very robust child-process management. In this model the master process never handles business traffic itself; it only dispatches work, which keeps the master process highly reliable. All business signals for the slave processes are issued by the master, and any slave task that times out is terminated by the master, giving a non-blocking task model. On Sina's blog platform, after nearly eight months of operation there has been no outage caused by the master process exiting or a child process hanging.

  In a production environment the damage from any machine going down has to be kept to a minimum. Traditionally, servers are placed directly behind a layer-4/7 switch so that a server or server-software failure does not interrupt service. Under the current business model there are many high-concurrency services, small JS files, fast dynamic interfaces, Nginx layer-7 business, and they all want every socket operation to complete as quickly as possible to cut the user's waiting time. Because the layer-4/7 switches carry many products across the whole Sina site, they often become a constraint for high-concurrency applications. This led to the idea of using Keepalived + Nginx for dual-machine cross hot standby with DNS round-robin over public IPs, a scheme that can be applied to any environment needing high-concurrency service. The fewer socket communication layers in the path, the faster data reaches the user's desktop.

  1. Server IP liveness detection:

  Server IP liveness detection is handled by Keepalived itself. Configure the two servers as a Keepalived mutual master/backup pair; if either machine fails, the other takes over its IP.

  2. Application service liveness detection:

  A healthy service requires not only that the server stays up but also that the application does. The reason Apache servers used to affect service, hung processes leaving HTTP unresponsive, lies in Apache's process model. Under Nginx's process model, the service can be considered healthy as long as the Nginx processes are alive, so detecting process liveness is enough to detect service liveness. The health of the slave processes is managed by Nginx's own master process; the master process itself can be watched by a dedicated script on the server that restarts Nginx immediately if the master exits abnormally. This scheme has been running on the Sina blog system for about half a year.
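A minimal sketch of such a watchdog is shown below. It assumes nginx is installed under /usr/local/nginx (as in the installs later in this article); the script name, path and 5-second interval are illustrative, not taken from the original setup.

#!/bin/bash
## nginx_watchdog.sh - restart nginx if its master process has exited (illustrative sketch)
while true; do
    ## zero nginx processes means the master has gone away
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        /usr/local/nginx/sbin/nginx          ## start path assumed, matching the installs below
    fi
    sleep 5                                  ## check interval, arbitrary
done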

  3. Online server maintenance:

  Keepalived's service IP is managed through its configuration file, and Keepalived's own process determines the server's liveness. When a server needs online maintenance, simply stop the Keepalived process on the machine being maintained and the other server takes over all of its applications.
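In practice this is just a matter of stopping and later restarting the keepalived service on the box under maintenance; a sketch using the init script installed in the steps below:

service keepalived stop       ## release the VIP to the peer before maintenance
## ... perform maintenance ...
service keepalived start      ## rejoin the pair; the VIP moves back according to the configured state/priority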

 

 

 ============================= Internal network test

 

======= Servers and software

nginx master   130
nginx backup   131
vip            192.168.93.132 

Keepalived 1.1.15
nginx-0.9.7
pcre-8.02

 

======= Installation prerequisites
yum list |grep pcre   (pcre is already installed)
nginx is already installed

Create a test file on each of the two nginx machines (load-balancing reverse proxying is disabled for this test):
echo "192.168.93.130" > /data0/htdocs/index.html
echo "192.168.93.131" >/data0/htdocs/index.html

 


======= nginx master (130): install and configure keepalived
http://www.keepalived.org/
wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz

 


#tar zxvf keepalived-1.2.2.tar.gz
#cd keepalived-1.2.2
#./configure --prefix=/usr/local/keepalived
#make
#make install
#cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
#cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
#cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
#mkdir /etc/keepalived
#cd /etc/keepalived/

vim keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
    yuhongchun027@163.com
         }
    notification_email_from keepalived@chtopnet.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 51
     mcast_src_ip 192.168.93.130
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass chtopnet
     }
     virtual_ipaddress {
         192.168.93.132
     }
}

service keepalived start

Let's take a look at the log:
[root@ltos ~]# tail /var/log/messages
May 13 13:41:55 jushanwebdb Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
May 13 13:41:55 jushanwebdb Keepalived_vrrp: Configuration is using : 35980 Bytes
May 13 13:41:55 jushanwebdb Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
May 13 13:41:55 jushanwebdb Keepalived: Starting VRRP child process, pid=4990
May 13 13:41:56 jushanwebdb Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
May 13 13:41:57 jushanwebdb Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 13 13:41:58 jushanwebdb Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 13 13:41:58 jushanwebdb Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 13 13:41:58 jushanwebdb avahi-daemon[3091]: Registering new address record for 192.168.93.132 on eth0.
May 13 13:41:58 jushanwebdb Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.93.132

 

Clearly VRRP has started; we can also check with the command: # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:ab:e6:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.130/24 brd 192.168.93.255 scope global eth0
    inet 192.168.93.132/32 scope global eth0
    inet6 fe80::20c:29ff:feab:e622/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

 

This shows the VIP is up, so the master server is configured. The backup machine's setup is much the same apart from a few changes in the configuration file; the backup's configuration is shown below:

======= nginx backup (131): install and configure keepalived
http://www.keepalived.org/
wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz

 


#tar zxvf keepalived-1.2.2.tar.gz
#cd keepalived-1.2.2
#./configure --prefix=/usr/local/keepalived
#make
#make install
#cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
#cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
#cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
#mkdir /etc/keepalived
#cd /etc/keepalived/
#vim keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
    yuhongchun027@163.com
         }
    notification_email_from keepalived@chtopnet.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
     state BACKUP
     interface eth0
     virtual_router_id 51
     mcast_src_ip 192.168.93.131
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass chtopnet
     }
     virtual_ipaddress {
        192.168.93.132
     }
}

 

#service keepalived start

 

VRRP has started here as well; again we can check with the command: # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:4a:2a:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.131/24 brd 192.168.93.255 scope global eth0
    inet6 fe80::20c:29ff:fe4a:2aeb/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
----- Note: this machine is currently the backup, so inet 192.168.93.132/32 scope global eth0 is not shown.


======= Testing
Testing the effect is simple: put a different home page on the master and the backup,
then open http://192.168.93.132 from a client machine; it shows the nginx master's page, 130.

1) When the nginx master goes down, the backup takes over service almost immediately; the gap is barely noticeable.

We shut down the nginx master (130).
Accessing http://192.168.93.132 now shows 131.
After the backup (131) is promoted to master, # ip a on 131 shows inet 192.168.93.132/32 scope global eth0:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:4a:2a:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.131/24 brd 192.168.93.255 scope global eth0
    inet 192.168.93.132/32 scope global eth0
    inet6 fe80::20c:29ff:fe4a:2aeb/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
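To watch the switchover from the client side during this test, a simple loop against the VIP also works (sketch; assumes curl is available on the client):

## print which backend is answering the VIP, once per second
while true; do
    curl -s http://192.168.93.132/index.html
    sleep 1
done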


2) Bring the nginx master server back up (keepalived starts automatically at boot).
Accessing http://192.168.93.132 shows 130 again.
On the nginx backup (131) the change can be seen with # ip a; the VIP is gone:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:4a:2a:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.131/24 brd 192.168.93.255 scope global eth0
    inet6 fe80::20c:29ff:fe4a:2aeb/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
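Since step 2 relies on keepalived starting automatically at boot, the init script copied earlier can be registered for that (sketch for a CentOS/RHEL-style system; not part of the original steps):

chkconfig --add keepalived      ## register the init script copied to /etc/init.d/
chkconfig keepalived on         ## enable it for the default runlevels
chkconfig --list keepalived     ## verify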

================ Two machines: ubuntu + nginx

 

 Configuration 1:

 

I previously wrote an article on nginx + keepalived dual-machine mutual standby. When I wrote it I had not considered that if apache or nginx dies while keepalived and the machine itself are still alive, no master/backup switchover happens. So today I looked into how to monitor the nginx process. The official site documents the vrrp_script feature, but I could not get its approach to work, maybe my method was wrong, or maybe it is a bug, so in the end I wrote a small script of my own to do the job.
Environment
Server 1  :  ubuntu-server 8.04.4          192.168.6.162
Server 2  :  ubuntu-server 8.04.4          192.168.6.188
Software
Keepalived 1.1.15
nginx-0.8.35
pcre-8.02
1. Install nginx on both servers
tar jxvf pcre-8.02.tar.bz2
cd pcre-8.02
./configure --prefix=/usr --enable-utf8 --enable-pcregrep-libbz2 --enable-pcregrep-libz
make
make install
tar zxvf nginx-0.8.35.tar.gz
cd nginx-0.8.35
./configure --prefix=/usr/local/nginx --with-pcre --user=www --group=www --with-file-aio --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module --with-http_stub_status_module --with-cc-opt=' -O3'
make
make install
2. Write the nginx configuration file on both servers
vim /usr/local/nginx/conf/nginx.conf
user    www www;
worker_processes    1;
error_log    logs/error.log    notice;
pid                logs/nginx.pid;
events {
        worker_connections    1024;
}
http {
        include             mime.types;
        default_type    application/octet-stream;
        sendfile                on;
        tcp_nopush         on;
        keepalive_timeout    65;
        gzip    on;
        server {
                listen             80;
                server_name    localhost;
                index     index.html index.htm;
                root        /var/www;
                error_page     500 502 503 504    /50x.html;
                location = /50x.html {
                        root     html;
                }
        }
}
3. Create a test file on each machine
echo "192.168.6.162" > /var/www/index.html
echo "192.168.6.188" > /var/www/index.html
4. Install keepalived
apt-get install keepalived
5. Write the keepalived configuration on server 1
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"         ### monitoring script
                interval 2                             ### check interval (seconds)
                weight 2                                ### weight (the author was not sure what this does)
}
vrrp_instance VI_1 {
        state MASTER                            ### set as master
        interface eth0                             ### interface to monitor
        virtual_router_id 51                    ### must be identical on both servers
        priority 101                                 ### priority: MASTER must be higher than BACKUP
        authentication {
                     auth_type PASS             ### authentication type
                     auth_pass eric                ### password; must be identical on both servers or it will fail
        }
        track_script {
                chk_http_port                     ### run the monitoring check
        }
        virtual_ipaddress {
             192.168.6.7                            ###    VIP address
        }
}
6. keepalived configuration on server 2
vrrp_script chk_http_port {
                script "/opt/nginx_pid.sh"
                interval 2
                weight 2
}
vrrp_instance VI_1 {
        state BACKUP                                ### set as backup
        interface eth0
        virtual_router_id 51                        ### same value as on the MASTER
        priority 100                                     ### lower priority than the MASTER
        authentication {
                     auth_type PASS
                     auth_pass eric                    ### same password as on the MASTER
        }
        track_script {
                chk_http_port
        }
        virtual_ipaddress {
                 192.168.6.7
        }
}
7. Write the nginx monitoring script
vim /opt/nginx_pid.sh
#!/bin/bash
# version 0.0.2
# A reader commented that the original version was not quite sound: nginx might be up while keepalived itself has been killed. My view was that once nginx dies it rarely comes back on its own, and nagios will alert you anyway. Still, the reader had a point, so the script was adjusted slightly.
A=`ps -C nginx --no-header |wc -l`                ## count running nginx processes and store the value in A
if [ $A -eq 0 ];then                                         ## if the count is zero, no nginx process is running
                /usr/local/nginx/sbin/nginx              ## try to start nginx
                sleep 3
                if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
                       killall keepalived                        ## nginx could not be started, so stop keepalived and let the peer take over
                fi
fi
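The script must be executable for vrrp_script to run it. A quick manual check (illustrative commands, not from the original article) confirms the behaviour before wiring it into keepalived:

chmod +x /opt/nginx_pid.sh
killall nginx                   ## simulate a dead nginx
/opt/nginx_pid.sh               ## run the check once by hand
ps -C nginx --no-header         ## nginx should be back; if not, keepalived would have been killed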
8. Test: start nginx and keepalived on both servers
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
Watch server 1's log:
Apr 20 18:37:39 nginx Keepalived_vrrp: Registering Kernel netlink command channel
Apr 20 18:37:39 nginx Keepalived_vrrp: Registering gratutious ARP shared channel
Apr 20 18:37:39 nginx Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Apr 20 18:37:39 nginx Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Apr 20 18:37:39 nginx Keepalived_healthcheckers: Configuration is using : 3401 Bytes
Apr 20 18:37:39 nginx Keepalived_vrrp: Configuration is using : 35476 Bytes
Apr 20 18:37:40 nginx Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Apr 20 18:37:41 nginx Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Apr 20 18:37:41 nginx Keepalived_vrrp: Netlink: skipping nl_cmd msg...
Apr 20 18:37:41 nginx Keepalived_vrrp: VRRP_Script(chk_http_port) succeeded
Watch server 2's log:
Apr 20 18:38:23 varnish Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Apr 20 18:38:23 varnish Keepalived_healthcheckers: Configuration is using : 3405 Bytes
Apr 20 18:38:23 varnish Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Apr 20 18:38:23 varnish Keepalived_vrrp: Registering Kernel netlink reflector
Apr 20 18:38:23 varnish Keepalived_vrrp: Registering Kernel netlink command channel
Apr 20 18:38:23 varnish Keepalived_vrrp: Registering gratutious ARP shared channel
Apr 20 18:38:23 varnish Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Apr 20 18:38:23 varnish Keepalived_vrrp: Configuration is using : 35486 Bytes
Apr 20 18:38:23 varnish Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Apr 20 18:38:25 varnish Keepalived_vrrp: VRRP_Script(chk_http_port) succeeded
The logs show that MASTER and BACKUP on the two servers are both working normally.
Now stop the nginx server on server 1:
Server 1 $> killall nginx
Then look at server 1's log:
Apr 20 18:41:26 nginx Keepalived_healthcheckers: Terminating Healthchecker child process on signal
Apr 20 18:41:26 nginx Keepalived_vrrp: Terminating VRRP child process on signal
You can see that the keepalived process has been stopped.
Now look at server 2's log to see whether it has taken over:
Apr 20 18:41:23 varnish Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Apr 20 18:41:24 varnish Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Apr 20 18:41:24 varnish Keepalived_vrrp: Netlink: skipping nl_cmd msg...
It is clear that server 2 has taken over and become MASTER.

Configuration 2:

 

 

! Configuration File for keepalived
global_defs {
   notification_email {
   xiaohan@163.com
        }
   notification_email_from keepalived@chtopnet.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER     <== MASTER on the primary, BACKUP on the standby; everything else is the same.
    interface eth0
    virtual_router_id 51
    mcast_src_ip 192.168.2.24    <== IP address of the master nginx
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chtopnet
    }
    virtual_ipaddress {
        192.168.2.96                   <== VIP address
    }
}

 

   Restart keepalived. This point matters: many people like to copy what is in sbin into /etc/init.d/ and restart with 'service keepalived restart'. That is not recommended here; I did it that way and ran into a lot of problems. The safest way to restart is simply:

#/usr/local/sbin/keepalived -D -f /usr/local/etc/keepalived/keepalived.conf

 Check whether the VIP is bound. Note that ifconfig will not show it; you must use ip a to check. Remember this.
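A quick one-liner for that check (sketch; eth0 and the VIP are the ones used in this configuration):

ip addr show eth0 | grep 192.168.2.96     ## the VIP only shows up in ip's output, not in ifconfig's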

 

 

OK, success. Do the same on the BACKUP.

Verification is simple: keep pinging 192.168.2.96, then stop the network on the master nginx. You will see roughly two timeouts, and the BACKUP takes over the work in a very short time. Bring the master's network back up and you can see the master take over from the BACKUP again and carry on working.
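A minimal sketch of that check, assuming an Ubuntu-style networking init script on the master (the exact command depends on the distribution):

ping 192.168.2.96                   ## on a client: watch for the couple of lost replies
/etc/init.d/networking stop         ## on the master: simulate the outage; the BACKUP picks up the VIP
/etc/init.d/networking start        ## on the master: recover; the master reclaims the VIP as described above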

 

 

Configuration 3:

 

Install keepalived
tar zxvf keepalived-1.1.19.tar.gz
cd keepalived-1.1.19
./configure --prefix=/usr/local/keepalived
make
make install
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cd /etc/keepalived/
#####################################################
! Configuration File for keepalived
global_defs {
   notification_email {
   yeli4017@163.com
        }
   notification_email_from yeli4017@163.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id web_nginx
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    mcast_src_ip 172.16.3.51  
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111chtopnet
    }
    virtual_ipaddress {
        172.16.3.199
    }
}
Backup server
! Configuration File for keepalived
global_defs {
   notification_email {
   yuhongchun027@163.com
        }
   notification_email_from keepalived@chtopnet.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    mcast_src_ip 172.16.3.51              <== the master nginx's IP (on the backup this should normally be the backup's own source IP)
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chtopnet
    }
    virtual_ipaddress {
        172.16.3.199
    }
}

================ Two machines: centos + nginx

 

Because nginx's URL-hash feature can improve squid's performance quite a bit, I replaced the load balancer in front of squid with nginx. A single nginx, however, is a single point of failure, so keepalived is now used to solve that. Keepalived's failover time is very short and its configuration is simple, which is a major reason for choosing it. Small and medium web operations with modest daily page views can all adopt the scheme below. Straight to the installation steps:



1. Environment:
centos 5.3, nginx-0.7.51, keepalived-1.1.19
Master nginx load balancer: 192.168.0.154
Backup nginx load balancer: 192.168.0.155
vip: 192.168.0.188



2. Install keepalived


#tar zxvf keepalived-1.1.19.tar.gz
#cd keepalived-1.1.19
#./configure --prefix=/usr/local/keepalived
#make
#make install
#cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
#cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
#cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
#mkdir /etc/keepalived
#cd /etc/keepalived/

vim keepalived.conf



! Configuration File for keepalived
global_defs {
    notification_email {
    yuhongchun027@163.com
         }
    notification_email_from keepalived@chtopnet.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 51
     mcast_src_ip 192.168.0.154    <== IP address of the master nginx
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass chtopnet
     }
     virtual_ipaddress {
         192.168.0.188                      <== VIP address
     }
}

#service keepalived start
Let's take a look at the log:
[root@ltos ~]# tail /var/log/messages


Oct 6 03:25:03 ltos avahi-daemon[2306]: Registering new address record for 192.168.0.188 on eth0.
Oct 6 03:25:03 ltos avahi-daemon[2306]: Registering new address record for 192.168.0.154 on eth0.
Oct 6 03:25:03 ltos avahi-daemon[2306]: Registering HINFO record with values 'I686'/'LINUX'.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Withdrawing address record for fe80::20c:29ff:feb9:eeab on eth0.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Withdrawing address record for 192.168.0.154 on eth0.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Host name conflict, retrying with <ltos-31>
Oct 6 03:25:23 ltos avahi-daemon[2306]: Registering new address record for fe80::20c:29ff:feb9:eeab on eth0.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Registering new address record for 192.168.0.188 on eth0.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Registering new address record for 192.168.0.154 on eth0.
Oct 6 03:25:23 ltos avahi-daemon[2306]: Registering HINFO record with values 'I686'/'LINUX'.



Clearly VRRP has started; we can also check with the command: # ip a

[root@ltos html]# ip a


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
     link/ether 00:0c:29:ba:9b:e7 brd ff:ff:ff:ff:ff:ff
     inet 192.168.0.154/24 brd 192.168.0.255 scope global eth0
     inet 192.168.0.188/32 scope global eth0
     inet6 fe80::20c:29ff:feba:9be7/64 scope link
        valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
     link/sit 0.0.0.0 brd 0.0.0.0


This shows the VIP is up, so the master server is configured. The backup machine's setup is much the same apart from a few changes in the configuration file; the backup's configuration is shown below:



! Configuration File for keepalived
global_defs {
    notification_email {
    yuhongchun027@163.com
         }
    notification_email_from keepalived@chtopnet.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
     state BACKUP
     interface eth0
     virtual_router_id 51
     mcast_src_ip 192.168.0.155             <== IP address of the backup nginx
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass chtopnet
     }
     virtual_ipaddress {
         192.168.0.188
     }
}



Check the backup's configuration:
[root@ltos html]# ip a


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
     link/ether 00:0c:29:ba:9b:e7 brd ff:ff:ff:ff:ff:ff
     inet 192.168.0.155/24 brd 192.168.0.255 scope global eth0
     inet 192.168.0.188/32 scope global eth0
     inet6 fe80::20c:29ff:feba:9be7/64 scope link
        valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
     link/sit 0.0.0.0 brd 0.0.0.0


Testing is simple: put a different home page on the master and the backup (index.html containing 192.168.0.154 and 192.168.0.155 respectively), then browse from a client with elinks http://192.168.0.188. When the master goes down the backup takes over service almost immediately; the gap is barely noticeable.
