LVS + GFS + iSCSI + Tomcat configuration notes
Author: hosyp
LVS is a project started by a Chinese developer, which came as a real surprise to me! See http://www.douzhe.com/linuxtips/1665.html
I started out with plain HA (high availability). The examples I found all used VMware, fine for experiments but not for real deployment, and I had no Fibre Channel shared storage, so I chose iSCSI. Once that worked I discovered that iSCSI+ext3 cannot be used under LVS; in the end I found that GFS works, and I finally got an LVS setup fit for production. It took four months on and off, with plenty of detours. I then spent three days writing this article, and I hope it is useful to you.
My thanks to linuxfans.org, linuxsir.com, chinaunix.com and many other sites; much of the material came from their forums.
References and download locations:
a.http://www.gyrate.org/misc/gfs.txt
b.http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/index.html
http://www.redhat.com/docs/manuals/csgfs/admin-guide/index.html
c.ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3ES/en/RHGFS/SRPMS
d.http://distro.ibiblio.org/pub/linux/distributions/caoslinux/centos/3.1/contrib/i386/RPMS/
LVS topology:

                 eth0   = 10.3.1.101
                 eth0:1 = 10.3.1.254  (VIP)
              +---------------------+
              | Load Balance Router |
              +---------------------+
                 eth1   = 192.168.1.71
                 eth1:1 = 192.168.1.1
                  |               |
               Real1            Real2
     eth0=192.168.1.68    eth0=192.168.1.67
            (eth0 gateway = 192.168.1.1)
     eth1=192.168.0.1 --- eth1=192.168.0.2
            (crossover heartbeat link)
                        |
                 GFS over iSCSI
                 shared storage
                 eth0=192.168.1.124
1. Setup iSCSI Server
Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat 9, IP=192.168.1.124
Download the iSCSI target source code from http://iscsitarget.sourceforge.net/
(http://sourceforge.net/project/showfiles.php?group_id=108475&package_id=117141)
I chose iscsitarget-0.3.8.tar.gz, which requires kernel 2.4.29.
Download kernel 2.4.29 from kernel.org, unpack and build it, reboot into the new kernel, then build and install iscsitarget-0.3.8:
#make KERNELSRC=/usr/src/linux-2.4.29
#make KERNELSRC=/usr/src/linux-2.4.29 install
#cp ietd.conf /etc
#vi /etc/ietd.conf
# Example iscsi target configuration
#
# Everything until the first target definition belongs
# to the global configuration.
# Right now this is only the user configuration used
# during discovery sessions:
# Users, who can access this target
# (no users means anyone can access the target)
User iscsiuser 1234567890abc
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw1
User iscsiuser 1234567890abc
Lun 0 /dev/sda5 fileio
Alias iraw1
Target iqn.2005-04.com.my:storage.disk2.sys1.iraw2
User iscsiuser 1234567890abc
Lun 1 /dev/sda6 fileio
Alias iraw2
Target iqn.2005-04.com.my:storage.disk2.sys2.idisk
User iscsiuser 1234567890abc
Lun 2 /dev/sda3 fileio
Alias idisk
Target iqn.2005-04.com.my:storage.disk2.sys2.icca
User iscsiuser 1234567890abc
Lun 3 /dev/sda7 fileio
Alias icca
Note: the password must be at least 12 characters long. Alias is an alias for the target; for some reason it never shows up on the client side.
Partitioning: I only have one SCSI disk, so:
/dev/sda3: shared storage; the bigger the better
/dev/sda5: raw1, raw device needed by Cluster Suite; I gave it 900 MB
/dev/sda6: raw2, raw device needed by Cluster Suite; I gave it 900 MB
/dev/sda7: cca, needed by GFS; I gave it 64 MB
(/dev/sda4 is the extended partition holding sda5, sda6 and sda7)
Reboot, then start the iSCSI server with service iscsi-target start (I prefer this to the suggested method, because you can check the state with service iscsi-target status).
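Before moving on to the clients, it is worth confirming the target really is up. A minimal sanity check (my own addition, not one of the original steps; the /proc path depends on the iscsitarget version and may not exist on yours):
#netstat -tln | grep 3260
#cat /proc/net/iet/volume
The first command should show the daemon listening on the standard iSCSI port 3260; the second, where available, lists the exported LUNs.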
2. Setup iSCSI Client (on the two real servers)
Server: PIII 1.4 GHz, 512 MB RAM, Dell 1650, Red Hat AS3U4 (AS3U5 is better), kernel 2.4.21-27.EL
#vi /etc/iscsi.conf
DiscoveryAddress=192.168.1.124
OutgoingUsername=iscsiuser
OutgoingPassword=1234567890abc
Username=iscsiuser
Password=1234567890abc
LoginTimeout=15
IncomingUsername=iscsiuser
IncomingPassword=1234567890abc
SendAsyncTest=yes
#service iscsi restart
#iscsi-ls -l
The output, trimmed down, maps the devices as:
/dev/sdb:iraw2
/dev/sdc:iraw1
/dev/sdd:idisk
/dev/sde:icca
Note: the order of the iSCSI devices matters and must be identical on both real servers. If it differs, change the settings on the iSCSI server and try again until it matches.
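Because this ordering requirement is easy to get wrong, a small helper makes the comparison mechanical. This is a sketch of my own, not from the original article; it assumes the interesting lines of iscsi-ls -l output mention the target name and the attached sd device, which varies between linux-iscsi versions, so adjust the pattern to your output:
#cat /root/iscsi_map.sh
#!/bin/sh
# Dump target-name/device pairs so the two real servers can be diffed.
iscsi-ls -l | egrep -i 'target|device' > /tmp/iscsi_map.`hostname`
cat /tmp/iscsi_map.`hostname`
Run it on both real servers, copy one result file across the crossover link, and diff the two; any difference means the LUN layout on the target must be changed before continuing.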
3. Install Red Hat Cluster Suite
First download the Cluster Suite ISO; for AS3 I found a download link on ChinaUnix.net, then install clumanager and redhat-config-cluster. If you cannot find the ISO it does not matter: download clumanager-1.2.xx.src.rpm and redhat-config-cluster-1.0.x.src.rpm from
ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3ES/en/RHCS/SRPMS/
and build them yourself, which is arguably even better:
#rpm -Uvh clumanager-1.2.26.1-1.src.rpm
#rpmbuild -bs /usr/src/redhat/SPECS/clumanager.spec
#rpmbuild --rebuild --target i686 /usr/src/redhat/SRPMS/clumanager-1.2.26.1-1.src.rpm
Then build and install redhat-config-cluster-1.0.x.src.rpm the same way.
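The rebuilt binary packages land under /usr/src/redhat/RPMS/. A sketch of the final install step (the exact file names depend on your build; take the real paths from the rpmbuild output):
#rpm -Uvh /usr/src/redhat/RPMS/i686/clumanager-1.2.26.1-1.i686.rpm
#rpm -Uvh /usr/src/redhat/RPMS/noarch/redhat-config-cluster-1.0.x-1.noarch.rpm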
4. Setup Cluster as HA module
I will not repeat the detailed steps; there are plenty of articles online, and that is how I learned it myself. The difference is that those authors used VMware while I used real machines plus iSCSI. The raw devices are /dev/sdb and /dev/sdc; then create an ext3 filesystem on /dev/sdd and mount it at /u01, and so on.
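For reference, on RHEL 3 the raw device bindings that the cluster software uses live in /etc/sysconfig/rawdevices (the startup script in section 5 restarts this service). A sketch matching the device order shown in section 2; adjust it to your own layout:
#cat /etc/sysconfig/rawdevices
# raw device bindings: <raw device> <block device>
/dev/raw/raw1 /dev/sdc
/dev/raw/raw2 /dev/sdb
#service rawdevices restart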
Once it is all set up you will discover the iSCSI problem: only one client at a time can write to the disk. If two clients attach to the iSCSI shared storage simultaneously and one writes, the other does not see the changes, and by that point the filesystem is already corrupted. When a client reconnects it finds damaged files that even fsck cannot repair.
So is iSCSI really more trouble than it is worth? No! From Google I finally learned that iSCSI can only truly serve as shared storage through a cluster filesystem, and GFS, which Red Hat acquired, is exactly that!
5. Setup GFS on iSCSI
Only Fedora Core 4 ships GFS out of the box, yet GFS depends on the /etc/cluster.xml file that Cluster Suite generates, and I have not seen Cluster Suite in FC4, so I have no idea why Red Hat bundles GFS with FC4 at all. Just to tempt us?
Enough chatter. Download GFS-6.0.2.20-2.src.rpm from location c and build and install it following gfs.txt from location a. That document says nothing about configuring cluster.ccs, fence.ccs and nodes.ccs; after reading the documents at b I finally worked them out. I keep all three under /root/cluster (any other directory works too). I cannot guarantee they are error-free, since I have no Fibre Channel card and the documentation gives no iSCSI examples, but GFS does start with them.
#cat cluster.ccs
cluster {
    name = "Cluster_1"
    lock_gulm {
        servers = ["cluster1", "cluster2"]
        heartbeat_rate = 0.9
        allowed_misses = 10
    }
}
Note: name is the cluster name configured in Cluster Suite, and servers lists the hostnames of the cluster members; do not forget to add them to /etc/hosts. I first set allowed_misses to 1 and GFS would die after about two days of running; since raising it to 10 it has never died.
#cat fence.ccs
fence_devices {
    admin {
        agent = "fence_manual"
    }
}
#cat nodes.ccs
nodes {
    cluster1 {
        ip_interfaces {
            hsi0 = "192.168.0.1"
        }
        fence {
            human {
                admin {
                    ipaddr = "192.168.0.1"
                }
            }
        }
    }
    cluster2 {
        ip_interfaces {
            hsi0 = "192.168.0.2"
        }
        fence {
            human {
                admin {
                    ipaddr = "192.168.0.2"
                }
            }
        }
    }
}
Note: the IPs are those of the heartbeat link.
With these three files under /root/cluster, first build the Cluster Configuration System archive:
a.#vi /etc/gfs/pool0.cfg
poolname pool0
minor 1 subpools 1
subpool 0 8 1 gfs_data
pooldevice 0 0 /dev/sde1
b.#pool_assemble -a pool0
c.#ccs_tool create /root/cluster /dev/pool/pool0
d.#vi /etc/sysconfig/gfs
CCS_ARCHIVE="/dev/pool/pool0"
Next, create a pool volume; this is the shared disk we actually want:
a.#vi /etc/gfs/pool1.cfg
poolname pool1
minor 2 subpools 1
subpool 0 128 1 gfs_data
pooldevice 0 0 /dev/sdd1
b.#pool_assemble -a pool1
c.#gfs_mkfs -p lock_gulm -t Cluster_1:gfs1 -j 8 /dev/pool/pool1
d.#mount -t gfs -o noatime /dev/pool/pool1 /u01
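At this point the filesystem should be visible on both nodes. A quick check (my addition; gfs_tool ships with GFS 6.0, but confirm the subcommand against your build):
#df -h /u01
#gfs_tool df /u01
Both nodes should report the same pool device and size, and a file created on one node should appear on the other immediately.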
Below is a GFS startup script. Note that real1 and real2 must start their lock_gulmd daemons at the same time: the first lock_gulmd becomes the server and waits for the client's lock_gulmd, and if nothing answers within a few tens of seconds it fails and the GFS startup aborts. Red Hat recommends not putting GFS volumes into /etc/fstab.
#cat gfstart.sh
#!/bin/sh
depmod -a
modprobe pool
modprobe lock_gulm
modprobe gfs
sleep 5
service iscsi start
sleep 20
service rawdevices restart
pool_assemble -a pool0
pool_assemble -a pool1
service ccsd start
service lock_gulmd start
mount -t gfs /dev/pool/pool1 /u01 -o noatime
service gfs status
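For symmetry, a shutdown sequence helps when taking a node down for maintenance. This is my own sketch, not from the original article; it simply reverses the startup order (check your GFS version's documentation for pool deactivation options, which I leave out here):
#cat gfstop.sh
#!/bin/sh
# Unmount GFS first, then stop the daemons in reverse startup order.
umount /u01
service gfs stop
service lock_gulmd stop
service ccsd stop
service rawdevices stop
service iscsi stop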
6. Setup Linux LVS
LVS is an excellent clustering solution initiated and led by Dr. Wensong Zhang; many commercial cluster products, such as Red Hat's Piranha and Turbolinux's Turbo Cluster, are built on the LVS core code.
My system is Red Hat AS3U4, so I used Piranha.
Install piranha-0.7.10-2.i386.rpm and ipvsadm-1.21-9.ipvs108.i386.rpm from rhel-3-u5-rhcs-i386.iso
(http://distro.ibiblio.org/pub/linux/distributions/caoslinux/centos/3.1/contrib/i386/RPMS/).
After installing, run service httpd start and service piranha-gui start; you can then manage and configure everything from http://xx.xx.xx.xx:3636, or just as well edit /etc/sysconfig/ha/lvs.cf by hand.
#cat /etc/sysconfig/ha/lvs.cf
serial_no = 80
primary = 10.3.1.101
service = lvs
rsh_command = ssh
backup_active = 0
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 1050
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.1.1 eth1:1
nat_nmask = 255.255.255.0
reservation_conflict_action = preempt
debug_level = NONE
virtual lvs1 {
    active = 1
    address = 10.3.1.254 eth0:1
    vip_nmask = 255.255.255.0
    fwmark = 100
    port = 80
    persistent = 60
    pmask = 255.255.255.255
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"
    load_monitor = ruptime
    scheduler = wlc
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 1
    server Real1 {
        address = 192.168.1.68
        active = 1
        weight = 1
    }
    server Real2 {
        address = 192.168.1.67
        active = 1
        weight = 1
    }
}
virtual lvs2 {
    active = 1
    address = 10.3.1.254 eth0:1
    vip_nmask = 255.255.255.0
    port = 21
    send = "\n"
    use_regex = 0
    load_monitor = ruptime
    scheduler = wlc
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server ftp1 {
        address = 192.168.1.68
        active = 1
        weight = 1
    }
    server ftp2 {
        address = 192.168.1.67
        active = 1
        weight = 1
    }
}
After configuring, run service pulse start, and do not forget to add the relevant clients to /etc/hosts.
#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 80 -j MARK --set-mark 100
#iptables -t mangle -A PREROUTING -p tcp -d 10.3.1.254/32 --dport 443 -j MARK --set-mark 100
#iptables -A POSTROUTING -t nat -p tcp -s 10.3.1.0/24 --sport 20 -j MASQUERADE
Run the above three commands and add them to /etc/rc.d/rc.local, then check the state with ipvsadm:
#ipvsadm
IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.3.1.254:ftp wlc
-> cluster2:ftp Masq 1 0 0
-> cluster1:ftp Masq 1 0 0
FWM 100 wlc persistent 60
-> cluster1:0 Masq 1 0 0
-> cluster2:0 Masq 1 0 0
Notes:
a. The firewall mark is optional, but I added it anyway; the documentation says to use one when https is also served, and I chose 100 as the value.
b. Do not put the virtual IP into /etc/hosts. I learned this the hard way: port 80 kept appearing and disappearing.
c. eth0:1 and eth1:1 are created by Piranha; do not set them up by hand. I once made exactly that superfluous mistake. Some online posts are vague about this, and only Red Hat's documentation finally made it clear to me.
d. "The LVS router can monitor the load on the various real servers by using either rup or ruptime. If you select rup from the drop-down menu, each real server must run the rstatd service. If you select ruptime, each real server must run the rwhod service." That is Red Hat's own wording: with rup monitoring every real server must run rstatd, and with ruptime every real server must run rwhod.
e. On each real server, the gateway of the NIC facing the router must be the VIP of the router's NIC on that side. In this example the router's eth1 connects to the two real servers' eth0; with VIP eth1:1=192.168.1.1, the real servers' eth0 gateway must be 192.168.1.1.
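One more prerequisite the notes leave implicit: in NAT mode the router must have IP forwarding enabled, or replies from the real servers never get back to the clients. The standard way to make it permanent:
#vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
#sysctl -p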
7. Setup Tomcat 5.5.9 + JDK 1.5 (using Red Hat's bundled Apache)
a.#tar xzvf jakarta-tomcat-5.5.9.tar.gz
#mv jakarta-tomcat-5.5.9 /usr/local
#ln -s /usr/local/jakarta-tomcat-5.5.9 /usr/local/tomcat
b.#sh ./jdk-1_5_0_04-linux-i586.bin
#mv jdk1.5.0_04 /usr/java
#ln -s /usr/java/jdk1.5.0_04 /usr/java/jdk
c.#vi /etc/profile.d/tomcat.sh
export CATALINA_HOME=/usr/local/tomcat
export TOMCAT_HOME=/usr/local/tomcat
d.#vi /etc/profile.d/jdk.sh
if ! echo ${PATH} | grep -q "/usr/java/jdk/bin" ; then
JAVA_HOME=/usr/java/jdk
export JAVA_HOME
export PATH=/usr/java/jdk/bin:${PATH}
export CLASSPATH=$JAVA_HOME/lib
fi
e.#chmod 755 /etc/profile.d/*.sh
f. Log in again as root so that tomcat.sh and jdk.sh take effect, then:
#tar xzvf jakarta-tomcat-connectors-jk2-src-current.tar.gz
#cd jakarta-tomcat-connectors-jk2-2.0.4-src/jk/native2/
#./configure --with-apxs2=/usr/sbin/apxs --with-jni --with-apr-lib=/usr/lib
#make
#libtool --finish /usr/lib/httpd/modules
#cp ../build/jk2/apache2/mod_jk2.so ../build/jk2/apache2/libjkjni.so /usr/lib/httpd/modules/
g.#vi /usr/local/tomcat/bin/catalina.sh
Add the following two lines after "# Only set CATALINA_HOME if not already set":
serverRoot=/etc/httpd
export serverRoot
h.#vi /usr/local/tomcat/conf/jk2.properties
serverRoot=/etc/httpd
apr.NativeSo=/usr/lib/httpd/modules/libjkjni.so
apr.jniModeSo=/usr/lib/httpd/modules/mod_jk2.so
i.#vi /usr/local/tomcat/conf/server.xml
Add the following lines before </Host>; this creates two virtual paths, myjsp and local, one pointing at the shared storage and one at the real server's local disk:
<Context path="/myjsp" docBase="/u01/www/myjsp" debug="0"/>
<Logger className="org.apache.catalina.logger.FileLogger" directory="/var/log/httpd"
prefix="cluster.log." suffix=".txt" timestamp="true" />
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="/var/log/httpd"
prefix="cluster_access.log." suffix=".txt" pattern="common" resolveHosts="false" />
<Context path="/local" docBase="/var/www/html" debug="1" reloadable="true" crossContext="true" />
j.#vi /etc/httpd/conf/workers2.properties
#[logger.apache2]
#level=DEBUG
[shm]
file=/var/log/httpd/shm.file
size=1048576
[channel.socket:localhost:8009]
tomcatId=localhost:8009
keepalive=1
info=Ajp13 forwarding over socket
[ajp13:localhost:8009]
channel=channel.socket:localhost:8009
[status:status]
info=Status worker, displays runtime informations
[uri:/*.jsp]
worker=ajp13:localhost:8009
context=/
k.#vi /etc/httpd/conf/httpd.conf
Change: DocumentRoot "/u01/www"
Add after the last LoadModule:
LoadModule jk2_module modules/mod_jk2.so
JkSet config.file /etc/httpd/conf/workers2.properties
Add before #<VirtualHost *>:
<Directory ~ "/WEB-INF/">
Order allow,deny
Deny from all
</Directory>
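Before restarting Apache it is worth validating the edits; a configtest catches typos in the new directives before they take the web server down (standard apachectl usage, not a step from the original article):
#/usr/sbin/apachectl configtest
It should answer Syntax OK; anything else points at the lines just added.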
l:#mkdir /u01/ftproot
#mkdir /u01/www
#mkdir /u01/www/myjsp
m: Create an index.jsp on each real server
#vi /var/www/html/index.jsp
<%@ page import="java.util.*,java.sql.*,java.text.*" contentType="text/html"
%>
<%
out.println("test page on real server 1");
%>
On real server 2 the text is "test page on real server 2".
n: Download the JDBC driver from
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc9201.html
Unfortunately only the JDK 1.4 build is available. On both real servers:
#cp -R /usr/local/tomcat/webapps/webdav/WEB-INF /u01/www/myjsp
#cp ojdbc14.jar ojdbc14_g.jar ocrs12.zip /u01/www/myjsp/WEB-INF/lib
o: Suppose there is an Oracle server with ip=10.3.1.211, sid=MYID, username=my, password=1234, and read access to Oracle's sample employees table (or simply copy that table over; mine came from Oracle9i).
#vi /u01/www/myjsp/testoracle.jsp
<%@ page contentType="text/html" %>
<%@ page import="java.sql.*"%>
<?xml version="1.0"?>
<html">
<head>
<meta http-equiv="Content-Type" content="text/html" />
<title>Test ORACLE Employees</title>
</head>
<body>
<%
String OracleDBDriver="oracle.jdbc.driver.OracleDriver";
String DBUrl="jdbc:oracle:thin:@10.3.1.211:1521:MYID";
String UserID="my";
String UserPWD="1234";
Connection conn=null;
Statement stmt=null;
ResultSet rs=null;
try
{
Class.forName(OracleDBDriver);
}
catch(ClassNotFoundException ex)
{
System.out.println("Class.forname:"+ex);
}
conn=DriverManager.getConnection(DBUrl,UserID,UserPWD);
stmt=conn.createStatement();
String sql="select * from EMPLOYEES";
rs = stmt.executeQuery(sql);
out.print("<table border>");
out.print("<tr>");
out.print("<th width=100>"+"EMPLOYEE_ID");
out.print("<th width=50>"+"FIRST_NAME");
out.print("<th width=50>"+"LAST_NAME");
out.print("<th width=50>"+"EMAIL");
out.print("<th width=50>"+"PHONE_NUMBER");
out.print("<th width=50>"+"HIRE_DATE");
out.print("<th width=50>"+"JOB_ID");
out.print("<tr>");
try
{
while(rs.next())
{
out.print("<tr>");
int n=rs.getInt(1);
out.print("<td>"+n+"</td>");
String e=rs.getString(2);
out.print("<td>"+e+"</td>");
//String e=rs.getString(3);
out.print("<td>"+rs.getString(3)+"</td>");
out.print("<td>"+rs.getString(4)+"</td>");
out.print("<td>"+rs.getString(5)+"</td>");
out.print("<td>"+rs.getString(6)+"</td>");
out.print("<td>"+rs.getString(7)+"</td>");
out.print("</tr>");
}
}
catch(SQLException ex)
{
System.err.println("ConnDB.Main:"+ex.getMessage());
}
out.print("</table>");
rs.close();
stmt.close();
conn.close();
%>
</body>
</html>
p:#vi /u01/www/index.html
<html>
<head>
<meta http-equiv="Refresh" content="10; URL=http://10.3.1.254/myjsp/testoracle.jsp">
</head>
<body>
<p><a href="http://10.3.1.254/local/index.jsp">WEB Local</a></p>
<p><a href="http://10.3.1.254/myjsp/testoracle.jsp">Test Oracle WEB</a></p>
</body>
</html>
q: On both real servers:
#vi /usr/local/tomcat/conf/tomcat-users.xml
add the following line to allow web-based management:
<user username="manager" password="tomcat" roles="manager"/>
r: On both real servers:
#service httpd restart
#/usr/local/tomcat/bin/startup.sh
s: Open http://192.168.1.68:8080 and http://192.168.1.67:8080, choose Tomcat Manager, and log in as manager/tomcat; the virtual paths /myjsp and /local should show as started.
Open http://10.3.1.254 on two machines and choose WEB Local: one machine shows "test page on real server 1" and the other "test page on real server 2", while ipvsadm on the router shows the connection count for each real server.
8. Setup the FTP service
#vi /etc/vsftpd/vsftpd.conf and add the following lines on both real servers:
anon_root=/u01/ftproot
local_root=/u01/ftproot
setproctitle_enable=YES
#service vsftpd start
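A quick way to exercise the balanced FTP service from a client machine (my own check, not in the original; active mode is forced so the transfer matches the sport-20 masquerade rule above):
#echo hello > /u01/ftproot/hello.txt
#wget --no-passive-ftp ftp://10.3.1.254/hello.txt
Repeat the download a few times and ipvsadm on the router should show the connections spread across ftp1 and ftp2.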
The LVS + GFS + iSCSI + Tomcat stack is now complete. We can test LVS performance with Apache JMeter: run jmeter on two machines, both pointed at 10.3.1.254/myjsp/testoracle.jsp with 200 threads each, and monitor on the router with ipvsadm. The Oracle server had better be fast, otherwise masses of httpd processes hang on the real servers and ipvsadm shows a real server dropping out. During the test the real servers' CPU idle falls to about 70%, while the router's CPU idle barely moves.
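To watch the distribution live during the JMeter run, a one-second refresh of the connection table is enough (standard watch/ipvsadm usage; -n keeps DNS lookups out of the picture):
#watch -n 1 ipvsadm -L -n
ActiveConn and InActConn for the two real servers should climb roughly in step under the wlc scheduler.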