loadbalance apache

 

Usually a single AMP system is enough to serve - let's say - around 500 concurrent users. Sometimes more, sometimes less, strongly depending on the particular web application, the overall architecture of your system, of course the hardware itself, and how you define "concurrent users".

Nevertheless, if your server gets too slow, you'll need to take action. You can upgrade your server up to its maximum (aka vertical scaling), optimize your software (aka refactoring), and finally add more servers (aka horizontal scaling). The whole process of horizontal scaling is quite complex and far too much for a single blog post, but here's a first shot. Others will follow.

Today I'll focus on one single aspect of horizontal scaling: an HTTP load balancer.

[Image: loadbalancer1bsc.jpg]

On the left: a whole crowd of people ready to visit our web site. On the right: our server farm (called workers). And in the middle: our current hero, the load balancer. The purpose of the load balancer (in this case an HTTP load balancer) is to distribute all incoming requests to our backend web servers. The load balancer hides all our backend servers from the public, and from the outside it looks like a single server doing all the work.

The Recipe

Okay, let's start. Step by step.

  1. Since version 2.2 the Apache web server ships with a load balancer module called mod_proxy_balancer. All you need to do is enable this module along with mod_proxy and mod_proxy_http:

    LoadModule proxy_module mod_proxy.so
    LoadModule proxy_http_module mod_proxy_http.so
    LoadModule proxy_balancer_module mod_proxy_balancer.so

    Please don't forget to load mod_proxy_http: you won't get any error message if it is missing; the balancer just won't work.
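
    Depending on your distribution, these modules may already be managed for you instead of via hand-written LoadModule lines. As a hedged sketch for a Debian/Ubuntu-style layout (the helper names a2enmod and apache2ctl are assumptions about your installation), you could enable and verify them like this:

        # Enable the three proxy modules (Debian/Ubuntu module helper)
        a2enmod proxy proxy_http proxy_balancer

        # List the proxy modules the running configuration actually loads
        apache2ctl -M | grep proxy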

  2. Because mod_proxy turns Apache into an (open) proxy server, and open proxy servers are dangerous both to your network and to the Internet at large, I disable this feature completely:

    	ProxyRequests Off
    	<Proxy *>
    		Order deny,allow
    		Deny from all
    	</Proxy>
    

    The load balancer doesn't need this feature at all.
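
    A quick sanity check, sketched with an assumed front-end host name of loadbalancer: ask the server to fetch a foreign site as a forward proxy. With ProxyRequests turned off it must not do that, so you should get an error response back instead of the remote site's content:

        # Try to abuse the balancer as a forward proxy; example.com's page should not come back
        curl -I -x http://loadbalancer:80 http://www.example.com/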

  3. Now I need to make sure all my backend web servers have the same content:

    serverA htdocs% cat index.html
    This is A.
    serverB htdocs% cat index.html
    This is B.
    serverC htdocs% cat index.html
    This is C.
    serverD htdocs% cat index.html
    This is D.

    Okay, in this case the content differs, but I need this to show how the load balancer works.
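
    In a real deployment you of course want identical content on every worker. A minimal sketch of one way to get there, assuming the document roots live in /var/www/htdocs and that serverA holds the master copy (paths and host names are placeholders):

        # Push serverA's document root to the other workers
        for host in serverB serverC serverD; do
            rsync -av --delete /var/www/htdocs/ ${host}:/var/www/htdocs/
        done

    Shared storage or a deployment tool does the job just as well; the point is simply that every BalancerMember has to serve the same application.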

  4. And here's the actual load balancer configuration:

    	<Proxy balancer://clusterABCD>
    		BalancerMember http://serverA
    		BalancerMember http://serverB
    		BalancerMember http://serverC
    		BalancerMember http://serverD
    		Order allow,deny
    		Allow from all
    	</Proxy>
    	ProxyPass / balancer://clusterABCD/
    

    The <Proxy>...</Proxy> container defines which backend servers belong to my balancer. I chose the name clusterABCD for this server group, but you are free to choose any name you want.

    And the ProxyPass directive instructs Apache to forward all incoming requests to this group of backend servers.
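
    If your workers are not equally powerful, you can weight them. The following is only a sketch under the assumption that serverA has roughly twice the capacity of the others; the loadfactor parameter defaults to 1 for every member:

        <Proxy balancer://clusterABCD>
            # serverA receives about twice as many requests as each of the others
            BalancerMember http://serverA loadfactor=2
            BalancerMember http://serverB
            BalancerMember http://serverC
            BalancerMember http://serverD
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass / balancer://clusterABCD/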

  5. That's all? Yes, that's all. Here's the proof:

    # repeat 12 lynx -source http://loadbalancer
    This is A.
    This is B.
    This is C.
    This is D.
    This is A.
    This is B.
    This is C.
    This is D.
    This is A.
    This is B.
    This is C.
    This is D.

    Each request to the load balancer is forwarded to one of the backend servers. By default Apache simply counts the requests and makes sure every backend server gets the same number of requests forwarded.

    If you want to know more about the available balancing algorithms, please refer to Apache's mod_proxy_balancer manual.
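
    Two optional extras, shown as a hedged sketch rather than a drop-in configuration: switching the cluster from the default request-counting algorithm (lbmethod=byrequests) to traffic-based balancing, and enabling the balancer-manager status page so you can watch the request distribution. The Allow line restricts the status page to the local machine, so adjust it to your admin network; depending on your Apache version you may also need mod_status loaded for the manager page.

        <Proxy balancer://clusterABCD>
            BalancerMember http://serverA
            BalancerMember http://serverB
            BalancerMember http://serverC
            BalancerMember http://serverD
            # Balance by transferred bytes instead of by request count
            ProxySet lbmethod=bytraffic
            Order allow,deny
            Allow from all
        </Proxy>

        # Keep the status page itself out of the proxied namespace;
        # the exclusion has to come before the catch-all ProxyPass
        ProxyPass /balancer-manager !
        ProxyPass / balancer://clusterABCD/

        <Location /balancer-manager>
            SetHandler balancer-manager
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>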

Did you ever imagine setting up a load balancer would be this easy? Of course, there is more to say about (HTTP) load balancing and much more about vertical scaling too, but this is only a blog post and not the place for such an extensive reference. If time and space allow, I'll go into further detail on this in the near future.
