News: Varnish 2.0.1 Released, a High-Performance HTTP Accelerator

2008-10-22 10:23, by trainee reporter masterkey

Varnish 2.0.1 brings many new features:

  * ESI support
  * Round-robin and random load-balancing support
  * Backend health checks
  * A new object-expiry check
  * OpenSolaris support
  * Assorted small bug fixes

The new release of Varnish is more stable and robust, and performs better.

Download: http://linux.softpedia.com/get/Internet/HTTP-WWW-/Varnish-17663.shtml


Changes:

                           Change log for Varnish 2.0

Changes between 1.1.2 and 2.0

  varnishd

     * Only look for sendfile on platforms where we know how to use it,
       which is FreeBSD for now.
     * Make it possible to adjust the shared memory log size and bump the
       size from 8MB to 80MB.
     * Fix up the handling of request bodies to better match what RFC2616
       mandates. This makes PUT, DELETE, OPTIONS and TRACE work in
       addition to POST.
     * Change how backends are defined, to a constant structural
       definition style. See
       http://varnish.projects.linpro.no/wiki/VclSyntaxChanges for the
       details.
     * Add directors, which wrap backends. Currently, there's a random
       director and a round-robin director.
     * Add "grace", which is for how long an object will be served, even
       after it has expired. To use this, both the object's and the
       request's grace parameter need to be set.
     * Manual pages and other documentation have been updated for the new
       VCL syntax and varnishd options.
     * The shared memory log file is now locked in memory, so it should
       not be paged out to disk.
     * We now handle Vary correctly, as well as Expect.
     * ESI include support is implemented.
     * Make it possible to limit how much memory the malloc storage
       backend uses.
     * Solaris is now supported.
     * There is now a regsuball function, which works like regsub except
       it replaces all occurrences of the regex, not just the first.
     * Backend and director declarations can have a .connect_timeout
       parameter, which tells us how long to wait for a successful
       connection.
     * It is now possible to select the acceptor to use by changing the
       acceptor parameter.
     * Backends can have probes associated with them, which can be checked
       with req.backend.health in VCL as well as being handled by
       directors which do load-balancing.
     * Support larger-than-2GB files also on 32 bit hosts. Please note
       that this does not mean we can support caches bigger than 2GB, it
       just means logfiles and similar can be bigger.
     * In some cases, we would remove the wrong header when we were
       stripping Content-Transfer-Encoding headers from a request. This
       has been fixed.
     * Backends can have a .max_connections associated with them.
     * On Linux, we need to set the dumpable bit on the child if we want
       core dumps. Make sure it's set.
     * Doing purge.hash() with an empty string would cause us to dump
       core. Fixed so we don't do that any more.
     * We ran into a problem with glibc's malloc on Linux where it seemed
       like it failed to ever give memory back to the OS, causing the
       system to swap. We have now switched to jemalloc which appears not
       to have this problem.
     * max_restarts was never checked, so we always ended up running out
       of workspace. Now, vcl_error is called when we reach max_restarts.
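Several of the items above come together in the new backend declaration style. As a rough sketch of how a 2.0-style backend with the new .connect_timeout and .max_connections parameters might be wrapped by a round-robin director (host names and all values here are placeholders; the authoritative syntax is on the VclSyntaxChanges wiki page linked above):

```vcl
# New-style constant structural backend declarations
backend www1 {
    .host = "www1.example.com";   # placeholder host
    .port = "80";
    .connect_timeout = 1s;        # give up if no connection within 1 second
    .max_connections = 100;       # cap concurrent connections to this backend
}

backend www2 {
    .host = "www2.example.com";
    .port = "80";
}

# A round-robin director cycling over the two backends;
# "random" is the other director type mentioned above
director balanced round-robin {
    { .backend = www1; }
    { .backend = www2; }
}

sub vcl_recv {
    set req.backend = balanced;
}
```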
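Since grace requires both the object's and the request's parameter to be set, a minimal sketch needs two subroutines; the 30-second value is an arbitrary example:

```vcl
sub vcl_recv {
    # this request accepts content up to 30s past its expiry
    set req.grace = 30s;
}

sub vcl_fetch {
    # keep this object around, servable, for 30s after its TTL runs out
    set obj.grace = 30s;
}
```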
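The difference between regsub and regsuball can be illustrated with a cookie-stripping rule; the cookie name pattern below is a hypothetical example, not something mandated by Varnish:

```vcl
sub vcl_recv {
    # regsub would remove only the first matching cookie;
    # regsuball removes every occurrence in the header
    set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
}
```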
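A backend probe plus the req.backend.health check might look like the following sketch; the probe URL, intervals, and thresholds are illustrative values only:

```vcl
backend www1 {
    .host = "www1.example.com";   # placeholder host
    .port = "80";
    .probe = {
        .url = "/";          # fetch this URL to judge health
        .interval = 5s;      # probe every 5 seconds
        .timeout = 1s;       # a probe slower than this counts as failed
        .window = 8;         # consider the last 8 probes
        .threshold = 6;      # at least 6 of them must have succeeded
    }
}

sub vcl_recv {
    if (!req.backend.health) {
        # backend looks sick: be willing to serve long-expired objects
        set req.grace = 1h;
    }
}
```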

  varnishtest

     * varnishtest is a tool to do correctness tests of varnishd. The test
       suite is run by using make check.

  varnishtop

     * We now set the field widths dynamically based on the size of the
       terminal and the name of the longest field.

  varnishstat

     * varnishstat -1 now displays the uptime too.

  varnishncsa

     * varnishncsa now does fflush after each write. This makes tail -f
       work correctly, as well as avoiding broken lines in the log file.
     * It is possible to get varnishncsa to output the X-Forwarded-For
       instead of the client IP by passing -f to it.

  Build system

     * Various sanity checks have been added to configure: it now
       complains if ncurses is missing, or if SO_RCVTIMEO or SO_SNDTIMEO
       are non-functional. It also aborts if there is no working acceptor
       mechanism.
     * The C compiler invocation is decided by the configure script and
       can now be overridden by passing VCC_CC when running configure.