After you gather some page-load times and effective bandwidth for real users all over the world, you can experiment with changes that will improve those times. Measure the difference and keep any that offer a substantial improvement.
Try some of the following:
-
Turn on HTTP keepalives for external objects. Otherwise you add an extra round-trip for another TCP three-way handshake and slow-start to every HTTP request. If you are worried about hitting global server connection limits, set the keepalive timeout to something short, like 5-10 seconds. Also look into serving your static content from a different webserver than your dynamic content: thousands of open connections to a stripped-down static-file webserver can cost as little as 10 megs of RAM in total, whereas your main webserver might easily eat 10 megs of RAM per connection.
-
Load fewer external objects. Due to request overhead, one bigger file loads faster than two smaller ones half its size. Figure out how to globally reference the same one or two javascript files and one or two external stylesheets instead of many; if you have more, try preprocessing them when you publish them. If your UI uses dozens of tiny GIFs all over the place, consider switching to a much cleaner CSS-based design which probably won't need so many images. Or load all of your common UI images in one request using a technique called "CSS sprites".
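The publish-time preprocessing step mentioned above can be a simple concatenation pass. A minimal sketch in Python (the function name and the `;` separator convention are my own, not from the article):

```python
import pathlib

def combine_files(paths, out_path):
    """Concatenate several script (or stylesheet) files into one,
    so a page can reference a single file instead of many."""
    parts = [pathlib.Path(p).read_text() for p in paths]
    # A ';' between scripts guards against source files that omit
    # a trailing semicolon.
    pathlib.Path(out_path).write_text("\n;\n".join(parts))
```

Run this once per release rather than per request, so the served file is static and cacheable.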
-
If your users regularly load a dozen or more uncached or uncacheable objects per page, consider evenly spreading those objects over four hostnames. This usually means your users can have 4x as many outstanding connections to you. Without HTTP pipelining, this results in their average request latency dropping to about 1/4 of what it was before.
When you generate a page, evenly spreading your images over four hostnames is most easily done with a hash function, like MD5. Rather than having all <img> tags load objects from http://static.example.com/, create four hostnames (e.g. static0.example.com, static1.example.com, static2.example.com, static3.example.com) and use two bits from an MD5 of the image path to choose which of the four hosts you reference in the <img> tag. Make sure all pages consistently reference the same hostname for the same image URL, or you'll end up defeating caching.
Beware that each additional hostname adds the overhead of an extra DNS lookup and an extra TCP three-way handshake. If your users have pipelining enabled or a given page loads fewer than around a dozen objects, they will see no benefit from the increased concurrency and the site may actually load more slowly. The benefits only become apparent on pages with larger numbers of objects. Be sure to measure the difference seen by your users if you implement this.
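The two-bits-of-MD5 scheme described above can be sketched in a few lines of Python (the hostnames come from the article; the helper names are hypothetical):

```python
import hashlib

HOSTS = ["static0.example.com", "static1.example.com",
         "static2.example.com", "static3.example.com"]

def host_for(path):
    """Pick one of four static hostnames from two bits of the MD5
    of the image path. The same path always hashes to the same
    host, so browser caching is not defeated."""
    digest = hashlib.md5(path.encode("utf-8")).digest()
    return HOSTS[digest[0] & 0b11]  # low two bits of the first byte

def img_tag(path):
    return '<img src="http://%s%s">' % (host_for(path), path)
```

Because the choice is a pure function of the path, every page that references the same image emits the same hostname.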
-
Possibly the best thing you can do to speed up pages for repeat visitors is to allow static images, stylesheets, and javascript to be unconditionally cached by the browser. This won't help the first page load for a new user, but can substantially speed up subsequent ones.
Set an Expires header on everything you can, with a date days or even months into the future. This tells the browser it is okay to not revalidate on every request, which can add latency of at least one round-trip per object per page load for no reason.
Instead of relying on the browser to revalidate its cache, if you change an object, change its URL. One simple way to do this for static objects if you have staged pushes is to have the push process create a new directory named by the build number, and teach your site to always reference objects out of the current build's base URL. (Instead of <img src="http://example.com/logo.gif"> you'd use <img src="http://example.com/build/1234/logo.gif">. When you do another build next week, all references change to <img src="http://example.com/build/1235/logo.gif">.) This also nicely solves problems with browsers sometimes caching things longer than they should -- since the URL changed, they think it is a completely different object.
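The build-number URL scheme is trivial to centralize in one helper, which is the point: every template goes through it, so a new build changes every static URL at once. A sketch under the article's example URLs (the function name is mine):

```python
BUILD = 1234  # set by the push process for each release

def asset_url(path, build=BUILD):
    """Embed the build number in the URL so any change to a static
    object changes its URL, making unconditional caching safe."""
    return "http://example.com/build/%d%s" % (build, path)
```
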
If you conditionally gzip HTML, javascript, or CSS, you probably want to add a "Cache-Control: private" if you set an Expires header. This will prevent problems with caching by proxies that won't understand that your gzipped content can't be served to everyone. (The Vary header was designed to do this more elegantly, but you can't use it because of IE brokenness.)
For anything where you always serve the exact same content when given the same URL (e.g. static images), add "Cache-Control: public" to give proxies explicit permission to cache the result and serve it to different users. If a caching proxy local to the user has the content, it is likely to have much less latency than you; why not let it serve your static objects if it has them?
Avoid the use of query params in image URLs, etc. At least the Squid cache refuses to cache any URL containing a question mark by default. I've heard rumors that other things won't cache those URLs at all, but I don't have more information.
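Generating the Expires and Cache-Control headers described above is a one-liner with the standard library's RFC-compliant date formatter. A minimal sketch (the function name and the 30-day default are assumptions, not from the article):

```python
import email.utils
import time

def static_cache_headers(days=30, public=True):
    """Headers that let browsers (and, if public, shared proxies)
    cache a static object without revalidating every request."""
    expires = email.utils.formatdate(time.time() + days * 86400,
                                     usegmt=True)
    scope = "public" if public else "private"
    return {
        "Expires": expires,
        "Cache-Control": "%s, max-age=%d" % (scope, days * 86400),
    }
```

Use `public=False` for the conditionally gzipped content discussed above, and `public=True` for immutable static objects.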
-
On pages where your users are often sent the exact same content over and over, such as your home page or RSS feeds, implementing conditional GETs can substantially improve response time and save server load and bandwidth in cases where the page hasn't changed.
When serving static files (including HTML) off of disk, most webservers will generate Last-Modified and/or ETag reply headers for you and make use of the corresponding If-Modified-Since and/or If-None-Match mechanisms on requests. But as soon as you add server-side includes, dynamic templating, or have code generating your content as it is served, you are usually on your own to implement these.
The idea is pretty simple: When you generate a page, you give the browser a little extra information about exactly what was on the page you sent. When the browser asks for the same page again, it gives you this information back. If it matches what you were going to send, you know that the browser already has a copy and send a much smaller 304 (Not Modified) reply instead of the contents of the page again. And if you are clever about what information you include in an ETag, you can usually skip the most expensive database queries that would've gone into generating the page.
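The ETag round-trip described above fits in a few lines. This sketch hashes the full body (assumed helper names; a cleverer ETag could instead hash just the inputs, letting you skip generating the body at all):

```python
import hashlib

def etag_for(body):
    """A strong ETag derived from the exact bytes we would send."""
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body, if_none_match=None):
    """Return (status, headers, payload). If the client's
    If-None-Match matches the ETag of what we were about to send,
    answer 304 with no body instead of resending the page."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, {"ETag": tag}, b""
    return 200, {"ETag": tag}, body
```
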
-
Minimize HTTP request size. Often cookies are set domain-wide, which means they are also unnecessarily sent by the browser with every image request from within that domain. What might've been a 400 byte request for an image could easily turn into 1000 bytes or more once you add the cookie headers. If you have a lot of uncached or uncacheable objects per page and big, domain-wide cookies, consider using a separate domain to host static content, and be sure to never set any cookies in it.
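You can estimate the cookie overhead yourself by counting request bytes. A toy calculation (header values here are illustrative, not measured):

```python
def request_size(path, headers):
    """Rough size in bytes of a GET request line plus headers,
    counting the CRLF after each line and the blank terminator."""
    lines = ["GET %s HTTP/1.1" % path]
    lines += ["%s: %s" % (k, v) for k, v in headers.items()]
    return sum(len(line) + 2 for line in lines) + 2

base = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
# A domain-wide cookie rides along on every image request too.
with_cookie = dict(base, Cookie="session=" + "x" * 600)
```
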
-
Minimize HTTP response size by enabling gzip compression for HTML and XML for browsers that support it. For example, the 17k document you are reading takes 90ms of the full downstream bandwidth of a user on 1.5Mbit DSL, but only 37ms once compressed to 6.8k. That's 53ms off of the full page load time for a simple change. If your HTML is bigger and more redundant, you'll see an even greater improvement.
If you are brave, you could also try to figure out which set of browsers will handle compressed javascript properly. (Hint: IE4 through IE6 ask for their javascript compressed, then break badly if you send it that way.) Or look into javascript obfuscators that strip out whitespace, comments, etc., and usually get it down to 1/3 to 1/2 its original size.
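You can reproduce the bandwidth arithmetic above directly. This sketch compresses a deliberately redundant document and computes raw transfer time at 1.5Mbit (latency ignored; the sample markup is mine):

```python
import gzip

html = b"<p>hello world</p>" * 1000  # redundant markup compresses well
compressed = gzip.compress(html)

def transfer_ms(n_bytes, bandwidth_bps=1_500_000):
    """Milliseconds to move n_bytes over the given downstream
    bandwidth, as in the 1.5Mbit DSL example."""
    return n_bytes * 8 * 1000 / bandwidth_bps

saved = transfer_ms(len(html)) - transfer_ms(len(compressed))
```

At 1.5Mbit, 17k works out to roughly 90ms on the wire, matching the figure quoted above.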
-
Consider locating your small objects (or a mirror or cache of them) closer to your users in terms of network latency. For larger sites with a global reach, either use a commercial Content Delivery Network, or add a colo within 50ms of 80% of your users and use one of the many available methods for routing user requests to the colo nearest them.
-
Regularly use your site from a realistic net connection. Convincing the web developers on my project to use a "slow proxy" that simulates bad DSL in New Zealand (768Kbit down, 128Kbit up, 250ms RTT, 1% packet loss) rather than the gig ethernet a few milliseconds from the servers in the U.S. was a huge win. We found and fixed a number of usability and functional problems very quickly.
To implement the slow proxy, I used the netem and HTB kernel modules available in the Linux 2.6 kernel, both of which are set up with the tc command line tool. These offer the most accurate simulation I could find, but are definitely not for the faint of heart. I've not used them, but supposedly Tamper Data for Firefox, Fiddler for Windows, and Charles for OSX can all rate-limit and are probably easier to set up, but they may not simulate latency properly.
-
Use Firebug for Firefox from a realistic net connection to see a graphical timeline of what it is doing during a page load. This shows where Firefox has to wait for one HTTP request to complete before starting the next one and how page load time increases with each object loaded. YSlow extends Firebug to offer tips on how to improve your site's performance.
The Safari team offers a tip on a hidden feature in their browser that offers some timing data too.
Or if you are familiar with the HTTP protocol and TCP/IP at the packet level, you can watch what is going on using tcpdump, ngrep, or ethereal. These tools are indispensable for all sorts of network debugging.
-
Try benchmarking common pages on your site from a local network with ab, which comes with the Apache webserver. If your server is taking longer than 5 or 10 milliseconds to generate a page, you should make sure you have a good understanding of where it is spending its time.
If your latencies are high and your webserver process (or CGI if you are using that) is eating a lot of CPU during this test, it is often a result of using a scripting language that needs to recompile your scripts with every request. Software like eAccelerator for PHP, mod_perl for perl, mod_python for python, etc can cache your scripts in a compiled state, dramatically speeding up your site. Beyond that, look at finding a profiler for your language that can tell you where you are spending your CPU. If you improve that, your pages will load faster and you'll be able to handle more traffic with fewer machines.
If your site relies on doing a lot of database work or some other time-consuming task to generate the page, consider adding server-side caching of the slow operation. Most people start with writing a cache to local memory or local disk, but that starts to fall down if you expand to more than a few web server machines. Look into using memcached , which essentially creates an extremely fast shared cache that's the combined size of the spare RAM you give it off of all of your machines. It has clients available in most common languages.
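The get/set-with-expiry interface that memcached exposes can be mimicked in-process to show the caching pattern. A toy stand-in (a real deployment would use an actual memcached client; all names here are mine):

```python
import time

class SimpleCache:
    """In-process stand-in for memcached's get/set interface."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.time() > expires:
            del self._store[key]  # expired, as memcached would treat it
            return None
        return value

cache = SimpleCache()

def expensive_page():
    page = cache.get("home")
    if page is None:
        page = "<html>...</html>"  # pretend this came from slow DB queries
        cache.set("home", page, ttl=30)
    return page
```

The pattern is the same with a shared cache: check, regenerate on miss, store with a TTL, so the slow work runs once per TTL instead of once per request.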
-
(Optional) Petition browser vendors to turn on HTTP pipelining by default on new browsers. Doing so will remove some of the need for these tricks and make much of the web feel much faster for the average user. (Firefox has this disabled supposedly because some proxies, some load balancers, and some versions of IIS choke on pipelined requests. But Opera has found sufficient workarounds to enable pipelining by default. Why can't other browsers do similarly?)