3. DOM Scripting
DOM scripting is expensive, and it’s a common performance bottleneck in rich web applications. This chapter discusses the areas of DOM scripting that can have a negative effect on an application’s responsiveness and gives recommendations on how to improve response time. The three categories of problems discussed in the chapter include:
1. Accessing and modifying DOM elements
2. Modifying the styles of DOM elements and causing repaints and reflows
3. Handling user interaction through DOM events
- DOM in the Browser World
The Document Object Model (DOM) is a language-independent application programming interface (API) for working with XML and HTML documents. In the browser, you mostly work with HTML documents, although it’s not uncommon for web applications to retrieve XML documents and use the DOM APIs to access data from those documents.
Even though the DOM is a language-independent API, in the browser the interface is implemented in JavaScript. Since most of the work in client-side scripting has to do with the underlying document, DOM is an important part of everyday JavaScript coding.
It’s common across browsers to keep DOM and JavaScript implementations independent of each other. In Internet Explorer, for example, the JavaScript implementation is called JScript and lives in a library file called jscript.dll, while the DOM implementation lives in another library, mshtml.dll (internally called Trident). This separation allows other technologies and languages, such as VBScript, to benefit from the DOM and the rendering functionality Trident has to offer. Safari uses WebKit’s WebCore for DOM and rendering and has a separate JavaScriptCore engine (dubbed SquirrelFish in its latest version). Google Chrome also uses WebCore libraries from WebKit for rendering pages but implements its own JavaScript engine called V8. In Firefox, SpiderMonkey (the latest version is called TraceMonkey) is the JavaScript implementation, a separate part of the Gecko rendering engine.
Inherently Slow
What does that mean for performance? Simply having two separate pieces of functionality interfacing with each other will always come at a cost. An excellent analogy is to think of DOM as a piece of land and JavaScript (meaning ECMAScript) as another piece of land, both connected with a toll bridge (see John Hrvatin, Microsoft, MIX09, http://videos.visitmix.com/MIX09/T53F). Every time your ECMAScript needs access to the DOM, you have to cross this bridge and pay the performance toll fee. The more you work with the DOM, the more you pay. So the general recommendation is to cross that bridge as few times as possible and strive to stay in ECMAScript land. The rest of the chapter focuses on what this means exactly and where to look in order to make user interactions faster.
DOM Access and Modification
Simply accessing a DOM element comes at a price—the “toll fee” discussed earlier. Modifying elements is even more expensive because it often causes the browser to recalculate changes in the page geometry.
Naturally, the worst case of accessing or modifying elements is when you do it in loops, and especially in loops over HTML collections.
Just to give you an idea of the scale of the problems with DOM scripting, consider this simple example:

function innerHTMLLoop() {
    for (var count = 0; count < 15000; count++) {
        document.getElementById('here').innerHTML += 'a';
    }
}
This is a function that updates the contents of a page element in a loop. The problem with this code is that for every loop iteration, the element is accessed twice: once to read the value of the innerHTML property and once to write it.
A more efficient version of this function would use a local variable to store the updated contents and then write the value only once at the end of the loop:

function innerHTMLLoop2() {
    var content = '';
    for (var count = 0; count < 15000; count++) {
        content += 'a';
    }
    document.getElementById('here').innerHTML += content;
}
This new version of the function will run much faster across all browsers. Figure 3-1 shows the results of measuring the time improvement in different browsers. The y-axis in the figure (as with all the figures in this chapter) shows execution time improvement, i.e., how much faster it is to use one approach versus another. In this case, for example, using innerHTMLLoop2() is 155 times faster than innerHTMLLoop() in IE6.
As these results clearly show, the more you access the DOM, the slower your code executes. Therefore, the general rule of thumb is this: touch the DOM lightly, and stay within ECMAScript as much as possible.
a.innerHTML Versus DOM methods
Over the years, there have been many discussions in the web development community over this question: is it better to use the nonstandard but well-supported innerHTML property to update a section of a page, or is it best to use only the pure DOM methods, such as document.createElement()? Leaving the web standards discussion aside, does it matter for performance? The answer is: it matters increasingly less, but still, innerHTML is faster in all browsers except the latest WebKit-based ones (Chrome and Safari). The benefits of innerHTML are more obvious in older browser versions (innerHTML is 3.6 times faster in IE6), but the benefits are less pronounced in newer versions. And in newer WebKit-based browsers it’s the opposite: using DOM methods is slightly faster. So the decision about which approach to take will depend on the browsers your users are commonly using, as well as your coding preferences.
As a side note, keep in mind that this example used string concatenation, which is not optimal in older IE versions. Using an array to concatenate large strings will make innerHTML even faster in those browsers.
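As a rough sketch of both points, here is what the two approaches might look like when building the same list; the element id 'here' and the item count are arbitrary placeholders rather than the book's benchmark code:

// innerHTML approach: build the markup in an array, join once, write to the DOM once
function buildListInnerHTML() {
    var html = [];
    for (var i = 0; i < 1000; i++) {
        html.push('<li>item ' + i + '</li>');
    }
    document.getElementById('here').innerHTML = '<ul>' + html.join('') + '</ul>';
}

// pure DOM methods approach: createElement()/createTextNode()/appendChild()
function buildListDOM() {
    var ul = document.createElement('ul'), li;
    for (var i = 0; i < 1000; i++) {
        li = document.createElement('li');
        li.appendChild(document.createTextNode('item ' + i));
        ul.appendChild(li);
    }
    document.getElementById('here').appendChild(ul);
}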
Using innerHTML will give you faster execution in most browsers in performance-critical operations that require updating a large part of the HTML page. But for most everyday cases there isn’t a big difference, and so you should consider readability, maintenance, team preferences, and coding conventions when deciding on your approach.
b.Cloning Nodes
Another way of updating page contents using DOM methods is to clone existing DOM elements instead of creating new ones—in other words, using element.cloneNode() (where element is an existing node) instead of document.createElement().
Cloning nodes is more efficient in most browsers, but not by a big margin. Regenerating the table from the previous example by creating the repeating elements only once and then copying them results in slightly faster execution times:
• 2% in IE8, but no change in IE6 and IE7
• Up to 5.5% in Firefox 3.5 and Safari 4
• 6% in Opera (but no savings in Opera 10)
• 10% in Chrome 2 and 3% in Chrome 3
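A minimal sketch of the cloning pattern, assuming a hypothetical container with id 'here' (the book's benchmark regenerates a table; this simplified list version only illustrates the idea of creating the repeated element once and cloning it afterwards):

function buildListByCloning() {
    var ul = document.createElement('ul'),
        // create the repeating element once...
        template = document.createElement('li'),
        li;
    for (var i = 0; i < 1000; i++) {
        // ...and clone it on every iteration instead of calling createElement() again
        li = template.cloneNode(false);
        li.appendChild(document.createTextNode('item ' + i));
        ul.appendChild(li);
    }
    document.getElementById('here').appendChild(ul);
}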
c.HTML Collections
HTML collections are array-like objects containing DOM node references. Examples of collections are the values returned by the following methods:
• document.getElementsByName()
• document.getElementsByClassName()
• document.getElementsByTagName()
The following properties also return HTML collections:
• document.images: all img elements on the page
• document.links: all a elements
• document.forms: all forms
• document.forms[0].elements: all fields in the first form on the page
These methods and properties return HTMLCollection objects, which are array-like lists. They are not arrays (because they don’t have methods such as push() or slice()), but provide a length property just like arrays and allow indexed access to the elements in the list. For example, document.images[1] returns the second element in the collection. As defined in the DOM standard, HTML collections are “assumed to be live, meaning that they are automatically updated when the underlying document is updated” (see http://www.w3.org/TR/DOM-Level-2-HTML/html.html#ID-75708506).
The HTML collections are in fact queries against the document, and these queries are being reexecuted every time you need up-to-date information, such as the number of elements in the collection (i.e., the collection’s length). This could be a source of inefficiencies.
To demonstrate that the collections are live, consider the following snippet:
// an accidentally infinite loop
var alldivs = document.getElementsByTagName('div');
for (var i = 0; i < alldivs.length; i++) {
    document.body.appendChild(document.createElement('div'));
}
This code looks like it simply doubles the number of div elements on the page. It loops through the existing divs and creates a new div every time, appending it to the body. But this is in fact an infinite loop because the loop’s exit condition, alldivs.length, increases by one with every iteration, reflecting the current state of the underlying document.
Looping through HTML collections like this may lead to logic mistakes, but it’s also slower, due to the fact that the query needs to run on every iteration. When the length of the collection is accessed on every iteration, it causes the collection to be updated and has a significant performance penalty across all browsers. The way to optimize this is to simply cache the length of the collection into a variable and use this variable to compare in the loop’s exit condition:
function loopCacheLengthCollection() {
    var coll = document.getElementsByTagName('div'),
        len = coll.length;
    for (var count = 0; count < len; count++) {
        // ...
    }
}
The previous example used just an empty loop, but what happens when the elements of the collection are accessed within the loop?
In general, for any type of DOM access it’s best to use a local variable when the same DOM property or method is accessed more than once. When looping over a collection, the first optimization is to store the collection in a local variable and cache the length outside the loop, and then use a local variable inside the loop for elements that are accessed more than once.
In the next example, three properties of each element are accessed within the loop. The slowest version accesses the global document every time, an optimized version caches a reference to the collection, and the fastest version also stores the current element of the collection into a variable. All three versions cache the length of the collection.
// slow
function collectionGlobal() {
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '';
    for (var count = 0; count < len; count++) {
        name = document.getElementsByTagName('div')[count].nodeName;
        name = document.getElementsByTagName('div')[count].nodeType;
        name = document.getElementsByTagName('div')[count].tagName;
    }
    return name;
};

// faster
function collectionLocal() {
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '';
    for (var count = 0; count < len; count++) {
        name = coll[count].nodeName;
        name = coll[count].nodeType;
        name = coll[count].tagName;
    }
    return name;
};

// fastest
function collectionNodesLocal() {
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '',
        el = null;
    for (var count = 0; count < len; count++) {
        el = coll[count];
        name = el.nodeName;
        name = el.nodeType;
        name = el.tagName;
    }
    return name;
};
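When the loop body does heavy work with every element, the chapter's summary also suggests copying the collection into a plain array first, so that later reads don't touch the live document at all. A minimal sketch of such a copy, assuming read-only access to the nodes is enough:

// copy an HTMLCollection into a static array so that subsequent reads
// don't re-query the live document
function toArray(coll) {
    var arr = [];
    for (var i = 0, len = coll.length; i < len; i++) {
        arr[i] = coll[i];
    }
    return arr;
}

// usage: one live query, then work with the static copy
var divs = toArray(document.getElementsByTagName('div'));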
- Walking the DOM
The DOM API provides multiple avenues to access specific parts of the overall document structure. In cases when you can choose between approaches, it’s beneficial to use the most efficient API for a specific job.
a.Crawling the DOM
Often you need to start from a DOM element and work with the surrounding elements, maybe recursively iterating over all children. You can do so by using the childNodes collection or by getting each element’s sibling using nextSibling.
Consider these two equivalent approaches to a nonrecursive visit of an element’s children:

function testNextSibling() {
    var el = document.getElementById('mydiv'),
        ch = el.firstChild,
        name = '';
    do {
        name = ch.nodeName;
    } while (ch = ch.nextSibling);
    return name;
};

function testChildNodes() {
    var el = document.getElementById('mydiv'),
        ch = el.childNodes,
        len = ch.length,
        name = '';
    for (var count = 0; count < len; count++) {
        name = ch[count].nodeName;
    }
    return name;
};
Bear in mind that childNodes is a collection and should be approached carefully, caching the length in loops so it’s not updated on every iteration.
The two approaches are mostly equal in terms of execution time across browsers. But in IE, nextSibling performs much better than childNodes. In IE6, nextSibling is 16 times faster, and in IE7 it’s 105 times faster. Given these results, using nextSibling is the preferred method of crawling the DOM in older IE versions in performance-critical cases. In all other cases, it’s mostly a question of personal and team preference.
b.Element nodes
DOM properties such as childNodes, firstChild, and nextSibling don’t distinguish between element nodes and other node types, such as comments and text nodes (which are often just spaces between two tags). In many cases, only the element nodes need to be accessed, so in a loop it’s likely that the code needs to check the type of node returned and filter out nonelement nodes. This type checking and filtering is unnecessary DOM work. Many modern browsers offer APIs that only return element nodes. It’s better to use those when available, because they’ll be faster than if you do the filtering yourself in JavaScript. Table 3-1 lists those convenient DOM properties.
Table 3-1. DOM properties that distinguish element nodes (HTML tags) versus all nodes

Property                  Use as a replacement for
children                  childNodes
childElementCount         childNodes.length
firstElementChild         firstChild
lastElementChild          lastChild
nextElementSibling        nextSibling
previousElementSibling    previousSibling
All of the properties listed in Table 3-1 are supported as of Firefox 3.5, Safari 4, Chrome 2, and Opera 9.62. Of these properties, IE versions 6, 7, and 8 only support children.
Looping over children instead of childNodes is faster because there are usually fewer items to loop over. Whitespace in the HTML source code is actually made up of text nodes, which are not included in the children collection. children is faster than childNodes across all browsers, although usually not by a big margin—1.5 to 3 times faster. One notable exception is IE, where iterating over the children collection is significantly faster than iterating over childNodes—24 times faster in IE6 and 124 times faster in IE7.
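To make the savings concrete, here is a hedged sketch of the manual filtering that children spares you from; nodeType === 1 identifies element nodes:

// manual filtering with childNodes: every text/comment node is visited and skipped
function countElementsManually(el) {
    var count = 0, nodes = el.childNodes;
    for (var i = 0, len = nodes.length; i < len; i++) {
        if (nodes[i].nodeType === 1) { // 1 === ELEMENT_NODE
            count++;
        }
    }
    return count;
}

// using the element-only API where supported: no filtering needed
function countElements(el) {
    return el.children ? el.children.length : countElementsManually(el);
}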
c.The Selectors API
When identifying the elements in the DOM to work with, developers often need finer control than methods such as getElementById() and getElementsByTagName() can provide. Sometimes you combine these calls and iterate over the returned nodes in order to get to the list of elements you need, but this refinement process can become inefficient.
On the other hand, using CSS selectors is a convenient way to identify nodes because developers are already familiar with CSS. Many JavaScript libraries have provided APIs for that purpose, and now recent browser versions provide a method called querySelectorAll() as a native browser DOM method. Naturally this approach is faster than using JavaScript and DOM to iterate and narrow down a list of elements. Consider the following:

var elements = document.querySelectorAll('#menu a');
The value of elements will contain a list of references to all a elements found inside an element with id="menu". The method querySelectorAll() takes a CSS selector string as an argument and returns a NodeList—an array-like object containing matching nodes. The method doesn’t return an HTML collection, so the returned nodes do not represent the live structure of the document. This avoids the performance (and potentially logic) issues with HTML collections discussed previously in this chapter.
To achieve the same goal as the preceding code without using querySelectorAll(), you will need the more verbose:

var elements = document.getElementById('menu').getElementsByTagName('a');
In this case elements will be an HTML collection, so you’ll also need to copy it into an array if you want the exact same type of static list as returned by querySelectorAll(). Using querySelectorAll() is even more convenient when you need to work with a union of several queries. For example, if the page has some div elements with a class name of “warning” and some with a class of “notice”, to get a list of all of them you can use querySelectorAll():

var errs = document.querySelectorAll('div.warning, div.notice');
Getting the same list without querySelectorAll() is considerably more work. One way is to select all div elements and iterate through them to filter out the ones you don’t need:

var errs = [],
    divs = document.getElementsByTagName('div'),
    classname = '';
for (var i = 0, len = divs.length; i < len; i++) {
    classname = divs[i].className;
    if (classname === 'notice' || classname === 'warning') {
        errs.push(divs[i]);
    }
}
The Selectors API is supported natively in browsers as of these versions: Internet Explorer 8, Firefox 3.5, Safari 3.1, Chrome 1, and Opera 10.
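Given that support matrix, one defensive pattern is to feature-test for the method and fall back to the manual filtering shown above. A minimal sketch, reusing the 'warning'/'notice' classes from the previous example:

function getWarningsAndNotices() {
    if (document.querySelectorAll) {
        // native Selectors API: returns a static NodeList
        return document.querySelectorAll('div.warning, div.notice');
    }
    // fallback: select all divs and filter by class name
    var errs = [], divs = document.getElementsByTagName('div'), classname = '';
    for (var i = 0, len = divs.length; i < len; i++) {
        classname = divs[i].className;
        if (classname === 'notice' || classname === 'warning') {
            errs.push(divs[i]);
        }
    }
    return errs;
}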
- Repaints and Reflows
Once the browser has downloaded all the components of a page—HTML markup, JavaScript, CSS, images—it parses through the files and creates two internal data structures:
• A DOM tree: a representation of the page structure
• A render tree: a representation of how the DOM nodes will be displayed
The render tree has at least one node for every node of the DOM tree that needs to be displayed (hidden DOM elements don’t have a corresponding node in the render tree). Nodes in the render tree are called frames or boxes in accordance with the CSS model that treats page elements as boxes with padding, margins, borders, and position. Once the DOM and the render trees are constructed, the browser can display (“paint”) the elements on the page.
When a DOM change affects the geometry of an element (width and height)—such as a change in the thickness of the border or adding more text to a paragraph, resulting in an additional line—the browser needs to recalculate the geometry of the element as well as the geometry and position of other elements that could have been affected by the change. The browser invalidates the part of the render tree that was affected by the change and reconstructs the render tree. This process is known as a reflow. Once the reflow is complete, the browser redraws the affected parts of the screen in a process called repaint.
Not all DOM changes affect the geometry. For example, changing the background color of an element won’t change its width or height. In this case, there is a repaint only (no reflow), because the layout of the element hasn’t changed. Repaints and reflows are expensive operations and can make the UI of a web application less responsive. As such, it’s important to reduce their occurrences whenever possible.
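As a quick illustration of the distinction (the element id is a placeholder):

var el = document.getElementById('mydiv');

// geometry unchanged: the browser only needs to repaint the element
el.style.backgroundColor = 'lightblue';

// geometry changed: the browser must reflow (recalculate layout) and then repaint
el.style.width = '300px';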
a.When Does a Reflow Happen?
As mentioned earlier, a reflow is needed whenever layout and geometry change. This happens when:
• Visible DOM elements are added or removed
• Elements change position
• Elements change size (because of a change in margin, padding, border thickness, width, height, etc.)
• Content is changed, e.g., text changes or an image is replaced with one of a different size.
• Page renders initially
• Browser window is resized
Depending on the nature of the change, a smaller or bigger part of the render tree needs to be recalculated. Some changes may cause a reflow of the whole page: for example, when a scroll bar appears.
b.Queuing and Flushing Render Tree Changes
Because of the computation costs associated with each reflow, most browsers optimize the reflow process by queuing changes and performing them in batches. However, you may (often involuntarily) force the queue to be flushed and require that all scheduled changes be applied right away. Flushing the queue happens when you want to retrieve layout information, which means using any of the following:
• offsetTop, offsetLeft, offsetWidth, offsetHeight
• scrollTop, scrollLeft, scrollWidth, scrollHeight
• clientTop, clientLeft, clientWidth, clientHeight
• getComputedStyle() (currentStyle in IE)
The layout information returned by these properties and methods needs to be up to date, and so the browser has to execute the pending changes in the rendering queue and reflow in order to return the correct values.
During the process of changing styles, it’s best not to use any of the properties shown in the preceding list. All of these will flush the render queue, even in cases where you’re retrieving layout information that wasn’t recently changed or isn’t even relevant to the latest changes.
Consider the following example of changing the same style property three times (this is probably not something you’ll see in real code, but is an isolated illustration of an important topic):

// setting and retrieving styles in succession
var computed,
    tmp = '',
    bodystyle = document.body.style;

if (document.body.currentStyle) { // IE, Opera
    computed = document.body.currentStyle;
} else { // W3C
    computed = document.defaultView.getComputedStyle(document.body, '');
}

// inefficient way of modifying the same property
// and retrieving style information right after
bodystyle.color = 'red';
tmp = computed.backgroundColor;
bodystyle.color = 'white';
tmp = computed.backgroundImage;
bodystyle.color = 'green';
tmp = computed.backgroundAttachment;
In this example, the foreground color of the body element is being changed three times,and after every change, a computed style property is retrieved. The retrieved properties—backgroundColor, backgroundImage, and backgroundAttachment—are unrelated to the color being changed. Yet the browser needs to flush the render queue and reflow due to the fact that a computed style property was requested.
A better approach than this inefficient example is to never request layout information while it’s being changed. If the computed style retrieval is moved to the end, the code looks like this:

bodystyle.color = 'red';
bodystyle.color = 'white';
bodystyle.color = 'green';
tmp = computed.backgroundColor;
tmp = computed.backgroundImage;
tmp = computed.backgroundAttachment;
c.Minimizing Repaints and Reflows
Reflows and repaints can be expensive, and therefore a good strategy for responsive applications is to reduce their number. In order to minimize this number, you should combine multiple DOM and style changes into a batch and apply them once.
Style changes
Consider this example:

var el = document.getElementById('mydiv');
el.style.borderLeft = '1px';
el.style.borderRight = '2px';
el.style.padding = '5px';
Here there are three style properties being changed, each of them affecting the geometry of the element. In the worst case, this will cause the browser to reflow three times. Most modern browsers optimize for such cases and reflow only once, but it can still be inefficient in older browsers or if there’s a separate asynchronous process happening at the same time (i.e., using a timer). If other code is requesting layout information while this code is running, it could cause up to three reflows. Also, the code is touching the DOM four times and can be optimized.
A more efficient way to achieve the same result is to combine all the changes and apply them at once, modifying the DOM only once. This can be done using the cssText property:

var el = document.getElementById('mydiv');
el.style.cssText = 'border-left: 1px; border-right: 2px; padding: 5px;';
Modifying the cssText property as shown in the example overwrites existing style information, so if you want to keep the existing styles, you can append to the cssText string instead:

el.style.cssText += '; border-left: 1px;';
Another way to apply style changes only once is to change the CSS class name instead of changing the inline styles. This approach is applicable in cases when the styles do not depend on runtime logic and calculations. Changing the CSS class name is cleaner and more maintainable; it helps keep your scripts free of presentation code, although it might come with a slight performance hit because the cascade needs to be checked when changing classes.
var el = document.getElementById('mydiv'); el.className = 'active';
Batching DOM changes
When you have a number of changes to apply to a DOM element, you can reduce the number of repaints and reflows by following these steps:
1. Take the element off of the document flow.
2. Apply multiple changes.
3. Bring the element back to the document.
This process causes two reflows—one at step 1 and one at step 3. If you omit those steps, every change you make in step 2 could cause its own reflows. There are three basic ways to modify the DOM off the document:
• Hide the element, apply changes, and show it again.
• Use a document fragment to build a subtree outside of the live DOM and then copy it to the document.
• Copy the original element into an off-document node, modify the copy, and then replace the original element once you’re done.
To illustrate the off-document manipulations, consider a list of links that must be updated with more information:

<ul id="mylist">
    <li><a href="http://phpied.com">Stoyan</a></li>
    <li><a href="http://julienlecomte.com">Julien</a></li>
</ul>
Suppose additional data, already contained in an object, needs to be inserted into this list. The data is defined as:
var data = [
    { "name": "Nicholas", "url": "http://nczonline.net" },
    { "name": "Ross", "url": "http://techfoolery.com" }
];
The following is a generic function to update a given node with new data:

function appendDataToElement(appendToElement, data) {
    var a, li;
    for (var i = 0, max = data.length; i < max; i++) {
        a = document.createElement('a');
        a.href = data[i].url;
        a.appendChild(document.createTextNode(data[i].name));
        li = document.createElement('li');
        li.appendChild(a);
        appendToElement.appendChild(li);
    }
};
The most obvious way to update the list with the data without worrying about reflows would be the following:

var ul = document.getElementById('mylist');
appendDataToElement(ul, data);
Using this approach, however, every new entry from the data array will be appended to the live DOM tree and cause a reflow. As discussed previously, one way to reduce reflows is to temporarily remove the <ul> element from the document flow by changing the display property and then revert it:

var ul = document.getElementById('mylist');
ul.style.display = 'none';
appendDataToElement(ul, data);
ul.style.display = 'block';
Another way to minimize the number of reflows is to create and update a document fragment, completely off the document, and then append it to the original list. A document fragment is a lightweight version of the document object, and it’s designed to help with exactly this type of task—updating and moving nodes around. One syntactically convenient feature of document fragments is that when you append a fragment to a node, the fragment’s children actually get appended, not the fragment itself. The following solution takes one less line of code, causes only one reflow, and touches the live DOM only once:

var fragment = document.createDocumentFragment();
appendDataToElement(fragment, data);
document.getElementById('mylist').appendChild(fragment);
A third solution would be to create a copy of the node you want to update, work on the copy, and then, once you’re done, replace the old node with the newly updated copy:

var old = document.getElementById('mylist');
var clone = old.cloneNode(true);
appendDataToElement(clone, data);
old.parentNode.replaceChild(clone, old);
The recommendation is to use document fragments (the second solution) whenever possible because they involve the least amount of DOM manipulations and reflows. The only potential drawback is that the practice of using document fragments is currently underused and some team members may not be familiar with the technique.
- Caching Layout Information
As already mentioned, browsers try to minimize the number of reflows by queuing changes and executing them in batches. But when you request layout information such as offsets, scroll values, or computed style values, the browser flushes the queue and applies all the changes in order to return the updated value. It is best to minimize the number of requests for layout information, and when you do request it, assign it to local variables and work with the local values.
Consider an example of moving an element myElement diagonally, one pixel at a time, starting from position 100 × 100px and ending at 500 × 500px. In the body of a timeout loop you could use:

// inefficient
myElement.style.left = 1 + myElement.offsetLeft + 'px';
myElement.style.top = 1 + myElement.offsetTop + 'px';
if (myElement.offsetLeft >= 500) {
    stopAnimation();
}
This is not efficient, though, because every time the element moves, the code requests the offset values, causing the browser to flush the rendering queue and not benefit from its optimizations. A better way to do the same thing is to take the starting position value once and assign it to a variable such as var current = myElement.offsetLeft;. Then, inside of the animation loop, work with the current variable and don’t request offsets:

current++;
myElement.style.left = current + 'px';
myElement.style.top = current + 'px';
if (current >= 500) {
    stopAnimation();
}
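Wiring that snippet into a complete (if simplified) animation loop might look like the sketch below; the element id, the 10 ms timer interval, and the empty stopAnimation() are assumptions made only for illustration, and the element is assumed to be absolutely positioned:

var myElement = document.getElementById('mydiv');

// read the starting offset once, outside the loop
var current = myElement.offsetLeft;

function stopAnimation() {
    // placeholder: any cleanup would go here
}

function step() {
    current++;
    // writes only; no layout reads inside the loop
    myElement.style.left = current + 'px';
    myElement.style.top = current + 'px';
    if (current >= 500) {
        stopAnimation();
    } else {
        setTimeout(step, 10);
    }
}

step();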
- Take Elements Out of the Flow for Animations
Showing and hiding parts of a page in an expand/collapse manner is a common interaction pattern. It often includes geometry animation of the area being expanded, which pushes down the rest of the content on the page.
Reflows sometimes affect only a small part of the render tree, but they can affect a larger portion, or even the whole tree. The less the browser needs to reflow, the more responsive your application will be. So when an animation at the top of the page pushes down almost the whole page, this will cause a big reflow and can be expensive, appearing choppy to the user. The more nodes in the render tree that need recalculation, the worse it becomes.
A technique to avoid a reflow of a big part of the page is to use the following steps:
1. Use absolute positioning for the element you want to animate on the page, taking it out of the layout flow of the page.
2. Animate the element. When it expands, it will temporarily cover part of the page. This is a repaint, but only of a small part of the page instead of a reflow and repaint of a big page chunk.
3. When the animation is done, restore the positioning, thereby pushing down the rest of the document only once.
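A minimal sketch of these three steps, assuming a hypothetical element with id 'expandable' and a fixed 300px target height:

var el = document.getElementById('expandable');

// 1. take the element out of the layout flow
el.style.position = 'absolute';

// 2. animate the height with a simple timer; while this runs, the element only
//    covers part of the page (repaints) instead of pushing content down (reflows)
var height = 0;
function expandStep() {
    height += 10;
    el.style.height = height + 'px';
    if (height < 300) {
        setTimeout(expandStep, 16);
    } else {
        // 3. animation done: restore the positioning, pushing the rest of the
        //    document down only once
        el.style.position = 'static';
    }
}
expandStep();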
- IE and :hover
Since version 7, IE can apply the :hover CSS pseudo-selector on any element (in strict mode). However, if you have a significant number of elements with a :hover, the responsiveness degrades. The problem is even more visible in IE 8.
For example, if you create a table with 500–1000 rows and 5 columns and use tr:hover to change the background color and highlight the row the user is on, the performance degrades as the user moves over the table. The highlight is slow to apply, and the CPU usage increases to 80%–90%. So avoid this effect when you work with a large number of elements, such as big tables or long item lists.
- Event Delegation
When there are a large number of elements on a page and each of them has one or more event handlers attached (such as onclick), this may affect performance. Attaching every handler comes at a price—either in the form of heavier pages (more markup or JavaScript code) or in the form of runtime execution time. The more DOM nodes you need to touch and modify, the slower your application, especially because the event attaching phase usually happens at the onload (or DOMContentReady) event, which is a busy time for every interaction-rich web page. Attaching events takes processing time, and, in addition, the browser needs to keep track of each handler, which takes up memory. And at the end of it, a great number of these event handlers might never be needed (because the user clicked one button or link, not all 100 of them, for example), so a lot of the work might not be necessary.
A simple and elegant technique for handling DOM events is event delegation. It’s based on the fact that events bubble up and can be handled by a parent element. With event delegation, you attach only one handler on a wrapper element to handle all the events that happen to the descendants of that wrapper.
According to the DOM standard, each event has three phases:
• Capturing
• At target
• Bubbling
Capturing is not supported by IE, but bubbling is good enough for the purposes of delegation.
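A hedged sketch of the pattern, assuming a hypothetical list with id 'menu' whose links should all be handled by a single listener on the wrapper (the addEventListener/attachEvent and target/srcElement branching covers older IE):

function delegateMenuClicks(e) {
    // cross-browser access to the event object and its target
    e = e || window.event;
    var target = e.target || e.srcElement;

    // only react when the bubbled event originated on a link
    if (target.nodeName.toLowerCase() === 'a') {
        // handle the click here, e.g. load content for target.href

        // prevent the default navigation (cross-browser)
        if (e.preventDefault) {
            e.preventDefault();
        } else {
            e.returnValue = false;
        }
    }
}

// one handler on the wrapper instead of one per link
var menu = document.getElementById('menu');
if (menu.addEventListener) {
    menu.addEventListener('click', delegateMenuClicks, false);
} else {
    menu.attachEvent('onclick', delegateMenuClicks);
}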
- Summary
DOM access and manipulation are an important part of modern web applications. But every time you cross the bridge from ECMAScript to DOM-land, it comes at a cost. To reduce the performance costs related to DOM scripting, keep the following in mind:
• Minimize DOM access, and try to work as much as possible in JavaScript.
• Use local variables to store DOM references you’ll access repeatedly.
• Be careful when dealing with HTML collections because they represent the live, underlying document. Cache the collection length into a variable and use it when iterating, and make a copy of the collection into an array for heavy work on collections.
• Use faster APIs when available, such as querySelectorAll() and firstElementChild.
• Be mindful of repaints and reflows; batch style changes, manipulate the DOM tree “offline,” and cache and minimize access to layout information.
• Position absolutely during animations, and use drag and drop proxies.
• Use event delegation to minimize the number of event handlers.