Which CSS selectors or rules can significantly affect front-end layout / rendering performance in the real world?

Is it worth worrying about CSS rendering performance at all, or should we simply ignore performance where CSS is concerned and focus on writing elegant, maintainable CSS?

This question is intended to be a useful resource for front-end developers on which parts of CSS can significantly affect device performance, and which devices / browsers or rendering mechanisms are affected. It is not about how to write elegant or well-supported CSS, it is purely about performance (although I hope what is written here can feed into more general best-practice articles).

Existing evidence

Google and Mozilla have written guidelines for writing efficient CSS, and the CSSLint rule set includes:

Avoid selectors that look like regular expressions .. don't use complex equality operators to avoid performance penalties

but none of them gives any evidence (that I could find) of how much impact these rules actually have.

A css-tricks.com article on efficient CSS argues (after outlining best practices for efficiency) that we should not "sacrifice semantics or maintainability for efficient CSS these days".

A Perfection Kills blog post suggested that border-radius and box-shadow were an order of magnitude slower to render than simpler CSS rules. This mattered a great deal in Opera's engine, but was insignificant in WebKit. In addition, a Smashing Magazine CSS benchmark found that rendering time for CSS3 decoration rules was small, and significantly faster than rendering the equivalent effect with images.

Know Your Mobile tested various mobile browsers and found that they all rendered CSS3 roughly equally and negligibly fast (within about 12 ms), but it looks like the tests were run on a desktop PC, so we can't conclude anything about how handheld devices handle CSS3 in general.

There are many online articles on how to write efficient CSS. However, I have yet to find any solid evidence that poorly considered CSS actually has a significant impact on the rendering time or responsiveness of a site.

Background

[Image: "Stand back, I'm going to try science" - http://brightgreenscotland.org/wp-content/uploads/2010/09/stand-back-Im-going-to-try-science.png]

I have offered a bounty on this question to try to use the power of the SO community to create a useful, well-researched resource.

+52
performance css client-side css3
Sep 05 '12 at 10:37
6 answers

The first thing that comes to mind here: how smart is the rendering engine you are using?

That is, of course, very relevant when asking about the efficiency of CSS rendering and selector matching. For example, suppose the first rule in your CSS file is:

 .class1 { /*make elements with "class1" look fancy*/ } 

So when a very simple engine sees this (and since this is the first rule), it goes and looks at every element in your DOM and checks for the existence of class1 on each one. Better engines probably map class names to a list of DOM elements and use something like a hash table for efficient lookup. Now suppose the next rule is:

 .class1.class2 { /*make elements with both "class1" and "class2" look extra fancy*/ } 

Our example "basic engine" would again walk every element in the DOM looking for both classes. A smarter engine will compare n('class1') and n('class2'), where n(str) is the number of elements in the DOM with the class str, and start with whichever is smaller; say it is class1, it then walks only the elements that have class1, looking for those that also have class2.

In any case, modern engines are clever (much cleverer than the example above), and shiny new processors can perform millions (tens of millions) of simple operations per second. It is quite unlikely that you have millions of elements in your DOM, so the worst-case performance for any selector (O(n)) will not be too bad anyway.




Update:

To get some actual, practical, illustrative numbers, I decided to run a few tests. First of all, to get an idea of how many DOM elements we see on average in real applications, let's take a look at how many elements the web pages of some popular websites have:

Facebook: ~ 1900 items (verified on my personal homepage).
Google : ~ 340 items (checked on the main page, no search results).
Google: ~ 950 items (checked on the search results page).
Yahoo! : ~ 1400 elements (checked on the main page).
Stackoverflow: ~ 680 items (checked on the question page).
AOL: ~ 1060 elements (checked on the main page).
Wikipedia: ~ 6000 elements, of which 2420 are not spans or anchors (checked in the Wikipedia article on Glee ).
Twitter: ~ 270 elements (checked on the main page).

To summarize, we get an average of ~1,500 elements. Now it's time to do some testing. For each test, I generated 1,500 divs (nested inside some other divs for some of the tests), each with the appropriate attributes depending on the test.




Tests

The styles and elements are all generated with PHP. I have uploaded the PHP files I used and set up an index so that others can test locally: a small link.
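To make the setup concrete, here is a minimal sketch of the kind of generated stylesheet one of the plain class-selector tests might use; the .c0 ... .c1499 class names and the styled property are my own hypothetical illustration, not taken from the actual PHP files:

 /* 1,500 generated rules, one per generated div, each matched by a plain class selector */
 .c0    { background: #eee; }
 .c1    { background: #eee; }
 /* ... 1,496 more rules ... */
 .c1498 { background: #eee; }
 .c1499 { background: #eee; }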




Results:

Each test was run 5 times in each of three browsers (the average time is reported): Firefox 15.0 (A), Chrome 19.0.1084.1 (B), Internet Explorer 8 (C):

                                                                       A       B       C
 1500 class selectors (.classname)                                    35ms    100ms    35ms
 1500 class selectors, more specific (div.classname)                  36ms    110ms    37ms
 1500 class selectors, even more specific (div div.classname)         40ms    115ms    40ms
 1500 id selectors (#id)                                              35ms     99ms    35ms
 1500 id selectors, more specific (div#id)                            35ms    105ms    38ms
 1500 id selectors, even more specific (div div#id)                   40ms    110ms    39ms
 1500 class selectors, with attribute (.class[title="ttl"])           45ms    400ms  2000ms
 1500 class selectors, more complex attribute (.class[title~="ttl"])  45ms   1050ms  2200ms



Similar experiments:

Other people seem to have done similar experiments; this one also has some useful statistics: a small link.




Bottom line:

Unless you care about saving a few milliseconds when rendering (1 ms = 0.001 s), don't bother thinking about this too hard. On the other hand, it is good practice to avoid using complex selectors to select large subsets of elements, as that can make a noticeable difference (as the test results above show). All common CSS selectors are fast enough in modern browsers.

Suppose you are building a chat page and you want to style all the messages. You know that every message is in a div that has a title and is nested inside a div with the class .chatpage. It is correct to use .chatpage div[title] to select the messages, but it is also bad practice performance-wise. It is simpler, more maintainable, and more efficient to give all the messages a class and select them with that class, as in the sketch below.
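A minimal sketch of the two approaches; the .chat-message class name is my own hypothetical choice, not from the original answer:

 /* works, but relies on attribute + descendant matching for every div */
 .chatpage div[title] { color: #333; }

 /* simpler, more maintainable and cheaper: one dedicated class per message */
 .chat-message { color: #333; }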




The one-line takeaway of this answer:

Anything within the limits of "yeah, this CSS makes sense" is okay.

+48
Sep 05 '12 at 10:50

Most of the answers here focus on selector performance, as if that were the only thing that matters. I will try to cover some sprite-related trivia (spoiler alert: sprites are not always a good idea), CSS used-value performance, and the rendering cost of particular properties.

Before I get to the answer, let me get an IMO out of the way: I personally strongly disagree with the stated need for "evidence-based data". It simply makes a performance claim appear trustworthy, while in reality the field of rendering engines is heterogeneous enough to make any such statistical conclusion inaccurate to measure and impractical to adopt or follow.

As original findings quickly become outdated, I would rather front-end developers understood the underlying principles and their relative value weighed against maintainability / readability brownie points. After all, premature optimization is the root of all evil ;)




Start with selector performance:

Short, preferably single-level, specific selectors are processed faster. Explicit performance metrics are missing from the original answer, but the key point stands: at runtime, an HTML document is parsed into a DOM tree containing N elements with an average depth D, and there is a total of S CSS rules applied. To reduce the computational complexity O(N*D*S), you should:

  • Let the right-most keys match as few elements as possible - selectors are matched right-to-left for each individual rule, so if the right-most key does not match a particular element, there is no need to process the selector any further and it is discarded.

    It is generally accepted that the * selector should be avoided, but this point should be taken further. A "normal" CSS reset indeed matches most elements - when this very SO page is profiled, the reset is responsible for about 1/3 of all selector matching time, so you may prefer normalize.css (still, that only adds up to 3.5 ms - the point against premature optimization stands strong).

  • Avoid descendant selectors, as they may require up to ~D elements to be traversed. This mainly affects mismatch confirmation - for example, a positive match for .container .content may require only one step up for elements in a parent-child relationship, but the DOM tree has to be walked all the way up to html before a negative match can be confirmed (see the sketch after this list).

  • Minimize the number of DOM elements, since their styles are applied individually (worth noting, this is offset by browser logic such as reference caching and sharing styles between identical elements - for example, among identically styled siblings).

  • Remove unused rules, since the browser has to evaluate their applicability for every element it renders. Suffice it to say, the fastest rule is the one that isn't there :)

These points will lead to quantifiable (though, depending on the page, not necessarily perceivable) improvements from the rendering-engine performance standpoint; however, there are always additional factors such as traffic overhead, DOM parsing, etc.
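As a hedged illustration of the right-to-left matching and descendant-selector points above (the selectors and class names are hypothetical, chosen only for the example):

 /* the right-most key "span" matches a huge number of elements, and every
    non-matching span still forces a walk up its ancestor chain */
 #nav ul li a span { font-weight: bold; }

 /* a single, specific right-most key: most elements are rejected immediately */
 .nav-label { font-weight: bold; }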




Next, CSS3 properties performance:

CSS3 brought us (among other things) rounded corners, background gradients, shadows and variations thereof - and with them a whole load of problems. Think about it: by definition, a pre-rendered image performs better than a set of CSS3 rules that has to be rendered first. From the WebKit wiki:

Gradients, shadows, and other decorations in CSS should only be used when necessary (for example, when the shape is dynamic based on the content) - otherwise, static images are always faster.

As if that weren't bad enough, gradients and the like may have to be recalculated on every repaint / reflow event (more details below). Keep this in mind until the majority of your users can be expected to view a CSS3-heavy page without noticeable lag.
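A minimal sketch of the trade-off described above (the class names and image path are hypothetical):

 /* drawn by the engine and potentially recalculated on repaint/reflow;
    appropriate when the shape depends on dynamic content */
 .panel { background: linear-gradient(#fff, #ddd); border-radius: 6px; }

 /* a pre-rendered static image: cheaper to paint, at the cost of an extra
    request and less flexibility if the element's size changes */
 .panel-static { background: url("panel-bg.png") repeat-x; }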




Next, sprite performance:

Avoid tall and wide sprites, even if their transfer footprint is relatively small. It is commonly forgotten that a rendering engine cannot work with gif / jpg / png directly, and at runtime all graphical assets are kept in memory as uncompressed bitmaps. At least that is easy to calculate: this sprite's width times height times four bytes per pixel (RGBA) is 238*1073*4 ≅ 1MB. Use it on a few elements across several tabs open at the same time, and it quickly adds up to a significant amount of memory.

A rather extreme case of this was spotted on mozilla webdev, but it is not at all unexpected when questionable practices such as diagonal sprites are involved.

An alternative to consider is individual base64-encoded images embedded directly in the CSS.
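For example (an illustrative, truncated data URI - not a working image):

 /* the icon travels inside the stylesheet itself: no extra HTTP request and
    no oversized sprite bitmap held in memory */
 .icon-ok {
   background: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...") no-repeat;
 }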




Next, repaints and reflows:

It is a misconception that a reflow can only be triggered by JS DOM manipulation - in fact, any application of layout-affecting style will trigger one, affecting the target element, its children, subsequent elements, and so on. The only way to prevent unnecessary iterations is to try to avoid rendering dependencies. A straightforward example of this is rendering tables:

Tables often require multiple passes before the layout is completely established, because they are one of the rare cases where elements can affect the display of other elements that came before them in the DOM. Imagine a cell at the end of the table with very wide content that causes the column to be completely resized. This is why tables are not rendered progressively in all browsers.
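One way to limit this particular dependency, assuming fixed column widths are acceptable, is the standard table-layout property; this is my own illustrative sketch, not something from the quoted article:

 /* with table-layout: fixed, column widths come from explicit widths or the
    first row rather than from the widest cell content, so the table can be
    laid out in a single pass and rendered progressively */
 table.data {
   table-layout: fixed;
   width: 100%;
 }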




I will add edits if I recall anything important that has been missed. Some links to finish with:

http://perfectionkills.com/profiling-css-for-fun-and-profit-optimization-notes/

http://jacwright.com/476/runtime-performance-with-css3-vs-images/

https://developers.google.com/speed/docs/best-practices/payload

https://trac.webkit.org/wiki/QtWebKitGraphics

https://blog.mozilla.org/webdev/2009/06/22/use-sprites-wisely/

http://dev.opera.com/articles/view/efficient-javascript/

+11
Sep 15 '12 at 8:24

Although it is true that

computers were slower 10 years ago.

You also have a much wider range of devices accessing your site today. And while desktops / laptops have come on in leaps and bounds, the devices in the mid-range and low-end smartphone market are in many cases not much more powerful than what we had in desktops ten years ago.

But having said that, CSS selector speed is probably near the bottom of the list of things you need to worry about in terms of providing a good experience on as wide a range of devices as possible.

To expand on that, I was unable to find specific information about more modern browsers or handheld devices struggling with inefficient CSS selectors, but I was able to find the following:

+4
Sep 05 '12 at 11:08

For such a large bounty, I am willing to risk a null answer: there are no standard CSS selectors that cause appreciable slowdowns in rendering, and (in this day of fast computers and rapid browser iteration) any that are found are quickly fixed by browser makers. Even in mobile browsers there is no problem, unless a careless developer is willing to use non-standard jQuery selectors. Those are flagged as risky by the jQuery developers and can indeed be problematic.

In this case, the lack of evidence is evidence of the lack of a problem. So, use semantic markup (especially OOCSS), and report any slowdowns you find when using standard CSS selectors in obscure browsers.

People from the future: CSS performance issues in 2012 were already a thing of the past.

+4
Sep 13 '12 at 19:12

CSS has little to do with making a site faster; it should be the last thing you look at when you look at performance. Write your CSS in whatever way suits you, compile it, and then put it in your <head>. This may be crude, but there are plenty of other things to look for when hunting browser performance. If you work at a digital agency, you won't get paid for spending an extra hour to save 1 ms of load time.

As I commented, use PageSpeed for Chrome; it is a Google tool that analyzes a website against 27 parameters, and CSS is one of them.

My point only concerns the fact that about 99% of web users should be able to open the website and see it correctly, even people with IE7 and the like. Is it worth shutting roughly 10% of them out by using CSS3, if all it turns out to buy you is an extra 1-10 ms of performance?

Most people have at least a 1 Mbit / 512 kbit connection or better, and a heavy site takes about 3 seconds to load - against which you might save 10 ms with the CSS?

And when it comes to mobile devices, you should build dedicated mobile-only sites, so that when a device with a screen narrower than a given width visits, it is served the separate site.

Please comment below, this is my perspective and my personal experience with web development.

+1
Sep 11 '12 at 11:19

Not strictly about the code itself, but using <link> rather than @import to include your stylesheets gives much better performance.

'Don't use @import' via stevesouders.com

The article contains numerous speed-test examples of each type, as well as of combining one type with the other (e.g. a CSS file included via <link> that also contains @import for another CSS file).
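A minimal sketch of the pattern the article warns against (the file names are hypothetical); two <link rel="stylesheet"> elements in the HTML head would let both files download in parallel instead:

 /* main.css, itself referenced from the page via <link> */
 @import url("extra.css");  /* extra.css cannot start downloading until
                               main.css has been fetched and parsed,
                               serializing the two requests */
 body { margin: 0; }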

0
Sep 14 '12 at 13:38


